[Mesa-dev] [PATCH 8/8] panfrost: Add backend targeting the DRM driver

Tomeu Vizoso tomeu.vizoso at collabora.com
Tue Mar 5 07:14:28 UTC 2019


On 3/5/19 3:29 AM, Dave Airlie wrote:
> On Tue, 5 Mar 2019 at 12:20, Kristian Høgsberg <hoegsberg at gmail.com> wrote:
>>
>> On Mon, Mar 4, 2019 at 6:11 PM Alyssa Rosenzweig <alyssa at rosenzweig.io> wrote:
>>>
>>>> Why aren't we using regular dma-buf fences here? The submit ioctl
>>>> should be able to take a number of in fences to wait on and return an
>>>> out fence if requested.
>>>
>>> Ah-ha, that sounds like the "proper" approach for mainline. Much of this
>>> was (incorrectly) inherited from the Arm driver. Thank you for the
>>> pointer.
>>
>> I'm not sure - I mean, the submit should take in/out fences, but the
>> atom mechanism here sounds more like it's for declaring the
>> dependencies between multiple batches in a renderpass/frame to allow
>> the kernel to schedule them? The sync fd may be a little too heavy-handed
>> for that, and if you want to express that kind of dependency to
>> allow the kernel to reschedule, maybe we need both?
> 
> You should more likely be using syncobjects, not fences.

Yeah, so the dependency is currently expressed by the ID of the atom it 
depends on. This is needed in the current approach because at submit time 
we cannot have a fence yet for the dependency if both atoms are in the 
same submit.
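
For reference, a rough sketch of how the dependency is expressed in the
submit arguments today (field names made up here for illustration, not
the actual UAPI):

struct sketch_submit_atom {
        __u64 jc;               /* GPU address of the job chain */
        __u32 atom_nr;          /* caller-chosen atom ID */
        __u32 dep_atom_nr;      /* atom ID this atom waits on, 0 = none */
};

dep_atom_nr can refer to another atom in the same ioctl, which is why
there is no fence available to wait on at submit time.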

Alyssa: do you see any problem if we change to submitting only one atom 
per ioctl?

Then we would get a syncobj for the first atom that we could pass as an 
in-fence when submitting any dependent atoms in separate ioctls.
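
Something like this, sketched with libdrm syncobjs (panfrost_submit_one()
is a placeholder for whatever the final submit wrapper ends up being, not
an existing function):

#include <stdint.h>
#include <xf86drm.h>

/* Placeholder for the eventual one-atom submit ioctl wrapper. */
static int panfrost_submit_one(int fd, uint64_t jc,
                               const uint32_t *in_syncs,
                               uint32_t in_sync_count,
                               uint32_t out_sync);

static void submit_dependent_atoms(int fd, uint64_t jc_first,
                                   uint64_t jc_second)
{
        uint32_t syncobj;

        drmSyncobjCreate(fd, 0, &syncobj);

        /* First atom: no in-fences, signal the syncobj on completion. */
        panfrost_submit_one(fd, jc_first, NULL, 0, syncobj);

        /* Second atom: wait on the first atom's syncobj before running. */
        panfrost_submit_one(fd, jc_second, &syncobj, 1, 0);

        drmSyncobjDestroy(fd, syncobj);
}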

> You can convert syncobjs to fences, but fences consume an fd which you
> only really want if inter-device.

I guess syncobj handles are akin to GEM handles, and fences to dma-buf 
fds, from the userspace POV?
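
i.e. something like this through libdrm, assuming the existing syncobj
helpers are the right tool here: the syncobj handle stays local to the
DRM fd like a GEM handle, and only becomes an fd (sync_file) when it has
to cross a device or process boundary, like exporting a BO as a dma-buf.

#include <stdint.h>
#include <xf86drm.h>

static int export_fence_fd(int drm_fd, uint32_t syncobj)
{
        int sync_file_fd = -1;

        /* Export the syncobj's current fence as a sync_file fd. */
        if (drmSyncobjExportSyncFile(drm_fd, syncobj, &sync_file_fd))
                return -1;

        return sync_file_fd;   /* can be passed to another driver or polled */
}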

Thanks,

Tomeu

