[RFC 0/6] drm/fences: add in-fences to DRM

Inki Dae inki.dae at samsung.com
Thu Mar 24 23:49:21 UTC 2016



On 2016-03-25 00:40, Rob Clark wrote:
> On Thu, Mar 24, 2016 at 4:18 AM, Inki Dae <inki.dae at samsung.com> wrote:
>> Hi,
>>
>> On 2016-03-24 03:47, Gustavo Padovan wrote:
>>> From: Gustavo Padovan <gustavo.padovan at collabora.co.uk>
>>>
>>> Hi,
>>>
>>> This is a first proposal to discuss the addition of in-fences support
>>> to DRM. It adds a new struct to fence.c to abstract the use of sync_file
>>> in DRM drivers. The new struct fence_collection contains an array with all
>>> fences that an atomic commit needs to wait on.
>>
>> As I mentioned already like below,
>> http://www.spinics.net/lists/dri-devel/msg103225.html
>>
>> I don't see why an Android-specific mechanism is being propagated into Linux DRM. Mainline Linux already has implicit sync interfaces for DMA devices, the dma fence, which registers a fence object with a DMABUF through a reservation object when the dmabuf object is created. The Android sync driver, by contrast, creates a new file for each sync object, which is a different point of view.
>>
>> Is there anyone who can explain why an Android-specific mechanism is being spread into Linux DRM? Was there any consensus to use the Android sync driver - which uses explicit sync interfaces - as the Linux standard?
>>
> 
> btw, there is already plane_state->fence .. which I don't think has
> any users yet, but I start to use it in my patchset that converts
> drm/msm to 'struct fence'

Yes, Exynos has also started using it.

> 
> That said, we do need syncpt as the way to expose fences to userspace
> for explicit synchronization, but I'm not entirely sure that the

That is definitely a different case. This proposal adds new user-space interfaces to expose fences to user-space, whereas the implicit interfaces stay embedded inside the drivers.
So let me ask a question: why is exposing fences to user-space required? To provide an easy-to-debug solution for the rendering pipeline? To provide a fence-merge feature?

And if we really do need to expose fences to user-space, and there is a real user for it, then we already have good candidates: DMA-BUF-IOCTL-SYNC, or perhaps the fcntl system call, because we already share DMA buffers between CPU <-> DMA and DMA <-> DMA using DMABUF.
As for DMA-BUF-IOCTL-SYNC, I think you remember that is what I tried a long time ago, since you were there. Several years ago I tried to couple exposing fences to user-space with cache operations, although at that time I misunderstood the fence mechanism. That attempt was also aimed at potential users.

Anyway, my opinion is that we could expose the fences hidden behind DMABUF to user-space through interfaces that already exist around us. For this, the Chromium solution below could also give us some help:
https://chromium.googlesource.com/chromiumos/third_party/kernel/+/chromeos-3.18/drivers/gpu/drm/drm_sync_helper.c

And /drivers/dma-buf/ contains DMABUF-centric modules, so it looks strange for Android's sync_file module to be placed in that directory - the Android sync driver doesn't really use DMABUF, but instead creates a new file for its sync fence.
For implicit sync interfaces for DMA devices we use DMABUF, but for explicit sync interfaces for user-space we use sync_file, not DMABUF? That doesn't make sense.

I really love Android, but I feel as if we are trying to give Android a seat somehow.

Thanks,
Inki Dae

> various drivers ever need to see that (vs just struct fence), at least
> on the kms side of things.
> 
> BR,
> -R
> 
> 
>> Thanks,
>> Inki Dae
>>
>>>
>>> /**
>>>  * struct fence_collection - aggregate fences together
>>>  * @num_fences: number of fences in the collection.
>>>  * @user_data: user data.
>>>  * @func: user callback to put user data.
>>>  * @fences: array of @num_fences fences.
>>>  */
>>> struct fence_collection {
>>>        int num_fences;
>>>        void *user_data;
>>>        collection_put_func_t func;
>>>        struct fence *fences[];
>>> };
>>>
>>>
>>> The fence_collection is allocated and filled by sync_file_fences_get() and
>>> atomic_commit helpers can use fence_collection_wait() to wait for the fences
>>> to signal.
>>>
>>> These patches depends on the sync ABI rework:
>>>
>>> https://www.spinics.net/lists/dri-devel/msg102795.html
>>>
>>> and the patch to de-stage the sync framework:
>>>
>>> https://www.spinics.net/lists/dri-devel/msg102799.html
>>>
>>>
>>> I also hacked together some sync support into modetest for testing:
>>>
>>> https://git.collabora.com/cgit/user/padovan/libdrm.git/log/?h=atomic
>>>
>>>
>>>       Gustavo
>>>
>>>
>>> Gustavo Padovan (6):
>>>   drm/fence: add FENCE_FD property to planes
>>>   dma-buf/fence: add struct fence_collection
>>>   dma-buf/sync_file: add sync_file_fences_get()
>>>   dma-buf/fence: add fence_collection_put()
>>>   dma-buf/fence: add fence_collection_wait()
>>>   drm/fence: support fence_collection on atomic commit
>>>
>>>  drivers/dma-buf/fence.c             | 33 +++++++++++++++++++++++++++++++++
>>>  drivers/dma-buf/sync_file.c         | 36 ++++++++++++++++++++++++++++++++++++
>>>  drivers/gpu/drm/drm_atomic.c        | 13 +++++++++++++
>>>  drivers/gpu/drm/drm_atomic_helper.c | 10 ++++++----
>>>  drivers/gpu/drm/drm_crtc.c          |  7 +++++++
>>>  include/drm/drm_crtc.h              |  5 ++++-
>>>  include/linux/fence.h               | 19 +++++++++++++++++++
>>>  include/linux/sync_file.h           |  8 ++++++++
>>>  8 files changed, 126 insertions(+), 5 deletions(-)
>>>
>> _______________________________________________
>> dri-devel mailing list
>> dri-devel at lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
> 
> 

