[PATCH 0/9] drm/syncobj: Add full-featured wait support (v2)

Christian König christian.koenig at amd.com
Mon Aug 14 07:36:14 UTC 2017


Am 14.08.2017 um 01:14 schrieb Jason Ekstrand:
> On August 13, 2017 8:52:21 AM Christian König 
> <christian.koenig at amd.com> wrote:
>
>> Am 13.08.2017 um 17:26 schrieb Jason Ekstrand:
>>> On August 13, 2017 6:19:53 AM Christian König
>>> <christian.koenig at amd.com> wrote:
>>>
>>>> Patches #1-#4 are Acked-by: Christian König 
>>>> <christian.koenig at amd.com>.
>>>>
>>>> Patch #5: NAK, that will break radeon.
>>>>
>>>> On radeon we need the non-default wait, otherwise we can run into a
>>>> situation where we never signal a fence.
>>>>
>>>> The general question is why do you need this?
>>>
>>> Because i915 sets a non-default wait function, so calling wait_any
>>> with fences from i915 immediately bails with -EINVAL. This makes it
>>> work even with non-default waits.
>>
>> Ok well, let me refine the question: Why does i915 set a non-default
>> wait function?
>
> I have no idea.

Can you figure that out? I'm not completely against removing that
limitation, but it would be a lot cleaner if we could just fix i915 not
to set a non-default wait function.
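
For context, the limitation in question is roughly (quoting from memory,
not verbatim) this check in dma_fence_wait_any_timeout, which makes the
call bail out as soon as one of the fences uses a custom wait callback:

    for (i = 0; i < count; ++i) {
        if (fences[i]->ops->wait != dma_fence_default_wait)
            return -EINVAL; /* the real code unwinds its callbacks first */
    }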

>> In radeon we have it because we need to handle 10+ years of different
>> hardware generations, each with a bunch of separate bugs in their fence
>> handling (and some not solved even today).
>
> So how does wait_any returning -EINVAL for non-default waits help radeon?

When the wait function detects a problem, it reports -EDEADLK to the
caller to signal that the hardware is stuck.

The caller then propagates the error up the call chain until it reaches
the IOCTL handler, where the error is handled with a hardware reset. And
yeah, I know perfectly well that this design sucks badly.
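
Just to sketch the shape of it (the names here are made up; the real
radeon code is a lot more involved):

    /* illustrative sketch only, not the actual radeon implementation */
    static signed long my_fence_wait(struct dma_fence *fence, bool intr,
                                     signed long timeout)
    {
            /* poll/sleep until the fence signals or the timeout expires... */

            if (my_lockup_detected(fence))  /* made-up helper */
                    return -EDEADLK;        /* the hardware is stuck */

            return timeout;
    }

    /* ...and at the top of the call chain, in the IOCTL handler: */
    r = dma_fence_wait_timeout(fence, true, timeout);
    if (r == -EDEADLK)
            r = my_gpu_reset(rdev);         /* recover with a full reset */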

The point is we should try to prevent the wait-for-any function from
ever being used together with a radeon fence.

>
>
>>>> Patch #6: Yes, please. Patch is Reviewed-by: Christian König
>>>> <christian.koenig at amd.com>.
>>>>
>>>> Patch #7: Already gave my rb on the patch Chris sent out earlier.
>>>>
>>>> Patch #8: NAK to the whole approach.
>>>>
>>>> IIRC we discussed a very similar thing during the initial fence 
>>>> bringup
>>>> and also during the fence_array development.
>>>>
>>>> The problem is that you can easily build ring dependencies, and thus
>>>> deadlocks, with it.
>>>>
>>>> I would really prefer an approach which is completely contained inside
>>>> the syncobj code base.
>>>
>>> Are you open to the approach of internally making a proxy, so long as
>>> all the proxy code is inside syncobj?
>> Yes, that would be a start.
>>
>> In general, if possible, I would rather avoid the whole proxy/callback
>> handling altogether, but that possibly only works for wait-for-any if
>> the waitqueue is global, and that wouldn't be ideal either.
>
> I'm happy to get rid of the proxies.  They did work nicely, but I'm not
> really attached to them.
>
>>> I'd also be happy to go back to the original approach with v3 of the
>>> last patch.
>> v3 looked like it should work as well; I would just drop the abuse of
>> the fence callback structure for the signaling.
>>
>> Ideally we would finally come up with an interface to wait on multiple
>> waitqueues at the same time, but that probably goes a bit too far.
>>
>> For now, just use a single linked list to store all processes waiting
>> for a fence to arrive, or something like this.
>
> I don't really know what you're suggesting.  Patch v3 has a single 
> waitqueue per process.  Are you suggesting one per fence?

Yeah, more or less. What I'm suggesting is to use one wait_queue_head_t 
per drm_syncobj.

See, a wait_queue is a callback mechanism anyway, so you are wrapping a
callback mechanism inside another callback mechanism, and that doesn't
really make much sense.
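
Something like this minimal sketch is what I have in mind (locking, RCU
handling and refcounting omitted; treat the new field and the bodies as
illustrative only):

    /* sketch only -- locking, RCU handling and error paths omitted */
    struct drm_syncobj {
            struct kref refcount;
            struct dma_fence __rcu *fence;
            wait_queue_head_t wq;   /* new: one waitqueue per syncobj */
            struct file *file;
    };

    void drm_syncobj_replace_fence(struct drm_syncobj *syncobj,
                                   struct dma_fence *fence)
    {
            /* ...swap in the new fence as before... */
            if (fence)
                    wake_up_all(&syncobj->wq); /* wake WAIT_FOR_SUBMIT waiters */
    }

    /* a WAIT_FOR_SUBMIT waiter then simply does something like */
    ret = wait_event_interruptible_timeout(syncobj->wq,
                                           rcu_access_pointer(syncobj->fence),
                                           timeout);

That way the syncobj itself is the only thing a waiter has to watch; no
proxy fence and no extra callback structure needed.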

The problem is that we don't have a wait_event_* variant which can wait 
for multiple events. Somebody should really add something like that :)

Regards,
Christian.

>
> --Jason
>
>> Regards,
>> Christian.
>>
>>>
>>>> Regards,
>>>> Christian.
>>>>
>>>> Am 12.08.2017 um 00:39 schrieb Jason Ekstrand:
>>>>> This series does the same thing as my earlier series in that it adds
>>>>> a sync
>>>>> object wait interface complete with WAIT_FOR_SUBMIT flag. While the
>>>>> uapi
>>>>> remains unchanged, the guts look a bit different.  Instead of 
>>>>> adding a
>>>>> callback mechanism to drm_syncobj that fired whenever 
>>>>> replace_fence was
>>>>> called, it's now using proxy fences.  The drm_syncobj_fence_get still
>>>>> returns NULL whenever the sync object is in an unsubmitted state but
>>>>> there
>>>>> is a new drm_syncobj_fence_proxy_get which returns either the real
>>>>> fence or
>>>>> a proxy fence that will be triggered the next time replace_fence is
>>>>> called
>>>>> with a non-NULL replacement.  This does make both
>>>>> drm_syncobj_fence_get and
>>>>> drm_syncobj_replace_fence a tiny bit more expensive, but it lets us
>>>>> do it
>>>>> all without locking.
>>>>>
>>>>> This series can be found as a branch here:
>>>>>
>>>>> https://cgit.freedesktop.org/~jekstrand/linux/log/?h=drm-syncobj-wait-submit-v4 
>>>>>
>>>>>
>>>>>
>>>>> IGT tests for DRM_IOCTL_SYNCOBJ_WAIT and DRM_IOCTL_SYNCOBJ_RESET 
>>>>> can be
>>>>> found on patchwork here:
>>>>>
>>>>> https://patchwork.freedesktop.org/series/28666/
>>>>>
>>>>> Patches to the Intel Vulkan driver to implement
>>>>> VK_KHR_external_fence on
>>>>> top of this kernel interface can be found here:
>>>>>
>>>>> https://cgit.freedesktop.org/~jekstrand/mesa/log/?h=wip/anv-external-fence 
>>>>>
>>>>>
>>>>>
>>>>> Cc: Dave Airlie <airlied at redhat.com>
>>>>> Cc: Chris Wilson <chris at chris-wilson.co.uk>
>>>>> Cc: Christian König <christian.koenig at amd.com>
>>>>>
>>>>> Chris Wilson (2):
>>>>>    dma-buf/dma-fence: Signal all callbacks from dma_fence_release()
>>>>>    dma-buf/dma-fence: Add a mechanism for proxy fences
>>>>>
>>>>> Dave Airlie (1):
>>>>>    drm/syncobj: add sync obj wait interface. (v8)
>>>>>
>>>>> Jason Ekstrand (6):
>>>>>    drm/syncobj: Rename fence_get to find_fence
>>>>>    drm/syncobj: Add a race-free drm_syncobj_fence_get helper
>>>>>    i915: Add support for drm syncobjs
>>>>>    dma-buf/dma-fence: Allow wait_any_timeout without default_wait 
>>>>> (v2)
>>>>>    drm/syncobj: Add a reset ioctl
>>>>>    drm/syncobj: Allow wait for submit and signal behavior (v4)
>>>>>
>>>>>   drivers/dma-buf/Makefile                   |   4 +-
>>>>>   drivers/dma-buf/dma-fence-proxy.c          | 186 +++++++++++++++++++
>>>>>   drivers/dma-buf/dma-fence.c                |  34 ++--
>>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c     |   2 +-
>>>>>   drivers/gpu/drm/drm_internal.h             |   4 +
>>>>>   drivers/gpu/drm/drm_ioctl.c                |   4 +
>>>>>   drivers/gpu/drm/drm_syncobj.c              | 275 +++++++++++++++++++++++++++--
>>>>>   drivers/gpu/drm/i915/i915_drv.c            |   3 +-
>>>>>   drivers/gpu/drm/i915/i915_gem_execbuffer.c | 146 ++++++++++++++-
>>>>>   include/drm/drm_syncobj.h                  |  15 +-
>>>>>   include/linux/dma-fence-proxy.h            |  25 +++
>>>>>   include/uapi/drm/drm.h                     |  19 ++
>>>>>   include/uapi/drm/i915_drm.h                |  30 +++-
>>>>>   13 files changed, 710 insertions(+), 37 deletions(-)
>>>>>   create mode 100644 drivers/dma-buf/dma-fence-proxy.c
>>>>>   create mode 100644 include/linux/dma-fence-proxy.h
>>>>>
>>>>
>>>
>>>
>>
>
>


