[Nouveau] [PATCH 09/17] drm/radeon: use common fence implementation for fences

Daniel Vetter daniel.vetter at ffwll.ch
Wed Jul 23 02:39:13 PDT 2014


On Wed, Jul 23, 2014 at 11:36 AM, Christian König
<christian.koenig at amd.com> wrote:
> On 23.07.2014 11:30, Daniel Vetter wrote:
>
>> On Wed, Jul 23, 2014 at 11:27 AM, Christian König
>> <christian.koenig at amd.com> wrote:
>>>
>>> You submit a job to the hardware and then block the job waiting for
>>> radeon to finish? Well then this would indeed require a hardware
>>> reset, but wouldn't that make the whole problem even worse?
>>>
>>>
>>> I mean currently we block one userspace process while it waits for
>>> other hardware to finish with a buffer, but what you are describing
>>> here stalls the whole hardware waiting for other hardware, which in
>>> the end blocks every userspace process accessing the hardware.
>>
>> There is nothing new here with prime - if one context hangs the gpu it
>> blocks everyone else on i915.
>>
>>> Talking about alternative approaches, wouldn't it be simpler to just
>>> offload the waiting to a different kernel or userspace thread?
>>
>> Well this is exactly what we'll do once we have the scheduler. But
>> this is an orthogonal issue imo.
>
>
> Mhm, could we have the scheduler first?
>
> Cause that sounds like reducing the necessary fence interface to just a
> fence->wait function.
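
Just for comparison, a fence interface cut down to ->wait alone would
look something like the sketch below. The names here (simple_fence
etc.) are made up for illustration, this isn't actual radeon or common
fence code:

#include <linux/completion.h>
#include <linux/errno.h>

struct simple_fence {
	struct completion done;	/* signaled from the driver's irq handler */
};

/* the one and only entry point: block until the fence signals */
static long simple_fence_wait(struct simple_fence *f, unsigned long timeout)
{
	long r = wait_for_completion_interruptible_timeout(&f->done,
							   timeout);

	if (r < 0)
		return r;		/* interrupted by a signal */
	return r ? 0 : -ETIMEDOUT;	/* 0 jiffies left means we timed out */
}

That's enough for blocking waits, but it gives the waiter no way to be
notified asynchronously, which is exactly the problem here: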

The scheduler needs to keep track of a lot of fences, so I think we'll
have to register callbacks, not just use a simple wait function. We
must keep track of all the non-i915 fences for all outstanding
batches. Also, the scheduler doesn't eliminate the hw queue, it only
keeps it much shorter so that we can sneak in higher-priority things.
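
To make that concrete, here's roughly what I mean, written against the
fence_add_callback() interface from this series. The sched_* structs
and functions are made up for illustration, this isn't actual i915
scheduler code:

#include <linux/atomic.h>
#include <linux/fence.h>	/* struct fence, fence_add_callback() */
#include <linux/kernel.h>
#include <linux/workqueue.h>

/* hypothetical per-batch bookkeeping */
struct sched_batch {
	atomic_t pending;		/* fences this batch still waits on */
	struct work_struct run_work;	/* submits the batch when ready */
};

struct sched_dep {
	struct fence_cb cb;		/* node handed to fence_add_callback() */
	struct sched_batch *batch;
};

/* runs from the signaling driver's context, possibly in irq: don't
 * block here, just note that the dependency is gone and kick the
 * worker */
static void sched_dep_signaled(struct fence *f, struct fence_cb *cb)
{
	struct sched_dep *dep = container_of(cb, struct sched_dep, cb);

	if (atomic_dec_and_test(&dep->batch->pending))
		schedule_work(&dep->batch->run_work);
}

static void sched_track_fence(struct sched_batch *batch,
			      struct sched_dep *dep, struct fence *f)
{
	dep->batch = batch;
	atomic_inc(&batch->pending);
	/* -ENOENT means the fence already signaled, so the callback
	 * won't be invoked and we complete the dependency ourselves */
	if (fence_add_callback(f, &dep->cb, sched_dep_signaled) == -ENOENT)
		sched_dep_signaled(f, &dep->cb);
}

With a plain ->wait() we'd have to burn a blocked thread per batch
instead; callbacks keep the scheduler event-driven.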

Really, scheduler or not is orthogonal.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

