[Intel-gfx] [PATCH] drm/i915: Use rcu instead of stop_machine

Daniel Vetter daniel at ffwll.ch
Fri Oct 6 08:49:12 UTC 2017


On Fri, Oct 6, 2017 at 10:30 AM, Tvrtko Ursulin
<tvrtko.ursulin at linux.intel.com> wrote:
>
> On 05/10/2017 17:24, Daniel Vetter wrote:
>>
>> On Thu, Oct 05, 2017 at 03:55:19PM +0100, Tvrtko Ursulin wrote:
>>>
>>>
>>> On 05/10/2017 15:09, Daniel Vetter wrote:
>>>>
>>>> stop_machine is not really a locking primitive we should use, except
>>>> when the hw folks tell us the hw is broken and that's the only way to
>>>> work around it.
>>>>
>>>> This patch here is just a suggestion for how to fix it up, possible
>>>> changes needed to make it actually work:
>>>>
>>>> - Set the nop_submit_request first for _all_ engines, before
>>>>     proceeding.
>>>>
>>>> - Make sure engine->cancel_requests copes with the possibility that
>>>>     not all tests have consistently used the new or old version. I don't
>>>>     think this is a problem, since the same can really happen with the
>>>>     stop_machine() locking - stop_machine also doesn't give you any kind
>>>>     of global ordering against other cpu threads, it just makes them
>>>>     stop.
>>>>
>>>> This patch tries to address the locking snafu from
>>>>
>>>> commit 20e4933c478a1ca694b38fa4ac44d99e659941f5
>>>> Author: Chris Wilson <chris at chris-wilson.co.uk>
>>>> Date:   Tue Nov 22 14:41:21 2016 +0000
>>>>
>>>>       drm/i915: Stop the machine as we install the wedged submit_request handler
>>>>
>>>> Chris said part of the reasoning for going with stop_machine() was that
>>>> it has no overhead on the fast path. But these callbacks use irqsave
>>>> spinlocks and do a bunch of MMIO, and rcu_read_lock is _real_ fast.
>>>>
>>>> Cc: Chris Wilson <chris at chris-wilson.co.uk>
>>>> Cc: Mika Kuoppala <mika.kuoppala at intel.com>
>>>> Signed-off-by: Daniel Vetter <daniel.vetter at intel.com>
>>>> ---
>>>>    drivers/gpu/drm/i915/i915_gem.c                   | 18 +++++-------------
>>>>    drivers/gpu/drm/i915/i915_gem_request.c           |  2 ++
>>>>    drivers/gpu/drm/i915/selftests/i915_gem_request.c |  2 ++
>>>>    3 files changed, 9 insertions(+), 13 deletions(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
>>>> index ab8c6946fea4..0b260e576b4b 100644
>>>> --- a/drivers/gpu/drm/i915/i915_gem.c
>>>> +++ b/drivers/gpu/drm/i915/i915_gem.c
>>>> @@ -3022,13 +3022,13 @@ static void nop_submit_request(struct drm_i915_gem_request *request)
>>>>    static void engine_set_wedged(struct intel_engine_cs *engine)
>>>>    {
>>>> +       engine->submit_request = nop_submit_request;
>>>
>>>
>>> Should this be rcu_assign_pointer?
>>
>>
>> Those provide additional barriers, needed when you change/allocate the
>> stuff you're pointing to. We point to immutable functions, so shouldn't be
>> necessary (and would be confusing imo).
>
>
> Ah ok. Any barriers then? Or synchronize_rcu implies them all?

Yup, at least for simple load/store.

rcu_dereference/rcu_assign_pointer need additional barriers because
compilers (and some cpus like alpha) might first load the pointer,
then load something through that pointer, then re-load the pointer,
and that could mean you see the memory pointed at in a state before
rcu_assign_pointer has been called.

On x86 it's just READ_ONCE/WRITE_ONCE, but alpha needs actual hw
barriers on top for this.
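
For illustration, the classic publish/subscribe pattern where those barriers
matter looks roughly like this - struct foo, global_foo, publish_foo and
read_foo are all made-up names, nothing from the patch:

#include <linux/rcupdate.h>
#include <linux/slab.h>

struct foo {
        int cfg;
};

static struct foo __rcu *global_foo;

static void publish_foo(int cfg)
{
        struct foo *nf = kmalloc(sizeof(*nf), GFP_KERNEL);

        if (!nf)
                return;
        nf->cfg = cfg;
        /* rcu_assign_pointer orders the store to nf->cfg before the store to
         * global_foo, so readers never see a half-initialised object. (A real
         * updater would also free the old object after a grace period.)
         */
        rcu_assign_pointer(global_foo, nf);
}

static int read_foo(void)
{
        struct foo *f;
        int ret = -1;

        rcu_read_lock();
        /* rcu_dereference is READ_ONCE plus, on alpha, the read barrier that
         * stops the contents being fetched through a stale copy of the pointer.
         */
        f = rcu_dereference(global_foo);
        if (f)
                ret = f->cfg;
        rcu_read_unlock();
        return ret;
}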

>>>> +
>>>>         /* We need to be sure that no thread is running the old callback as
>>>>          * we install the nop handler (otherwise we would submit a request
>>>> -        * to hardware that will never complete). In order to prevent this
>>>> -        * race, we wait until the machine is idle before making the swap
>>>> -        * (using stop_machine()).
>>>> +        * to hardware that will never complete).
>>>>          */
>>>> -       engine->submit_request = nop_submit_request;
>>>> +       synchronize_rcu();
>>>
>>>
>>> Consumers of this run with irqs disabled or in softirq context. Does this
>>> mean we would need synchronize_rcu_bh? Would either guarantee that all
>>> tasklets and irq handlers have exited?
>>
>>
>> Oh ... tbh I didn't even dig that deep (much less run this stuff). This
>> really is an RFC so people with real clue could say whether it has a
>> chance of working or not.
>>
>> Looking at the rcu docs we don't want the _bh variants, since rcu_read_lock
>> should be safe even in hardirq context. _bh and _sched otoh require that all
>> critical sections are either in bottom halves or hardirq context, since
>> they treat scheduling of those as a grace period.
>
>
> rcu_read_unlock might schedule (via preempt_enable) so I don't think we can
> use them from the fence callbacks.

Only when it's the outermost preempt_enable, and hard/softirq are
special kinds of preempt_disable/enable. See the implementation.

> And _bh is indeed only for softirq while we need hard and soft. So I am not
> sure which one we could use.

normal/_bh/_sched isn't about where you have your read side critical
section, but about what other stuff can run inside the read side critical
section, and hence what counts as a quiescent state (rough sketch after the
list):

normal -> no preempt -> a task switch is an rcu quiescent state
_bh -> no (other) softirq -> softirq completion is a quiescent state
(plus anything where softirqs aren't disabled, which can be used to
expedite the grace period)
_sched -> like _bh, but for hardirq.
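
Very rough sketch of how the read side marker matches the flavour the
updater waits on (do_lookup is a made-up reader):

#include <linux/rcupdate.h>

static void reader_examples(void)
{
        /* normal: a context switch is the quiescent state */
        rcu_read_lock();
        do_lookup();
        rcu_read_unlock();       /* updater waits with synchronize_rcu() */

        /* _bh: softirq-disabled regions are the read side */
        rcu_read_lock_bh();
        do_lookup();
        rcu_read_unlock_bh();    /* updater waits with synchronize_rcu_bh() */

        /* _sched: preempt/irq-disabled regions are the read side */
        rcu_read_lock_sched();
        do_lookup();
        rcu_read_unlock_sched(); /* updater waits with synchronize_sched() */
}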

And if you need scheduling within your critical section, then use
srcu. So the different rcu variants trade off overhead and speed of the
grace period against what you can get preempted by in the read side
critical sections. They don't place restrictions on where your read
side critical section can run. E.g. you could do an srcu read side
critical section in a hardirq handler - of course sleeping isn't
allowed anymore, but that's because you run in the hardirq handler,
not because you're in the srcu read side section.
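
Minimal srcu sketch of that - my_srcu, my_irq_handler and do_lookup are all
made up:

#include <linux/interrupt.h>
#include <linux/srcu.h>

DEFINE_SRCU(my_srcu);

static irqreturn_t my_irq_handler(int irq, void *data)
{
        int idx;

        idx = srcu_read_lock(&my_srcu);
        do_lookup(data);  /* may not sleep - because of the hardirq, not because of srcu */
        srcu_read_unlock(&my_srcu, idx);

        return IRQ_HANDLED;
}

/* the updater side would wait with synchronize_srcu(&my_srcu); */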

> It sounds to me like any of them would be wrong, and if we wanted to drop
> stop_machine we would simply have to use nothing. But then we couldn't be
> certain that no more new requests get queued after wedged has been set.

Just dropping it entirely is definitely not good enough; we've watched
that approach go boom already. We do need some ordering to make sure
we don't start cleaning up while someone else is still executing code
from the old callbacks.
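
I.e. the ordering this RFC is after, condensed into one place - just a
sketch, set_wedged_sketch is a made-up name and it glosses over the
per-engine details in the real code:

static void set_wedged_sketch(struct drm_i915_private *i915)
{
        struct intel_engine_cs *engine;
        enum intel_engine_id id;

        /* 1: publish the nop handler for _all_ engines first */
        for_each_engine(engine, i915, id)
                engine->submit_request = nop_submit_request;

        /* 2: wait until every thread that might still be running the old
         * callback under rcu_read_lock() has finished with it.
         */
        synchronize_rcu();

        /* 3: only now start cleaning up behind them */
        for_each_engine(engine, i915, id)
                engine->cancel_requests(engine);
}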

> Maybe I am missing something, not sure.

rcu is hard :-)
-Daniel

>  Regards,
>
> Tvrtko
>
>
>> Cheers, Daniel
>>
>>>>         /* Mark all executing requests as skipped */
>>>>         engine->cancel_requests(engine);
>>>> @@ -3041,9 +3041,8 @@ static void engine_set_wedged(struct intel_engine_cs *engine)
>>>>
>>>> intel_engine_last_submit(engine));
>>>>    }
>>>> -static int __i915_gem_set_wedged_BKL(void *data)
>>>> +void i915_gem_set_wedged(struct drm_i915_private *i915)
>>>>    {
>>>> -       struct drm_i915_private *i915 = data;
>>>>         struct intel_engine_cs *engine;
>>>>         enum intel_engine_id id;
>>>> @@ -3052,13 +3051,6 @@ static int __i915_gem_set_wedged_BKL(void *data)
>>>>         set_bit(I915_WEDGED, &i915->gpu_error.flags);
>>>>         wake_up_all(&i915->gpu_error.reset_queue);
>>>> -
>>>> -       return 0;
>>>> -}
>>>> -
>>>> -void i915_gem_set_wedged(struct drm_i915_private *dev_priv)
>>>> -{
>>>> -       stop_machine(__i915_gem_set_wedged_BKL, dev_priv, NULL);
>>>>    }
>>>>    bool i915_gem_unset_wedged(struct drm_i915_private *i915)
>>>> diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
>>>> index b100b38f1dd2..ef78a85cb845 100644
>>>> --- a/drivers/gpu/drm/i915/i915_gem_request.c
>>>> +++ b/drivers/gpu/drm/i915/i915_gem_request.c
>>>> @@ -556,7 +556,9 @@ submit_notify(struct i915_sw_fence *fence, enum i915_sw_fence_notify state)
>>>>         switch (state) {
>>>>         case FENCE_COMPLETE:
>>>>                 trace_i915_gem_request_submit(request);
>>>> +               rcu_read_lock();
>>>>                 request->engine->submit_request(request);
>>>> +               rcu_read_unlock();
>>>
>>>
>>> And _bh for these? This already runs with preemption off, but I
>>> guess it is good for documentation.
>>>
>>> Regards,
>>>
>>> Tvrtko
>>>
>>>>                 break;
>>>>         case FENCE_FREE:
>>>> diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_request.c b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
>>>> index 78b9f811707f..a999161e8db1 100644
>>>> --- a/drivers/gpu/drm/i915/selftests/i915_gem_request.c
>>>> +++ b/drivers/gpu/drm/i915/selftests/i915_gem_request.c
>>>> @@ -215,7 +215,9 @@ static int igt_request_rewind(void *arg)
>>>>         }
>>>>         i915_gem_request_get(vip);
>>>>         i915_add_request(vip);
>>>> +       rcu_read_lock();
>>>>         request->engine->submit_request(request);
>>>> +       rcu_read_unlock();
>>>>         mutex_unlock(&i915->drm.struct_mutex);
>>>>
>>
>



-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

