[Intel-gfx] [PATCH 0/5] dma-fence, i915: Stop allowing SLAB_TYPESAFE_BY_RCU for dma_fence

Jason Ekstrand jason at jlekstrand.net
Thu Jun 10 20:09:47 UTC 2021


On Thu, Jun 10, 2021 at 8:35 AM Jason Ekstrand <jason at jlekstrand.net> wrote:
> On Thu, Jun 10, 2021 at 6:30 AM Daniel Vetter <daniel.vetter at ffwll.ch> wrote:
> > On Thu, Jun 10, 2021 at 11:39 AM Christian König
> > <christian.koenig at amd.com> wrote:
> > > Am 10.06.21 um 11:29 schrieb Tvrtko Ursulin:
> > > > On 09/06/2021 22:29, Jason Ekstrand wrote:
> > > >>
> > > >> We've tried to keep it somewhat contained by doing most of the hard work
> > > >> to prevent access of recycled objects via dma_fence_get_rcu_safe().
> > > >> However, a quick grep of kernel sources says that, of the 30 instances
> > > >> of dma_fence_get_rcu*, only 11 of them use dma_fence_get_rcu_safe().
> > > >> It's likely there are bear traps in DRM and related subsystems just
> > > >> waiting for someone to accidentally step in them.
> > > >
> > > > ...because dma_fence_get_rcu_safe appears to be about whether the
> > > > *pointer* to the fence itself is rcu protected, not about the fence
> > > > object itself.
> > >
> > > Yes, exactly that.
>
> The fact that both of you think this means either that I've completely
> missed what's going on with RCU here (possible but, in this case, I
> think unlikely) or that RCU use on dma fences should scare us all.

Taking a step back for a second and ignoring SLAB_TYPESAFE_BY_RCU as
such, I'd like to ask a slightly different question:  What are the
rules about what is allowed to be done under the RCU read lock and
what guarantees does a driver need to provide?

I think so far that we've all agreed on the following:

 1. Freeing an unsignaled fence is ok as long as it doesn't have any
pending callbacks.  (Callbacks should hold a reference anyway).

 2. The pointer race solved by dma_fence_get_rcu_safe() is real and
requires its retry loop to sort out (sketched below).
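
For reference, the loop in question looks roughly like this (lightly
simplified from the kernel's dma_fence_get_rcu_safe() in
include/linux/dma-fence.h; the comments are mine):

static inline struct dma_fence *
dma_fence_get_rcu_safe(struct dma_fence __rcu **fencep)
{
	do {
		struct dma_fence *fence;

		fence = rcu_dereference(*fencep);
		if (!fence)
			return NULL;

		/* kref_get_unless_zero(); fails if the refcount has
		 * already dropped to zero and the fence is on its way
		 * to being freed. */
		if (!dma_fence_get_rcu(fence))
			continue;

		/* The pointer may have been re-assigned (or, with
		 * SLAB_TYPESAFE_BY_RCU, the memory recycled) between
		 * the dereference and the get, so re-check and retry
		 * if we grabbed a stale fence. */
		if (fence == rcu_access_pointer(*fencep))
			return rcu_pointer_handoff(fence);

		dma_fence_put(fence);
	} while (1);
}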

But let's say I have a dma_fence pointer that I got from, say, calling
dma_resv_excl_fence() under rcu_read_lock().  What am I allowed to do
with it under the RCU lock?  What assumptions can I make?  Is this
code, for instance, ok?

rcu_read_lock();
fence = dma_resv_excl_fence(obj);
idle = !fence || test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags);
rcu_read_unlock();

This code very much looks correct under the following assumptions:

 1. A valid fence pointer stays alive under the RCU read lock
 2. SIGNALED_BIT is set-once (it's never unset after being set).

However, if it were correct, we wouldn't have dma_resv_test_signaled(),
now would we? :-)

The moment you introduce ANY dma_fence recycling that recycles a
dma_fence within a single RCU grace period, all your assumptions break
down.  SLAB_TYPESAFE_BY_RCU is just one way that i915 does this.  We
also have a little i915_request recycler to try to help with
memory-pressure scenarios in certain critical sections, and it doesn't
respect RCU grace periods either.  And, as mentioned multiple times, our
recycling leaks into every other driver because, thanks to i915's
choice, the above 4-line code snippet isn't valid ANYWHERE in the
kernel.
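
To make the failure mode concrete, here's a hypothetical interleaving
showing how the snippet above goes wrong once a fence can be recycled
within a single grace period:

rcu_read_lock();
fence = dma_resv_excl_fence(obj);
/* Meanwhile, on another CPU: the last dma_fence_put() frees the fence
 * and the allocator immediately hands the same memory back out for a
 * brand-new, unsignaled fence.  No grace period has elapsed, so the
 * RCU read lock did nothing to stop it. */
idle = !fence || test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags);
/* We just read the flags of a completely unrelated fence. */
rcu_read_unlock();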

So the question I'm raising isn't so much about the rules today.
Today, we live in the wild wild west where everything is YOLO.  But
where do we want to go?  Do we like this wild-west world?  Or do we want
more consistency under the RCU read lock?  If so, what do we want the
rules to be?

One option would be to accept the wild-west world we live in and say
"The RCU read lock gains you nothing.  If you want to touch the guts
of a dma_fence, take a reference".  But, at that point, we're eating
two atomics for every time someone wants to look at a dma_fence.  Do
we want that?
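
For comparison, that rule would turn the snippet above into something
like this (a sketch; dma_fence_get_rcu() is the kref_get_unless_zero()
wrapper, and dma_fence_put() is a no-op on NULL):

rcu_read_lock();
fence = dma_resv_excl_fence(obj);
if (fence && !dma_fence_get_rcu(fence))	/* atomic #1: get */
	fence = NULL;
rcu_read_unlock();

idle = !fence || test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags);
dma_fence_put(fence);			/* atomic #2: put */

(And with SLAB_TYPESAFE_BY_RCU in the mix, even that isn't quite
enough; you'd really need the full dma_fence_get_rcu_safe() loop
against the fence pointer itself.)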

Alternatively, and this is what I think Daniel and I were trying to
propose here, we could place some constraints on dma_fence recycling:
specifically that, under the RCU read lock, the fence doesn't suddenly
become a new fence.  All of the immutability and once-mutability
guarantees of various bits of dma_fence would hold as long as you have
the RCU read lock.  (That would, incidentally, make the 4-line snippet
above valid again, everywhere.)

--Jason

