[PATCH 1/4] drm: add prime helpers

Daniel Vetter daniel at ffwll.ch
Fri Dec 7 13:38:08 PST 2012


On Fri, Dec 7, 2012 at 9:33 PM, Aaron Plattner <aplattner at nvidia.com> wrote:
> Ah, I see what you mean.  This would also need either a pointer to the drv
> directly so the helpers can lock drv->struct_mutex, or a helper function to
> convert from a drm_prime_helper_obj* to a gem_buffer_object* in a
> driver-specific way since the helper doesn't know the layout of the driver's
> bo structure.

Ah, locking, and it involves dev->struct_mutex. Another pet peeve of
mine ;-) Tbh I haven't looked at locking issues when stitching
together my quick proposal.

The problem with the dev->struct_mutex lock is that it's everywhere
and hence it's way too easy to create deadlocks with it. Especially
with prime, where a simple rule like "take this lock first, always"
suddenly gets a lot more complicated, since gem drivers can assume
both the importer and the exporter role ... I haven't done a full
locking review, but I expect current prime to deadlock (through
driver calls) when userspace starts doing funny things.

So imo it's best to move dev->struct_mutex as far away as possible
from the helper functions and think really hard about whether we need
it at all. I've noticed three functions:

drm_gem_map_dma_buf: We can remove the locking here imo; if drivers
need dev->struct_mutex for internal consistency, they can take it in
their own callbacks. The dma_buf_attachment is very much like a fancy sg
list + backing storage pointer. If multiple callers concurrently
interact with the same attachment, havoc might ensue. Importers should
add their own locking if concurrent access is possible (e.g. similar
to how filling in the same sg table concurrently and then calling
dma_map_sg concurrently would lead to chaos). Otoh creating/destroying
attachments with attach/detach is protected by the dma_buf->lock.
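Roughly what I have in mind for the helper, as a sketch only (the
->gem_prime_get_sg_table hook is just a stand-in for whatever the
driver callback in Aaron's series ends up being called, and error
handling is hand-waved):

static struct sg_table *drm_gem_map_dma_buf(struct dma_buf_attachment *attach,
					    enum dma_data_direction dir)
{
	struct drm_gem_object *obj = attach->dmabuf->priv;
	struct sg_table *sgt;

	/* No dev->struct_mutex here: attach/detach is already serialized
	 * by dma_buf->lock, and concurrent use of one attachment is the
	 * importer's problem to sort out. */
	sgt = obj->dev->driver->gem_prime_get_sg_table(obj);
	if (IS_ERR_OR_NULL(sgt))
		return sgt;

	if (!dma_map_sg(attach->dev, sgt->sgl, sgt->nents, dir)) {
		sg_free_table(sgt);
		kfree(sgt);
		return ERR_PTR(-ENOMEM);
	}

	return sgt;
}

Drivers that need a lock to build the sg table just take it inside
their callback.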

I've checked the dma-buf docs, and this isn't specced there yet, but
it was very much the intention. So I think we can solve this with a
simple doc update ;-)
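Something like the below added to the dma_buf_attach /
dma_buf_map_attachment kerneldoc would probably do (exact wording up
for bikeshedding, this is just a strawman):

 * Note: the dma-buf core only serializes attach/detach against each
 * other with dmabuf->lock. Callers that map/unmap through the same
 * attachment concurrently must provide their own locking, just like
 * they would for any other shared sg table.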

drm_gem_dmabuf_v(un)map: Besides drivers possibly needing the lock
for internal consistency, the lock here only protects the
vmapping_counter and pointer and prevents concurrent vmap/vunmap
calls from racing into the driver. Now vmap/vunmap is very much like
an attachment, just that we don't have an explicit struct for each
client since there's only one cpu address space.

So pretty much every driver is forced to reinvent vmap refcounting,
which feels like an interface bug. What about moving this code to the
dma-buf.c interface shim and protecting it with dma_buf->lock? Atm we
only take that lock for attach/detach, and drivers using vmap place
the vmap call at the same spot hw drivers would place the attach call,
so this shouldn't introduce any nefarious locking issues. So I think
this would be the right way to move forward.
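Rough sketch of what that could look like in dma-buf.c (the
vmapping_counter/vmap_ptr fields in struct dma_buf are made up here,
names obviously up for debate):

void *dma_buf_vmap(struct dma_buf *dmabuf)
{
	void *ptr;

	if (WARN_ON(!dmabuf) || !dmabuf->ops->vmap)
		return NULL;

	mutex_lock(&dmabuf->lock);
	if (dmabuf->vmapping_counter) {
		/* already mapped, hand out the cached pointer */
		dmabuf->vmapping_counter++;
		ptr = dmabuf->vmap_ptr;
		goto out_unlock;
	}

	ptr = dmabuf->ops->vmap(dmabuf);
	if (ptr) {
		dmabuf->vmap_ptr = ptr;
		dmabuf->vmapping_counter = 1;
	}

out_unlock:
	mutex_unlock(&dmabuf->lock);
	return ptr;
}

void dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
{
	if (WARN_ON(!dmabuf))
		return;

	mutex_lock(&dmabuf->lock);
	WARN_ON(dmabuf->vmap_ptr != vaddr);
	if (--dmabuf->vmapping_counter == 0) {
		if (dmabuf->ops->vunmap)
			dmabuf->ops->vunmap(dmabuf, vaddr);
		dmabuf->vmap_ptr = NULL;
	}
	mutex_unlock(&dmabuf->lock);
}

Exporters then only implement ->vmap/->vunmap and don't have to worry
about getting called more than once.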

And with that we've weaned the prime helpers off their dependency on
dev->struct_mutex, which should make them a notch more flexible and
hence more useful (since fighting lock inversions is a royal pain
best avoided).

Comments?

Cheers, Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

