[Intel-gfx] [RFC v2] drm/i915: Android native sync support
Daniel Vetter
daniel at ffwll.ch
Fri Jan 23 07:53:48 PST 2015
On Fri, Jan 23, 2015 at 02:02:44PM +0000, Tvrtko Ursulin wrote:
>
> On 01/23/2015 11:27 AM, Chris Wilson wrote:
> >On Fri, Jan 23, 2015 at 11:13:14AM +0000, Tvrtko Ursulin wrote:
> >>From: Jesse Barnes <jbarnes at virtuousgeek.org>
> >>
> >>Add Android native sync support with fences exported as file descriptors via
> >>the execbuf ioctl (rsvd2 field is used).
> >>
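[To make that interface concrete, here is a minimal userspace sketch of the
flow the commit message describes. Only struct drm_i915_gem_execbuffer2 and
its ioctl are the stock ABI; the I915_EXEC_CREATE_FENCE flag name and the
exact encoding of the fd in rsvd2 are placeholders, since the RFC only says
rsvd2 carries the fd:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <drm/i915_drm.h>

    /* Placeholder flag - not in stock i915_drm.h; the RFC does not
     * name the final flag, only that rsvd2 returns the fence fd. */
    #define I915_EXEC_CREATE_FENCE (1ULL << 16)

    static int submit_with_out_fence(int drm_fd,
                                     struct drm_i915_gem_exec_object2 *objs,
                                     uint32_t count, uint32_t batch_len)
    {
        struct drm_i915_gem_execbuffer2 execbuf = {
            .buffers_ptr = (uintptr_t)objs,
            .buffer_count = count,
            .batch_len = batch_len,
            .flags = I915_EXEC_RENDER | I915_EXEC_CREATE_FENCE,
        };

        if (ioctl(drm_fd, DRM_IOCTL_I915_GEM_EXECBUFFER2, &execbuf))
            return -1;

        /* The Android sync fence fd comes back through the previously
         * reserved rsvd2 field; it can be poll()ed or passed on. */
        return (int)execbuf.rsvd2;
    }
]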
> >>This is a continuation of Jesse Barnes's previous work, squashed to arrive at
> >>the final destination, cleaned up, with some fixes and preliminary light
> >>testing.
> >>
> >>GEM requests are extended with fence structures which are associated with
> >>Android sync fences exported to user space via file descriptors. Fences which
> >>are waited upon, and while exported to userspace, are referenced and added to
> >>the irq_queue so they are signalled when requests are completed. There is no
> >>overhead in the case where fences are not requested.
> >>
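[As a rough sketch of that signalling scheme - the struct and function names
here are invented, though i915_gem_request_completed() and
sync_timeline_signal() match the in-tree APIs of the era - each exported
fence could park a waiter on dev_priv->irq_queue, holding a request
reference, and signal its sync timeline once the request completes:

    /* Illustrative only: one waiter per exported fence, woken by the
     * user-interrupt handler via dev_priv->irq_queue. */
    struct i915_sync_wait {
        wait_queue_t entry;                /* on dev_priv->irq_queue */
        struct drm_i915_gem_request *req;  /* referenced while queued */
        struct sync_timeline *timeline;    /* backs the exported fd */
    };

    static int i915_sync_wake(wait_queue_t *wait, unsigned mode,
                              int flags, void *key)
    {
        struct i915_sync_wait *w =
            container_of(wait, struct i915_sync_wait, entry);

        /* Runs on every user interrupt; only signal once our
         * request has actually completed. */
        if (!i915_gem_request_completed(w->req, false))
            return 0;

        sync_timeline_signal(w->timeline);
        /* The request reference cannot be dropped here: unreference
         * needs struct_mutex, which irq context must not take (the
         * v2 locking bug discussed further down). */
        return 1;
    }
]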
> >>Based on patches by Jesse Barnes:
> >> drm/i915: Android sync points for i915 v3
> >> drm/i915: add fences to the request struct
> >> drm/i915: sync fence fixes/updates
> >>
> >>To do:
> >> * Extend driver data with context id / ring id (TBD).
> >>
> >>v2:
> >> * Code review comments. (Chris Wilson)
> >> * ring->add_request() was the wrong thing to call - rebased on top of John
> >> Harrison's (drm/i915: Early alloc request) to ensure the correct request is
> >> present before creating a fence.
> >> * Take a request reference from signalling path as well to ensure request
> >> sticks around while fence is on the request completion wait queue.
> >
> >Ok, in this arrangement, attaching a fence to the execbuf is rather meh
> >as it is just a special cased version of attaching a fence to a bo.
>
> Better meh than "no"! :D
>
> My understanding is that this is what people want, together with the future
> input fence extension (scheduler).
>
> Anyway.. v2 is broken since it unreferences requests without holding the
> mutex, and worse, does so from irq context, so I need to rework that a bit.
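[For reference, the usual shape of that rework - every name below is invented
for illustration, none of it is from the patch - is to bounce the final
unreference to a worker, so the irq path never has to take struct_mutex:

    /* Sketch: defer dropping the request reference to process context,
     * since i915_gem_request_unreference() requires dev->struct_mutex. */
    struct i915_sync_release {
        struct work_struct work;
        struct drm_i915_gem_request *req;
    };

    static void i915_sync_release_work(struct work_struct *work)
    {
        struct i915_sync_release *r =
            container_of(work, struct i915_sync_release, work);
        struct drm_device *dev = r->req->ring->dev;

        mutex_lock(&dev->struct_mutex);
        i915_gem_request_unreference(r->req);
        mutex_unlock(&dev->struct_mutex);
        kfree(r);
    }

    /* Called from irq context instead of unreferencing directly. */
    static void i915_sync_release(struct i915_sync_release *r)
    {
        INIT_WORK(&r->work, i915_sync_release_work);
        schedule_work(&r->work);
    }
]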
Yeah, that's kind of the big behaviour difference (at least as I see it)
between explicit sync and implicit sync - both modes are sketched in the
snippet after this list:
- with implicit sync the kernel attaches sync points/requests to buffers
and userspace just asks about the idleness/busyness of buffers.
Synchronization between different users is all handled behind userspace's
back in the kernel.
- explicit sync attaches sync points to individual bits of work and makes
them explicit objects userspace can get at and pass around. Userspace
uses these separate things to inquire about when something is
done/idle/busy and has its own mapping between explicit sync objects and
the different pieces of memory affected by each. Synchronization between
different clients is handled explicitly by passing sync objects around
each time some rendering is done.
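To make the contrast concrete, here is a userspace-eye sketch of both
models - the implicit half uses the stock I915_GEM_BUSY ioctl, the explicit
half simply poll()s a fence fd such as the one returned in execbuf.rsvd2
above:

    #include <poll.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <drm/i915_drm.h>

    /* Implicit sync: the sync object is implicit in the buffer, so all
     * userspace can do is ask the kernel about the buffer itself. */
    static int bo_is_busy(int drm_fd, uint32_t bo_handle)
    {
        struct drm_i915_gem_busy busy = { .handle = bo_handle };

        if (ioctl(drm_fd, DRM_IOCTL_I915_GEM_BUSY, &busy))
            return -1;
        return busy.busy != 0;
    }

    /* Explicit sync: the sync object is a first-class fd tied to one
     * piece of work; it can be waited on directly and passed to other
     * clients over unix sockets. */
    static int fence_wait(int fence_fd, int timeout_ms)
    {
        struct pollfd pfd = { .fd = fence_fd, .events = POLLIN };

        return poll(&pfd, 1, timeout_ms); /* > 0 once signalled */
    }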
The bigger drivers for explicit sync (besides "nvidia likes it sooooo much
that everyone uses it a lot") seem to be a) shitty gpu drivers without
proper bo managers (*cough*android*cough*) and b) svm, where there are
simply no buffer objects any more to attach sync information to.
So attaching explicit sync objects to buffers doesn't make that much
sense by default. The exception is when we want to mix explicit and
implicit userspace: then we need to grab sync objects out of dma-bufs and
attach random sync objects to them at the border. Iirc Maarten had dma-buf
patches for exactly this.
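Purely to illustrate that border-crossing (the ioctl names and struct below
are entirely hypothetical, not Maarten's actual patches): one direction
extracts the dma-buf's current implicit fences as an explicit fence fd, the
other inserts an explicit fence so implicit-sync consumers wait on it:

    #include <sys/ioctl.h>
    #include <linux/types.h>
    #include <linux/ioctl.h>

    /* Hypothetical ABI, for illustration only. */
    struct dma_buf_fence_arg {
        __s32 fence_fd;  /* explicit sync object */
        __u32 flags;
    };
    #define DMA_BUF_IOC_EXPORT_FENCE _IOWR('b', 0x10, struct dma_buf_fence_arg)
    #define DMA_BUF_IOC_IMPORT_FENCE _IOW('b', 0x11, struct dma_buf_fence_arg)

    /* Implicit -> explicit: wrap whatever fences the kernel has
     * attached to the dma-buf into a fence fd we can pass around. */
    static int dmabuf_export_fence(int dmabuf_fd)
    {
        struct dma_buf_fence_arg arg = { .fence_fd = -1 };

        if (ioctl(dmabuf_fd, DMA_BUF_IOC_EXPORT_FENCE, &arg))
            return -1;
        return arg.fence_fd;
    }

    /* Explicit -> implicit: make implicit-sync users of the dma-buf
     * wait on an explicit fence before touching the memory. */
    static int dmabuf_import_fence(int dmabuf_fd, int fence_fd)
    {
        struct dma_buf_fence_arg arg = { .fence_fd = fence_fd };

        return ioctl(dmabuf_fd, DMA_BUF_IOC_IMPORT_FENCE, &arg);
    }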
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch