Question on UAPI for fences
Jerome Glisse
j.glisse at gmail.com
Fri Sep 12 07:50:49 PDT 2014
On Fri, Sep 12, 2014 at 04:43:44PM +0200, Daniel Vetter wrote:
> On Fri, Sep 12, 2014 at 4:09 PM, Daniel Vetter <daniel at ffwll.ch> wrote:
> > On Fri, Sep 12, 2014 at 03:23:22PM +0200, Christian König wrote:
> >> Hello everyone,
> >>
> >> to allow concurrent buffer access by different engines beyond the multiple
> >> readers/single writer model that we currently use in radeon and other
> >> drivers we need some kind of synchronization object exposed to userspace.
> >>
> >> My initial patch set for this used (or rather abused) zero-sized GEM
> >> buffers as fence handles. This obviously isn't the best way of doing it
> >> (too much overhead, rather ugly, etc.); Jerome commented on this
> >> accordingly.
> >>
> >> So what should a driver expose instead? Android sync points? Something else?
> >
> > I think actually exposing the struct fence objects as a fd, using android
> > syncpts (or at least something compatible with them), is the way to go. The
> > problem is that it's super-hard to get the android guys out of hiding for
> > this :(
> >
> > Adding a bunch of people in the hopes that something sticks.
>
> More people.
Just to reiterate: exposing such an object while the command stream ioctl
still does implicit synchronization is a waste, because you can only ever
get the lowest common denominator, which is implicit synchronization. So I
do not see the point of such an API unless you also add a new CS ioctl
with an explicit contract that it does not do any kind of synchronization
itself (it could be almost exactly the same code, minus the wait for
previous commands to complete).
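
Purely to illustrate the kind of contract I have in mind (a rough sketch,
not an actual proposal; the struct name, flag and field layout are all
made up):

    /* Hypothetical explicit-sync CS ioctl argument: the kernel does no
     * implicit waiting on buffers referenced by the command stream.
     * Userspace passes the fences to wait on and gets back a fence fd
     * that signals when this submission completes.
     */
    struct drm_xxx_cs_explicit {
            __u64 chunks;        /* command stream chunks, as today */
            __u32 num_chunks;
            __u32 flags;         /* e.g. XXX_CS_NO_IMPLICIT_SYNC */
            __u64 in_fences;     /* pointer to array of fence fds */
            __u32 num_in_fences;
            __s32 out_fence;     /* out: fence fd for this submission */
    };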
Also, one thing that Android sync points do not have, AFAICT, is a way to
schedule synchronization as part of a CS ioctl, so the CPU never has to be
involved for command streams that only touch a single GPU (assuming the
driver and hardware can do such a trick).
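
To illustrate (pure pseudo-code, none of these helpers exist as such, and
it assumes hardware with semaphore support):

    /* Hypothetical kernel-side handling of in-fences at submission:
     * instead of blocking in the ioctl, emit a semaphore wait into the
     * ring ahead of the user's commands, so the GPU does the waiting.
     */
    for (i = 0; i < num_in_fences; i++) {
            struct fence *f = fence_from_fd(in_fences[i]);

            if (fence_on_same_gpu(f, ring))
                    /* the GPU waits on itself, no CPU involvement */
                    ring_emit_semaphore_wait(ring, f);
            else
                    /* foreign fence, fall back to a CPU-side wait */
                    fence_wait(f, false);
    }
    ring_emit_commands(ring, cs);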
Cheers,
Jérôme
> -Daniel
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> +41 (0) 79 365 57 48 - http://blog.ffwll.ch