Shared semaphores for amdgpu
Dave Airlie
airlied at gmail.com
Thu Mar 9 23:19:16 UTC 2017
> Completely agree, problem here is that this isn't documented like this in
> the Vulkan specification as far as I know.
(I'm adding dri-devel, since I think Intel folks have looked into some
of this already, and we might end up needing some common functionality.)
"The semaphore must be signaled, or have an associated semaphore
signal operation that is
pending execution."
So I'll try and summarise the semantics of semaphores vs current fence fds.
For shared semaphores there are two defined sharing semantics: temporary
and permanent, and I think we would need to support both. Temporary aligns
with fence fds, but permanent not so much.
The main difference I see is that fences are a one-shot thing: you create
a fence when you submit, and then you hand it to someone else to wait on.
Semaphores are create once, share once, use multiple times.
The semantics for permanent semaphore sharing are:
process A              process B
allocate
export
                       import
signal
                       wait
signal
                       wait
and so on.
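In Vulkan terms the flow looks roughly like this (hand-wavy userspace
sketch; the device/queue setup is assumed, error handling is dropped, and
I'm using the KHR external semaphore fd names for concreteness):

    #include <vulkan/vulkan.h>

    /* Process A: create an exportable semaphore, hand the fd over once,
     * then signal the semaphore on every submit. */
    void process_a(VkDevice dev, VkQueue queue, int *fd_out)
    {
        VkExportSemaphoreCreateInfoKHR export_info = {
            .sType = VK_STRUCTURE_TYPE_EXPORT_SEMAPHORE_CREATE_INFO_KHR,
            .handleTypes = VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_FD_BIT_KHR,
        };
        VkSemaphoreCreateInfo sem_info = {
            .sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO,
            .pNext = &export_info,
        };
        VkSemaphore sem;
        vkCreateSemaphore(dev, &sem_info, NULL, &sem);

        VkSemaphoreGetFdInfoKHR get_fd = {
            .sType = VK_STRUCTURE_TYPE_SEMAPHORE_GET_FD_INFO_KHR,
            .semaphore = sem,
            .handleType = VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_FD_BIT_KHR,
        };
        vkGetSemaphoreFdKHR(dev, &get_fd, fd_out);   /* shared with B once */

        /* From here on, every submit just names the semaphore to signal. */
        VkSubmitInfo submit = {
            .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
            .signalSemaphoreCount = 1,
            .pSignalSemaphores = &sem,
        };
        vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);
    }

    /* Process B: import the fd once, then wait on every submit. */
    void process_b(VkDevice dev, VkQueue queue, int fd)
    {
        VkSemaphoreCreateInfo sem_info = {
            .sType = VK_STRUCTURE_TYPE_SEMAPHORE_CREATE_INFO,
        };
        VkSemaphore sem;
        vkCreateSemaphore(dev, &sem_info, NULL, &sem);

        VkImportSemaphoreFdInfoKHR import_info = {
            .sType = VK_STRUCTURE_TYPE_IMPORT_SEMAPHORE_FD_INFO_KHR,
            .semaphore = sem,
            .handleType = VK_EXTERNAL_SEMAPHORE_HANDLE_TYPE_OPAQUE_FD_BIT_KHR,
            .fd = fd,
        };
        vkImportSemaphoreFdKHR(dev, &import_info);

        VkPipelineStageFlags wait_stage = VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT;
        VkSubmitInfo submit = {
            .sType = VK_STRUCTURE_TYPE_SUBMIT_INFO,
            .waitSemaphoreCount = 1,
            .pWaitSemaphores = &sem,
            .pWaitDstStageMask = &wait_stage,
        };
        vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);
    }

The point being that the fd crosses the process boundary exactly once, and
all subsequent signals and waits go through the imported semaphore handle.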
The way we currently do semaphores is to insert a fence into the semaphore
on signal, block waiting for that fence on wait, and then insert a new
fence on the next signal. This means we don't want to have to constantly
reshare the fence fd. (The temporary semaphore sharing semantics match
this behaviour.)
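Roughly, the long-lived object behind that looks something like this
(hand-wavy sketch, names made up, not the actual proposed code):

    #include <linux/dma-fence.h>
    #include <linux/spinlock.h>

    /* Hypothetical long-lived semaphore object, shared via an fd once. */
    struct sem_obj {
            spinlock_t lock;
            struct dma_fence *fence;  /* fence installed by the latest signal */
    };

    /* Signal: replace the stored fence with this submission's fence. */
    static void sem_obj_signal(struct sem_obj *sem, struct dma_fence *fence)
    {
            struct dma_fence *old;

            spin_lock(&sem->lock);
            old = sem->fence;
            sem->fence = dma_fence_get(fence);
            spin_unlock(&sem->lock);

            dma_fence_put(old);
    }

    /* Wait: block on whatever fence the last signal installed. */
    static long sem_obj_wait(struct sem_obj *sem)
    {
            struct dma_fence *fence;
            long r = 0;

            spin_lock(&sem->lock);
            fence = dma_fence_get(sem->fence);
            spin_unlock(&sem->lock);

            if (fence) {
                    r = dma_fence_wait(fence, true);
                    dma_fence_put(fence);
            }
            return r;
    }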
This leads me to believe that fence fds can't be used for this task as-is.
Now the question is whether we can extend them, and how we do that in a
useful and backwards-compatible manner.
How would we do this, allow dma_fence to be "updated" from another
dma_fence, so we have some sort
of dma_fence variant that has a permanent lifetime, that we can on
signal update from another fence
to match it's behaviour, then on wait works on the updated info. Do we
just want a wrapper around a fence
then, which is pretty much what the proposed sem code is. or do we
want some way to link a bunch
of fences together? What we don't want is to expose to userspace
anything that requires us to reshare the
fence via the fd again after the initial setup.
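For the "link a bunch of fences together" option, one possible shape
(again a made-up sketch, not real code) is a long-lived container that
each signal appends a fence to and each wait consumes from, so the fd
only ever refers to the container and never to an individual fence:

    #include <linux/dma-fence.h>
    #include <linux/list.h>
    #include <linux/slab.h>
    #include <linux/spinlock.h>

    struct sem_link {
            struct list_head node;
            struct dma_fence *fence;
    };

    struct sem_chain {
            spinlock_t lock;
            struct list_head fences;  /* sem_link entries, oldest first */
    };

    /* Signal: append this submission's fence to the chain. */
    static int sem_chain_signal(struct sem_chain *chain, struct dma_fence *fence)
    {
            struct sem_link *link = kmalloc(sizeof(*link), GFP_KERNEL);

            if (!link)
                    return -ENOMEM;
            link->fence = dma_fence_get(fence);

            spin_lock(&chain->lock);
            list_add_tail(&link->node, &chain->fences);
            spin_unlock(&chain->lock);
            return 0;
    }

    /* Wait: take the oldest unconsumed fence and block on it. */
    static long sem_chain_wait(struct sem_chain *chain)
    {
            struct sem_link *link;
            long r = 0;

            spin_lock(&chain->lock);
            link = list_first_entry_or_null(&chain->fences, struct sem_link, node);
            if (link)
                    list_del(&link->node);
            spin_unlock(&chain->lock);

            if (link) {
                    r = dma_fence_wait(link->fence, true);
                    dma_fence_put(link->fence);
                    kfree(link);
            }
            return r;
    }

Either way, the sharing step stays a one-time thing, which is the property
we need to preserve.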
Dave.