<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Wed, Jul 12, 2017 at 1:39 AM, Dave Airlie <span dir="ltr"><<a href="mailto:airlied@gmail.com" target="_blank">airlied@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">On 12 July 2017 at 17:39, Christian König <<a href="mailto:deathsimple@vodafone.de">deathsimple@vodafone.de</a>> wrote:<br>
> On 11.07.2017 at 17:43, Jason Ekstrand wrote:<br>
><br>
> On Tue, Jul 11, 2017 at 12:17 AM, Christian König <<a href="mailto:deathsimple@vodafone.de">deathsimple@vodafone.de</a>><br>
> wrote:<br>
>><br>
>> [SNIP]<br>
>>>>><br>
>>>>> If we ever want to share fences across processes (which we do),<br>
>>>>> then this needs to be sorted in the kernel.<br>
>>>><br>
>>>> That would clearly get a NAK from my side, even Microsoft forbids<br>
>>>> wait before signal because you can easily end up in deadlock<br>
>>>> situations.<br>
>>>><br>
>>>> Please don't NAK things that are required by the API specification and<br>
>>>> CTS tests.<br>
>>><br>
>>> There is no requirement for every aspect of the Vulkan API specification<br>
>>> to be mirrored 1:1 in the kernel <-> userspace API. We have to work out<br>
>>> what makes sense at each level.<br>
>><br>
>><br>
>> Exactly, if we have a synchronization problem between two processes that<br>
>> should be solved in userspace.<br>
>><br>
>> E.g. if process A hasn't submitted its work to the kernel, it should flush<br>
>> its commands before sending a flip event to the compositor.<br>
><br>
><br>
> Ok, I think there is some confusion here on what is being proposed. Here<br>
> are some things that are *not* being proposed:<br>
><br>
> 1. This does *not* allow a client to block another client's GPU work<br>
> indefinitely. This is entirely for a CPU wait API to allow for a "wait for<br>
> submit" as well as a "wait for finish".<br>
><br>
> Yeah, that is a rather good point.<br>
><br>
> 2. This is *not* for system compositors that need to be robust against<br>
> malicious clients.<br>
><br>
> I can see the point, but I think the kernel interface should still be idiot<br>
> proof even in the unlikely case the universe suddenly stops producing<br>
> idiots.<br></div></div></blockquote><div><br></div><div>Fair enough. Maybe I've spent too much time in the Vulkan world where being an idiot is, theoretically, disallowed. And, by "disallowed", I mean that you're free to be one with the understanding that your process may get straight-up killed on *any* API violation.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">
> The expected use for the OPAQUE_FD is two very tightly integrated processes<br>
> which trust each other but need to be able to share synchronization<br>
> primitives.<br>
><br>
> Well, that raises a really important question: What is the actual use case<br>
> for this if not communication between client and compositor?<br>
<br>
</div></div>VR clients and compositors.<span class=""><br></span></blockquote><div><br></div><div>Yeah, that. Wouldn't they want the same security guarantees as your OS compositor? One would think, but none of the people making VR compositors seem to care all that much about the case of malicious clients. In general, they run a fairly closed platform where they aren't really running "arbitrary" apps on their compositor. Also, they tend to write both the client and server sides of the VR compositor protocol and the only thing the app touches is their API. I'm not going to try too hard to justify their lack of concern about deadlock, but there it is.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">
> Could we do this "in userspace"? Yes, with added kernel API. we would need<br>
> some way of strapping a second FD onto a syncobj or combining two FDs into<br>
> one to send across the wire or something like that, then add a shared memory<br>
> segment, and then pile on a bunch of code to do cross-process condition<br>
> variables and state tracking. I really don't see how that's a better<br>
> solution than adding a flag to the kernel API to just do what we want.<br>
><br>
> Way too complicated.<br>
><br>
> My thinking was rather to optionally allow a single page to be mmap()ed into<br>
> the process address space from the fd and then put a futex, pthread_cond or<br>
> X shared memory fence or anything like that into it.<br></span></blockquote><div><br></div><div>A single page + fence sounds a lot like a DRM BO. One of my original plans for implementing the feature was to just use a single-page BO and do some userspace stuff in the mapped page. There are two problems here:<br><br>1) It could be very easy for a malicious client to map the page and then mess up whatever CPU data structure I use for the semaphore. I could probably make it robust but there is an attack vector there that's going to be tricky.<br>2) I have no way, on import, to tell the difference between a 4K memory object and a fence.<br><br>Then syncobj came along and promised to solve all my problems...<br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">
</span>Is that easier than just waiting in the kernel? I'm not sure how<br>
optimised we need this path to be.<span class="HOEnZb"><font color="#888888"><br>
</font></span></blockquote></div><br></div><div class="gmail_extra">I don't think so. I think it's more-or-less the same code regardless of how it's done. The advantage of doing it in the kernel is that it's standardized (we don't have to both go write that userspace code) and it doesn't have the problems stated above.<br></div></div>