[Bug 71759] Intel driver fails with "intel_do_flush_locked failed: No such file or directory" if buffer imported with EGL_NATIVE_PIXMAP_KHR

bugzilla-daemon at freedesktop.org bugzilla-daemon at freedesktop.org
Wed Jul 13 17:19:54 UTC 2016


https://bugs.freedesktop.org/show_bug.cgi?id=71759

--- Comment #21 from Martin Peres <martin.peres at free.fr> ---
So, I finally took the time to go through the stack today and inspect
everything. Thanks, Fabrice, for your analysis and initial patch!

So, the goal of my analysis was to check who opened the different fds, who
created the different contexts, and where the buffers lived. It turns out that
while mesa usually gets its fd from the X server (except when using PRIME),
vaapi opens its own fd (dri2_util.c:198) based on the device name returned by
DRI2 (yes, VAAPI does not support DRI3, which is a cause for concern for users
of the modesetting driver).

Cogl and vaapi then create a ton of contexts (one per texture :o). When it
comes time to import into cogl the frame rendered by vaapi, cogl uses PRIME
because it got a DRI3 context. The texture is created by the context created
for that texture (which has its own bufmgr, because the fds did not match),
but when mesa performs the import, it does so with the screen's bufmgr
... which is of course not the same one as the texture's context's.

Now, here are the million-dollar questions:
 - Why is intel_create_image_from_fds using the screen's bufmgr instead of
the current context's?
 - If there is no way around this issue, is this why there is code in libdrm to
give away the same bufmgr when the fd is the same?

If the answer to the second question is YES, I can see why it would work when
dealing with mesa only (since the fd is received from the X server). However,
this is unsatisfactory for libva, which does not use mesa's DRI2 code but
instead opens its own fd. Since GL textures have to be shared, we need to make
sure that the same bufmgr is handed to all the contexts for the same GPU.
Fabrice's solution is in this regard not complete, because it assumes there is
only one node exposed per GPU ... which has not been true since render nodes
were introduced. On the modesetting driver, card0 is always picked (for both
DRI2 and DRI3), which means that Fabrice's solution would work, but only on
modesetting. On xf86-video-intel, renderD128 is returned for DRI3 instead of
card0, so the inode would differ. I will try to fix this tomorrow by using the
new functions in libdrm to find the node type we want. In any case, this will
have a severe performance impact on context creation time, so I will be sure
to actually benchmark this!

-- 
You are receiving this mail because:
You are the QA Contact for the bug.
