[Bug 98172] Concurrent call to glClientWaitSync results in segfault in one of the waiters.
bugzilla-daemon at freedesktop.org
Sun Oct 9 17:04:19 UTC 2016
https://bugs.freedesktop.org/show_bug.cgi?id=98172
Bug ID: 98172
Summary: Concurrent call to glClientWaitSync results in segfault in one of the waiters.
Product: Mesa
Version: 11.2
Hardware: Other
OS: All
Status: NEW
Severity: normal
Priority: medium
Component: Drivers/Gallium/r600
Assignee: dri-devel at lists.freedesktop.org
Reporter: shinji.suzuki at gmail.com
QA Contact: dri-devel at lists.freedesktop.org
In my app, a fence is created in Thread-A and passed to Thread-B and
Thread-C to be waited upon. (Each thread has its own context.)
Thread-A issues the call:
   fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
Thread-B and Thread-C then issue the call:
   glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, GL_TIMEOUT_IGNORED);
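Schematically, the pattern looks like this (a stripped-down sketch only:
context creation and sharing are omitted, each thread is assumed to already
have a context from the same share group current, the handoff of the fence
handle is assumed to be properly synchronized, and epoxy is just one example
of a loader exposing the GL 3.2 sync entry points):

   #include <pthread.h>
   #include <epoxy/gl.h>  /* assumption: loader with the GL 3.2 sync API */

   static GLsync fence;   /* written by Thread-A, read by Thread-B and C */

   /* Thread-A: create the fence after submitting GL work and flush so the
    * fence command actually reaches the GPU. */
   static void producer(void)
   {
      /* ... submit GL work ... */
      fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
      glFlush();
      /* ... hand `fence` to Thread-B and Thread-C ... */
   }

   /* Thread-B and Thread-C: both block on the same fence concurrently. */
   static void *waiter(void *arg)
   {
      glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, GL_TIMEOUT_IGNORED);
      return NULL;
   }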
Most of the time the wait succeeds in both threads, but occasionally one of
them segfaults. Inspection of the generated core showed that so->fence in the
expression "&so->fence" at line 113 of
src/mesa/state_tracker/st_cb_syncobj.c is NULL, which presumably causes the
segfault further down the call chain through screen->fence_reference. I think
there is a race in executing this code block. If I introduce a mutex in my
app to serialize the calls to glClientWaitSync, I no longer observe the
segfault (sketched below).
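The workaround looks roughly like this (a sketch only; the lock, the wrapper
name and the use of epoxy are mine, and error handling is omitted):

   #include <pthread.h>
   #include <epoxy/gl.h>  /* assumption: loader exposing glClientWaitSync */

   /* One process-wide lock shared by every thread that waits on a fence. */
   static pthread_mutex_t wait_lock = PTHREAD_MUTEX_INITIALIZER;

   /* Called from Thread-B and Thread-C instead of calling glClientWaitSync
    * directly; serializing the waits avoids the crash. */
   static GLenum wait_fence_serialized(GLsync fence)
   {
      pthread_mutex_lock(&wait_lock);
      GLenum status = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                                       GL_TIMEOUT_IGNORED);
      pthread_mutex_unlock(&wait_lock);
      return status;
   }

Of course this also serializes the waits themselves, so it is only a stopgap.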
Here is the snippet of code in question from st_cb_syncobj.c:
   if (so->fence &&
       screen->fence_finish(screen, so->fence, timeout)) {
      screen->fence_reference(screen, &so->fence, NULL);
      so->b.StatusFlag = GL_TRUE;
   }
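For what it's worth, one way to close the window might be to take a private
reference to the fence under a lock before waiting, along these lines (a
rough sketch only, assuming a mutex were added to the sync object;
mtx_lock/mtx_unlock as in the C11 threads API, and fence_reference called
only on non-NULL fences since that is exactly where the crash occurs):

   struct pipe_fence_handle *fence = NULL;

   /* Take a private reference under a (hypothetical) per-object mutex so a
    * concurrent waiter cannot drop so->fence while we are still using it. */
   mtx_lock(&so->mutex);
   if (so->fence)
      screen->fence_reference(screen, &fence, so->fence);
   mtx_unlock(&so->mutex);

   if (fence && screen->fence_finish(screen, fence, timeout)) {
      /* Clear the shared pointer at most once. */
      mtx_lock(&so->mutex);
      if (so->fence)
         screen->fence_reference(screen, &so->fence, NULL);
      mtx_unlock(&so->mutex);
      so->b.StatusFlag = GL_TRUE;
   }

   /* Drop the private reference. */
   if (fence)
      screen->fence_reference(screen, &fence, NULL);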
My environment is:
Ubuntu 16.04 LTS
Linux a7da 4.4.0-38-generic #57-Ubuntu SMP Tue Sep 6 15:42:33 UTC 2016 x86_64
x86_64 x86_64 GNU/Linux
libgl1-mesa-dri:amd64 / 11.2.0-1ubuntu2.2
Radeon HD3300
--
You are receiving this mail because:
You are the assignee for the bug.