[VDPAU] [PATCH 4/4] vdpau_wrapper: protect concurrent access to _imp_get_proc_address
Aaron Plattner
aplattner at nvidia.com
Wed Nov 5 10:28:55 PST 2014
On 11/04/2014 10:57 PM, Rémi Denis-Courmont wrote:
> Le 2014-11-05 03:00, Aaron Plattner a écrit :
>> On 10/29/2014 05:47 AM, Rémi Denis-Courmont wrote:
>>> From: Rémi Denis-Courmont <remid at nvidia.com>
>>>
>>> The wrapper, as it's currently written, cannot cope with more than one
>>> VdpGetProcAddress implementation. Luckily, this should hardly ever
>>> happen.
>>
>> Does this ever actually happen?
>
> I suppose it could conceivably happen on a dual-GPU system with both
> NVIDIA and VA-GL drivers. I have obviously never seen it happen, since
> it does not currently work.
>
> Making that work would require a significant rework of the wrapper.
> It would need to remap handles from the drivers, since different drivers
> can use overlapping ranges of handles. It would then have to keep track
> of which driver each mapped handle belongs to, as sketched below.
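>
> For illustration only, such a remapping layer might look roughly like
> this (a hypothetical sketch; MAX_HANDLES, handle_entry and remap_handle
> do not exist in the wrapper):
>
>     #include <pthread.h>
>     #include <stdint.h>
>     #include <vdpau/vdpau.h>
>
>     #define MAX_HANDLES 1024
>
>     /* One entry per wrapper-level handle: remembers which back-end
>      * created the object and the handle in that back-end's own
>      * namespace, since back-ends may hand out overlapping values. */
>     typedef struct {
>         VdpGetProcAddress *gpa;         /* owning back-end, NULL = free */
>         uint32_t           impl_handle; /* back-end's handle value */
>     } handle_entry;
>
>     static handle_entry handle_table[MAX_HANDLES];
>     static pthread_mutex_t handle_lock = PTHREAD_MUTEX_INITIALIZER;
>
>     /* Allocate a wrapper handle for a back-end handle. Returns
>      * MAX_HANDLES if the table is full. */
>     static uint32_t remap_handle(VdpGetProcAddress *gpa,
>                                  uint32_t impl_handle)
>     {
>         uint32_t i;
>
>         pthread_mutex_lock(&handle_lock);
>         for (i = 0; i < MAX_HANDLES; i++) {
>             if (handle_table[i].gpa == NULL) {
>                 handle_table[i].gpa = gpa;
>                 handle_table[i].impl_handle = impl_handle;
>                 break;
>             }
>         }
>         pthread_mutex_unlock(&handle_lock);
>         return i;
>     }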
>
>> As you said, there can only ever be
>> one _vdp_imp_device_create_x11_proc at the moment. This is protecting
>> against an implementation that returns different gpa pointers, e.g.
>> for different screens? If so, that seems okay to me.
>
> Well, that would be one error case handled by the patch, and one that
> was not handled well before.
>
> The more general problem is that the current code does not follow the
> memory model, which forbids one thread from writing to a memory
> location while another thread is reading it without synchronization.
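>
> A minimal sketch of the problem (not the wrapper's actual code paths;
> the thread functions are made up for illustration):
>
>     #include <pthread.h>
>     #include <stddef.h>
>     #include <vdpau/vdpau.h>
>
>     static VdpGetProcAddress *_imp_get_proc_address; /* shared, unprotected */
>
>     /* Thread A: device creation performs a plain write... */
>     static void *creator_thread(void *arg)
>     {
>         _imp_get_proc_address = (VdpGetProcAddress *)arg;
>         return NULL;
>     }
>
>     /* ...while thread B performs a plain read of the same variable.
>      * Two conflicting, unsynchronized accesses are a data race, and
>      * the read may observe a stale or inconsistent value. */
>     static void *caller_thread(void *arg)
>     {
>         VdpGetProcAddress *gpa = _imp_get_proc_address;
>         (void)arg;
>         (void)gpa; /* calling through gpa here would be unsafe */
>         return NULL;
>     }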
Fair enough. I pushed this and the other related patches as commit
3162456bb876.

--
Aaron
>> Reviewed-by: Aaron Plattner <aplattner at nvidia.com>
>>
>>> This patch protects access to the _imp_get_proc_address variable to
>>> conform to the memory model, and ensures that a single VDPAU
>>> implementation is used, failing safely if that is not the case.
>>> ---
>>> src/vdpau_wrapper.c | 31 ++++++++++++++++++++++++-------
>>> 1 file changed, 24 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/src/vdpau_wrapper.c b/src/vdpau_wrapper.c
>>> index 7d4885d..196050b 100644
>>> --- a/src/vdpau_wrapper.c
>>> +++ b/src/vdpau_wrapper.c
>>> @@ -526,6 +526,7 @@ VdpStatus vdp_device_create_x11(
>>>  {
>>>      static pthread_once_t once = PTHREAD_ONCE_INIT;
>>>      static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
>>> +    VdpGetProcAddress *gpa;
>>>      VdpStatus status = VDP_STATUS_OK;
>>>
>>>      pthread_once(&once, init_fixes);
>>> @@ -541,17 +542,33 @@
>>>      if (status != VDP_STATUS_OK)
>>>          return status;
>>>
>>> -    status = _vdp_imp_device_create_x11_proc(
>>> -        display,
>>> -        screen,
>>> -        device,
>>> -        &_imp_get_proc_address
>>> -    );
>>> +    status = _vdp_imp_device_create_x11_proc(display, screen, device, &gpa);
>>>      if (status != VDP_STATUS_OK) {
>>>          return status;
>>>      }
>>>
>>>      *get_proc_address = vdp_wrapper_get_proc_address;
>>>
>>> -    return VDP_STATUS_OK;
>>> +    pthread_mutex_lock(&lock);
>>> +    if (_imp_get_proc_address != gpa) {
>>> +        if (_imp_get_proc_address == NULL)
>>> +            _imp_get_proc_address = gpa;
>>> +        else
>>> +            /* Currently the wrapper can only deal with one back-end.
>>> +             * This should never happen, but better safe than sorry. */
>>> +            status = VDP_STATUS_NO_IMPLEMENTATION;
>>> +    }
>>> +    pthread_mutex_unlock(&lock);
>>> +
>>> +    if (status != VDP_STATUS_OK) {
>>> +        void *pv;
>>> +
>>> +        if (gpa(*device, VDP_FUNC_ID_DEVICE_DESTROY, &pv) == VDP_STATUS_OK) {
>>> +            VdpDeviceDestroy *device_destroy = pv;
>>> +
>>> +            device_destroy(*device);
>>> +        }
>>> +    }
>>> +
>>> +    return status;
>>>  }