[Mesa-dev] [PATCH 2/2] st/va: flush pipeline after post processing

Christian König deathsimple at vodafone.de
Tue Nov 29 14:36:26 UTC 2016


On 29.11.2016 15:28, Nicolai Hähnle wrote:
> On 29.11.2016 15:12, Christian König wrote:
>> On 29.11.2016 15:06, Nicolai Hähnle wrote:
>>> On 29.11.2016 14:50, Christian König wrote:
>>>> On 29.11.2016 14:46, Nicolai Hähnle wrote:
>>>>> On 28.11.2016 15:51, Christian König wrote:
>>>>>> From: sguttula <suresh.guttula at amd.com>
>>>>>>
>>>>>> This will flush the pipeline, which allows sharing of dma-buf based
>>>>>> buffers.
>>>>>>
>>>>>> Signed-off-by: Suresh Guttula <Suresh.Guttula at amd.com>
>>>>>> Reviewed-by: Christian König <christian.koenig at amd.com>
>>>>>
>>>>> Why is there no fence? Relying on the correctness of doing a flush
>>>>> without a fence seems very likely to be wrong... it might seemingly
>>>>> fix a sharing issue, but once the timing changes, the other side of a
>>>>> buffer sharing might still see wrong results if it isn't properly
>>>>> synchronized.
>>>>
>>>> Well, there is no facility to share a fence with the other side, so
>>>> when the DMA-buf is used by multiple processes the kernel must make
>>>> sure that everything executes in the right order.
>>>
>>> Ah right, the kernel does most of the job. Still, because of
>>> multi-threaded submit, returning from pipe->flush doesn't actually
>>> guarantee that the work has even been submitted to the kernel.
>>>
>>> So unless the only guarantee you want here is that progress happens
>>> eventually, you'll still need a fence.
>>>
>>> I don't think we have an interface that guarantees that work has
>>> reached the kernel without also waiting for job completion.
>>
>> I'm pretty sure that this isn't correct, otherwise VDPAU interop
>> wouldn't work either.
>
> Maybe we're just getting lucky?
>
>
>> When pipe->flush() is called it must be guaranteed that all work is
>> submitted to the kernel.
>
> It guarantees that the work is (eventually, in practice very soon) 
> submitted to the kernel, but it does _not_ guarantee that the kernel 
> has already returned from the CS ioctl.
>
> The whole point of multi-threaded dispatch is to avoid the wait that 
> would be required to guarantee that.

We neither need nor want multi-threaded dispatch for the multimedia (MM) parts.

When I implemented the multi-ring dispatch logic this was clearly the 
case, and async flushes were only initiated by the winsys if the 
RADEON_FLUSH_ASYNC flag was given.

And as far as I understand it, on the pipe layer the driver can only 
use an async flush if the PIPE_FLUSH_DEFERRED flag is given.
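
To make the distinction concrete, a rough sketch of the two cases at 
the pipe level (illustration only, not code taken from the tree):

   struct pipe_fence_handle *fence = NULL;

   /* Plain flush: no fence requested; the st/va patch below relies on
    * the kernel ordering the DMA-buf access once the CS is submitted. */
   drv->pipe->flush(drv->pipe, NULL, 0);

   /* Deferred flush: the driver may postpone the actual CS submission,
    * so ordering is only guaranteed once the returned fence is waited on. */
   drv->pipe->flush(drv->pipe, &fence, PIPE_FLUSH_DEFERRED);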

If that isn't the case here, i.e. if this flush can end up being 
asynchronous, then that's clearly a bug which needs to be fixed.
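
(For completeness: the fully fenced variant Nicolai is asking about 
would look roughly like the following at the pipe level. This is only 
a sketch; the exact fence_finish signature differs between Mesa 
versions.)

   struct pipe_screen *screen = drv->pipe->screen;
   struct pipe_fence_handle *fence = NULL;

   /* Request a fence from the flush... */
   drv->pipe->flush(drv->pipe, &fence, 0);

   /* ...and block until the submitted work has actually completed,
    * then drop the fence reference. */
   screen->fence_finish(screen, fence, PIPE_TIMEOUT_INFINITE);
   screen->fence_reference(screen, &fence, NULL);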

Regards,
Christian.

>
> Cheers,
> Nicolai
>
>
>>
>> Regards,
>> Christian.
>>
>>>
>>> Cheers,
>>> Nicolai
>>>
>>>> Regards,
>>>> Christian.
>>>>
>>>>>
>>>>> Cheers,
>>>>> Nicolai
>>>>>
>>>>>> ---
>>>>>>  src/gallium/state_trackers/va/postproc.c | 1 +
>>>>>>  1 file changed, 1 insertion(+)
>>>>>>
>>>>>> diff --git a/src/gallium/state_trackers/va/postproc.c b/src/gallium/state_trackers/va/postproc.c
>>>>>> index d06f016..01e240f 100644
>>>>>> --- a/src/gallium/state_trackers/va/postproc.c
>>>>>> +++ b/src/gallium/state_trackers/va/postproc.c
>>>>>> @@ -80,6 +80,7 @@ vlVaPostProcCompositor(vlVaDriver *drv, vlVaContext *context,
>>>>>>     vl_compositor_set_layer_dst_area(&drv->cstate, 0, &dst_rect);
>>>>>>     vl_compositor_render(&drv->cstate, &drv->compositor, surfaces[0], NULL, false);
>>>>>>
>>>>>> +   drv->pipe->flush(drv->pipe, NULL, 0);
>>>>>>     return VA_STATUS_SUCCESS;
>>>>>>  }
>>>>>>
>>>>>>
>>>>
>>


