[Mesa-dev] [PATCH] radv: force compute flush at end of command stream.

Dave Airlie airlied at gmail.com
Thu Jul 27 20:46:13 UTC 2017


On 27 July 2017 at 07:37, Dave Airlie <airlied at gmail.com> wrote:
> On 27 July 2017 at 00:06, Nicolai Hähnle <nhaehnle at gmail.com> wrote:
>> On 26.07.2017 05:42, Dave Airlie wrote:
>>>
>>> From: Dave Airlie <airlied at redhat.com>
>>>
>>> This seems like a workaround, but we don't see the bug on CIK/VI.
>>>
>>> On SI with the dEQP-VK.memory.pipeline_barrier.host_read_transfer_dst.*
>>> tests, when one test completes, the first flush at the start of the next
>>> test causes a VM fault, as we've destroyed the VM; we end up flushing
>>> the compute shader then, and it must still be in the process of doing
>>> something.
>>>
>>> Could also be a kernel difference between SI and CIK.
>>
>>
>> What do you mean by "destroyed the VM"? I thought the Vulkan CTS runs in a
>> single process?
>
> It can, but I run it inside piglit. Even just running one test twice
> in a row causes the problem.
>
>>
>> I guess it's fine as a temporary workaround, but I highly suspect we have
>> some SI-specific bug related to these flushes; I've seen issues with
>> radeonsi on amdgpu as well. It would be great to understand them properly.
>>
>> What do the VM faults look like? How reproducible is this?
>
> Writes to an address that is no longer valid; the address was valid in
> the last compute shader execution in the previous process.
>
> Yes, just get an SI, build radv, run
> ./deqp-vk --deqp-case=dEQP-VK.memory.pipeline_barrier.host_write_uniform_texel_buffer.1024
> then run it again, and voilà: faults.

I should also mention I've previously seen traces from the pro driver
always doing partial cs/ps flushes at the end of every command buffer
(on all GPUs). So maybe that's where this comes from.

Dave.
