[PATCH xserver 1/2] modesetting: Fix reverse prime partial update issues on secondary GPU outputs
Hans de Goede
hdegoede at redhat.com
Fri Sep 16 07:18:12 UTC 2016
Hi,
On 16-09-16 04:00, Michel Dänzer wrote:
> On 16/09/16 06:50 AM, Eric Anholt wrote:
>> Hans de Goede <hdegoede at redhat.com> writes:
>>
>>> When using reverse prime we do 2 copies, 1 from the primary GPU's
>>> framebuffer to a shared pixmap and 1 from the shared pixmap to the
>>> secondary GPU's framebuffer.
>>>
>>> This means that on the primary GPU side the copy MUST be finished,
>>> before we start the second copy (before the secondary GPU's driver
>>> starts processing the damage on the shared pixmap).
>>>
>>> This fixes secondary outputs sometimes showing (some) old fb contents,
>>> because of the 2 copies racing with each other, for an example of
>>> what this looks like see:
>>
>> Is this working around the fact that the primary and secondary aren't
>> cooperating on dmabuf fencing? Should they be doing that instead?
>>
>> Or would glamor_flush be sufficient?
>
> Yes, glamor_flush is sufficient if the kernel drivers handle fences
> correctly.
I will admit that I'm not familiar with all the intricacies involved
here, but I do not see how glamor_flush would be sufficient.
We must guarantee that the first copy is complete before the second
copy is started. Taking fencing into account, I think this turns into
making sure that the first copy has started, because once it has
started, the GPU doing the first copy owns the buffer until the copy
is completed.
But AFAIK a flush does not guarantee that the copy has started, only
that it will start real soon now.
Also, if you look at the screenshot I posted:
https://fedorapeople.org/~jwrdegoede/IMG_20160915_130555.jpg
you can clearly see that the two copies are racing, so it seems that
fencing is not working as it should here.
So all in all I believe it is best to stick with finish() here. Yes,
I realize that finish is frowned upon, but do note that this only
comes into play when secondary GPU outputs are used.
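To be clear about the scope, what I have in mind is along these lines
(again a sketch with a made-up helper, not the exact code from the
patch), so normal single-GPU rendering never pays for the finish:

#include <GL/gl.h>
#include <stdbool.h>

static void
shared_pixmap_damage_notify(bool pixmap_is_shared)
{
    if (pixmap_is_shared) {
        /*
         * Make sure copy 1 (primary GPU framebuffer -> shared pixmap) has
         * fully completed before the secondary GPU driver starts copy 2.
         * glFinish() is heavy-handed, but this path is only hit when
         * secondary GPU outputs are in use.
         */
        glFinish();
    }

    /* ... hand the damage on the shared pixmap to the secondary GPU
     * driver here ... */
}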
Regards,
Hans