drm_gem_get_pages and proper flushing/coherency

Oleksandr Andrushchenko andr2000 at gmail.com
Mon Dec 3 09:43:38 UTC 2018


On 11/26/18 2:15 PM, Oleksandr Andrushchenko wrote:
> Hello, all!
>
> My driver (Xen para-virtualized frontend) in some scenarios uses
> drm_gem_get_pages to allocate backing storage for dumb buffers.
> There are use-cases (modetest, others) which showed artifacts on
> the screen; these were worked around by flushing the buffer's pages
> on page flip with drm_clflush_pages. But the problem here is that
> drm_clflush_pages is not available on ARM platforms (it is a NOP
> there), and flushing on every page flip seems suboptimal anyway.
>
> Other drivers that use drm_gem_get_pages seem to DMA map/unmap the
> shmem-backed buffer (which is where drm_gem_get_pages allocates the
> pages from), and that is the obvious approach there, as the buffer
> needs to be shared with real HW for DMA - please correct me if my
> understanding here is wrong.
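
For context, the drm_clflush_pages workaround mentioned above amounts
to something like this (a minimal sketch; the helper name and its
parameters are illustrative, not my driver's actual code):

#include <drm/drm_cache.h>

/*
 * Minimal sketch of the workaround: flush the CPU caches for all
 * pages backing the framebuffer on every page flip. 'pages' would
 * come from drm_gem_get_pages(). drm_clflush_pages() flushes on x86
 * but is a NOP on ARM, and a full-buffer flush per flip is costly
 * even where it works.
 */
static void flush_fb_pages_on_flip(struct page **pages,
                                   unsigned long num_pages)
{
        drm_clflush_pages(pages, num_pages);
}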

I have created a patch which implements DMA mapping [1], and it does
solve the artifacts problem for me.

Is this the right way to go?
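
For reference, the core of the approach in [1] boils down to roughly
the following (a simplified sketch, not the patch itself; the
xen_gem_object wrapper, its field names and the error handling are
illustrative):

#include <linux/dma-mapping.h>
#include <linux/err.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <drm/drm_gem.h>
#include <drm/drm_prime.h>

struct xen_gem_object {
        struct drm_gem_object base;
        struct page **pages;    /* from drm_gem_get_pages() */
        struct sg_table *sgt;
};

/*
 * Map the shmem-backed pages for DMA once, at GEM object creation.
 * On ARM, dma_map_sg() performs the cache maintenance that
 * drm_clflush_pages() cannot, so no per-flip flushing is needed.
 */
static int xen_gem_map_dma(struct xen_gem_object *xen_obj)
{
        struct drm_gem_object *gem = &xen_obj->base;
        int ret;

        xen_obj->pages = drm_gem_get_pages(gem);
        if (IS_ERR(xen_obj->pages))
                return PTR_ERR(xen_obj->pages);

        xen_obj->sgt = drm_prime_pages_to_sg(xen_obj->pages,
                                             gem->size >> PAGE_SHIFT);
        if (IS_ERR(xen_obj->sgt)) {
                ret = PTR_ERR(xen_obj->sgt);
                goto fail_put_pages;
        }

        if (!dma_map_sg(gem->dev->dev, xen_obj->sgt->sgl,
                        xen_obj->sgt->nents, DMA_BIDIRECTIONAL)) {
                ret = -EFAULT;
                goto fail_free_sgt;
        }

        return 0;

fail_free_sgt:
        sg_free_table(xen_obj->sgt);
        kfree(xen_obj->sgt);
fail_put_pages:
        drm_gem_put_pages(gem, xen_obj->pages, false, false);
        return ret;
}

/* Undo the mapping on GEM object destruction. */
static void xen_gem_unmap_dma(struct xen_gem_object *xen_obj)
{
        struct drm_gem_object *gem = &xen_obj->base;

        dma_unmap_sg(gem->dev->dev, xen_obj->sgt->sgl,
                     xen_obj->sgt->nents, DMA_BIDIRECTIONAL);
        sg_free_table(xen_obj->sgt);
        kfree(xen_obj->sgt);
        drm_gem_put_pages(gem, xen_obj->pages, true, false);
}

As noted below, the backend is not really a DMA device, so the DMA
API here is mainly a vehicle for the cache maintenance it implies.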

>
> This is the part I missed in my implementation, as I don't really
> have a HW device which needs DMA, but a backend running in a
> different Xen domain.
>
> Thus, as the buffer is backed with cacheable pages, the backend may
> see artifacts on its side.
>
> I am looking for advice on the best option to make sure dumb
> buffers are not flushed on every page flip while the memory still
> remains coherent for the backend. I have implemented DMA map/unmap
> of the shmem pages on GEM object creation/destruction, and this
> does solve the problem, but as the backend is not really a DMA
> device, this is a bit misleading.
>
> Is there any other, more suitable or preferable way to achieve the
> same?
>
> Thank you,
> Oleksandr
>
Thank you,
Oleksandr

[1] https://patchwork.freedesktop.org/series/53069/
