GEM allocation for para-virtualized DRM driver

Rob Clark robdclark at gmail.com
Sat Mar 18 14:06:38 UTC 2017


On Sat, Mar 18, 2017 at 9:25 AM, Oleksandr Andrushchenko
<andr2000 at gmail.com> wrote:
> Hi, Rob
>
> On 03/18/2017 02:22 PM, Rob Clark wrote:
>>
>> On Fri, Mar 17, 2017 at 1:39 PM, Oleksandr Andrushchenko
>> <andr2000 at gmail.com> wrote:
>>>
>>> Hello,
>>> I am writing a para-virtualized DRM driver for the Xen hypervisor
>>> and it now works with the DRM CMA helpers, but I would also like
>>> to make it work with non-contiguous memory: the virtual machine
>>> that the driver runs in can't guarantee that CMA is actually
>>> physically contiguous (that is not a problem because of the IPMMU
>>> and other means; the only constraint I have is that I cannot mmap
>>> with pgprot == noncached). So, I am planning to use *drm_gem_get_pages* +
>>> *shmem_read_mapping_page_gfp* to allocate memory for GEM objects
>>> (scanout buffers + dma-bufs shared with virtual GPU)
>>>
>>> Do you think this is the right approach to take?
>>
>> I guess if you had some case where you needed to "migrate" buffers
>> between host and guest memory,
>
> Yes, this is the case, but I can "map" buffers between the host and guests

If you need to physically copy (transfer), like a discrete GPU with
vram, then TTM makes sense.  If you can map the pages directly into
the guest, then TTM is probably overkill.
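
For the drm_gem_get_pages() + shmem_read_mapping_page_gfp() path you
mention, the usual shape is roughly the below.  Just a sketch: the
"xen_gem_object" struct and "xen_gem_create" name are made up for
illustration, and most error handling is trimmed:

#include <drm/drm_gem.h>
#include <linux/slab.h>

struct xen_gem_object {
        struct drm_gem_object base;
        struct page **pages;
};

static struct xen_gem_object *xen_gem_create(struct drm_device *dev,
                                             size_t size)
{
        struct xen_gem_object *xobj;
        int ret;

        xobj = kzalloc(sizeof(*xobj), GFP_KERNEL);
        if (!xobj)
                return ERR_PTR(-ENOMEM);

        /* backs the object with shmem so drm_gem_get_pages() works */
        ret = drm_gem_object_init(dev, &xobj->base, PAGE_ALIGN(size));
        if (ret) {
                kfree(xobj);
                return ERR_PTR(ret);
        }

        /* populates the backing store via shmem_read_mapping_page*() */
        xobj->pages = drm_gem_get_pages(&xobj->base);
        if (IS_ERR(xobj->pages)) {
                ret = PTR_ERR(xobj->pages);
                drm_gem_object_release(&xobj->base);
                kfree(xobj);
                return ERR_PTR(ret);
        }

        return xobj;
}

The resulting page array can then be handed to whatever grant-table /
IPMMU mapping you need on the Xen side, and released later with
drm_gem_put_pages().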

>>   then TTM might be useful.
>
> I was looking into it, but it seems to be overkill in my case.
> And isn't it the case that GEM should be used for new drivers, not TTM?

Not really, it's just that (other than amdgpu, which uses TTM) all of
the newer drivers have been for unified-memory hardware.  A driver for
a new GPU that has vram of some sort should still use TTM.

BR,
-R

>>
>>    Otherwise
>> this sounds like the right approach.
>
> Thank you. Actually, I am playing with alloc_pages + remap_pfn_range now,
> but what DRM provides (_get_pages + shmem_read) seems to be more portable
> and generic, so I'll probably stick to it.
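
One nice side effect of sticking with the shmem-backed pages is that
the userspace mapping can stay cached, which matches your pgprot
constraint.  Something along these lines for mmap (again only a
sketch, reusing the made-up xen_gem_object from above):

#include <linux/mm.h>

/*
 * Sketch only.  If this is reached via drm_gem_mmap(), clear
 * VM_PFNMAP from vma->vm_flags and reset vma->vm_pgoff to 0 first,
 * since vm_insert_page() wants a non-PFNMAP vma.  Leaving
 * vma->vm_page_prot alone (no pgprot_noncached/writecombine) keeps
 * the mapping cached.
 */
static int xen_gem_mmap_obj(struct xen_gem_object *xobj,
                            struct vm_area_struct *vma)
{
        unsigned long addr = vma->vm_start;
        int i, ret;

        for (i = 0; addr < vma->vm_end; i++, addr += PAGE_SIZE) {
                ret = vm_insert_page(vma, addr, xobj->pages[i]);
                if (ret)
                        return ret;
        }

        return 0;
}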
>>
>> BR,
>> -R
>
> Thank you for helping,
> Oleksandr Andrushchenko

