drm: Why shmem?

Noralf Trønnes noralf at tronnes.org
Fri Sep 15 14:38:03 UTC 2017


Den 15.09.2017 02.45, skrev Eric Anholt:
> Noralf Trønnes <noralf at tronnes.org> writes:
>
>> Den 30.08.2017 09.40, skrev Daniel Vetter:
>>> On Tue, Aug 29, 2017 at 10:40:04AM -0700, Eric Anholt wrote:
>>>> Daniel Vetter <daniel at ffwll.ch> writes:
>>>>
>>>>> On Mon, Aug 28, 2017 at 8:44 PM, Noralf Trønnes <noralf at tronnes.org> wrote:
>>>>>> Hi,
>>>>>>
>>>>>> Currently I'm using the cma library with tinydrm because it was so
>>>>>> simple to use, even though I have to work around the fact that reads
>>>>>> are uncached. A bigger problem I have become aware of is that it
>>>>>> restricts the dma-bufs it can import, since they have to be contiguous.
>>>>>>
>>>>>> So I looked to udl and it uses shmem. Fine, let's make a shmem gem
>>>>>> library similar to the cma library.
>>>>>>
>>>>>> Now I have done so and have started to think about the DOC: section,
>>>>>> explaining what the library does. And I'm stuck, what's the benefit of
>>>>>> using shmem compared to just using alloc_page()?
>>>>> Gives you swapping (and eventually maybe even migration) since there's
>>>>> a real filesystem behind it. Atm this only works if you register a
>>>>> shrinker callback, which for display drivers is a bit overkill. See
>>>>> i915 or msm for examples (or ttm, if you want an entire fancy
>>>>> framework), and git grep shrinker -- drivers/gpu.
>>>> The shrinker is only needed if you need some impetus to unbind objects
>>>> from your page tables, right?  If you're just binding the pages for the
>>>> moment that you're doing SPI transfers to the display, then in the
>>>> remaining time it could be swapped out, right?
>>> Yup, and for SPI the setup overhead shouldn't matter. But everyone else
>>> probably wants to cache mappings and page lists, and that means some kind
>>> of shrinker to drop them when needed.
>> Let me see if I've understood this correctly:
>>
>> The first time I call drm_gem_get_pages() the buffer pages are
>> allocated and pinned.
>> When I then call drm_gem_put_pages() the pages are unpinned, but not freed.
>> The kernel is now free to swap out the pages if necessary.
>> Calling drm_gem_get_pages() a second time will swap the pages back in
>> if necessary and pin them.
>>
>> If this is correct, where are pages freed?
> drm_gem_object_release() during freeing of the object.
>
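The lifecycle described above (allocate on first get, pin/unpin, free only at
object release) can be modelled in plain userspace C. This is just an
illustrative sketch of the refcounting pattern, not the real
drm_gem_get_pages() implementation; all "toy_*" names are made up for the
example:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy model of the GEM shmem page lifecycle discussed above.
 * "pincount" stands in for the reference taken by drm_gem_get_pages();
 * while it is zero the backing store may be swapped out, but it is
 * not freed until object release. */
struct toy_bo {
	void  *pages;    /* backing storage, lives until release */
	int    pincount; /* >0: pages resident and pinned */
	size_t size;
};

static void *toy_get_pages(struct toy_bo *bo)
{
	if (!bo->pages)                 /* first get (or swapped out): populate */
		bo->pages = calloc(1, bo->size);
	bo->pincount++;                 /* pin: may not be swapped out now */
	return bo->pages;
}

static void toy_put_pages(struct toy_bo *bo)
{
	assert(bo->pincount > 0);
	bo->pincount--;                 /* unpin: swappable again, NOT freed */
}

static void toy_release(struct toy_bo *bo)
{
	free(bo->pages);                /* pages are freed only here */
	bo->pages = NULL;
}
```

In the real API, the per-transfer pattern Eric describes would be
drm_gem_get_pages() before the SPI transfer and drm_gem_put_pages() after it,
with the pages staying allocated (and swappable) in between, until
drm_gem_object_release().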

I see that you get the pages in vc5_bo_create() and put them in
vc5_free_object(). This means that you don't benefit from the shmem
"advantage" of swapping.
Why do you use shmem? Simplicity since it's built into DRM?

For me shmem has one drawback, and that is fbdev deferred IO.
It doesn't work with shmem pages, since they compete over page->lru.
That requires me to use a shadow buffer for fbdev as a workaround;
I can't use the shmem buffer directly.
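The shadow-buffer workaround amounts to letting fbdev's deferred-IO machinery
track dirtied pages in a separate plain buffer, then copying only the touched
line range into the shmem-backed framebuffer on flush. A rough userspace
illustration of that copy step (the function name and parameters are
hypothetical, not the actual tinydrm code):

```c
#include <string.h>

/* Toy illustration of the shadow-buffer copy: fbdev clients write into
 * "shadow"; on a deferred-IO flush, only the dirty line range
 * [first_line, last_line] is copied into the real framebuffer. */
static void flush_dirty_lines(void *dst_fb, const void *shadow,
			      size_t pitch, unsigned int first_line,
			      unsigned int last_line)
{
	size_t off = (size_t)first_line * pitch;
	size_t len = (size_t)(last_line - first_line + 1) * pitch;

	memcpy((char *)dst_fb + off, (const char *)shadow + off, len);
}
```

Copying per dirty line range keeps flush cost proportional to what was
actually touched, at the price of doubling the framebuffer memory.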



More information about the dri-devel mailing list