[PATCH v20 09/10] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
Thomas Zimmermann
tzimmermann at suse.de
Fri Apr 4 08:01:50 UTC 2025
Hi
On 03.04.25 at 10:50, Boris Brezillon wrote:
> On Thu, 3 Apr 2025 09:20:00 +0200
> Thomas Zimmermann <tzimmermann at suse.de> wrote:
>
>> Hi
>>
>> On 02.04.25 at 15:21, Boris Brezillon wrote:
>>> On Wed, 2 Apr 2025 15:58:55 +0300
>>> Dmitry Osipenko <dmitry.osipenko at collabora.com> wrote:
>>>
>>>> On 4/2/25 15:47, Thomas Zimmermann wrote:
>>>>> Hi
>>>>>
>>>>>> On 22.03.25 at 22:26, Dmitry Osipenko wrote:
>>>>>> The vmapped pages shall be pinned in memory, and previously
>>>>>> get/put_pages() implicitly hard-pinned/unpinned the pages. This
>>>>>> will no longer be the case with the addition of the memory
>>>>>> shrinker, because pages_use_count > 0 will no longer determine
>>>>>> whether pages are hard-pinned (they will be soft-pinned), while
>>>>>> the new pages_pin_count will do the hard-pinning. Switch
>>>>>> vmap/vunmap() to use the pin/unpin() functions in preparation for
>>>>>> adding memory shrinker support to drm-shmem.
>>>>> I've meanwhile rediscovered this patch and I'm sure this is not correct.
>>>>> Vmap should not pin AFAIK. It is possible to vmap if the buffer has been
>>>>> pinned, but that's not automatic. For other vmaps it is necessary to
>>>>> hold the reservation lock to prevent the buffer from moving.
>>> Hm, is this problematic though? If you want to vmap() inside a section
>>> that's protected by the resv lock, you can
>>>
>>> - drm_gem_shmem_vmap_locked()
>>> - do whatever you need to do with the vaddr,
>>> - drm_gem_shmem_vunmap_locked()
>>>
>>> and the {pin,page_use}_count will be back to their original values.
>>> Those are just ref counters, and I doubt the overhead of
>>> incrementing/decrementing them makes a difference compared to the heavy
>>> page-allocation/vmap operations...
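For illustration, a minimal sketch of that pattern, using the function
names from this series; it assumes the vmap helper takes and drops the
internal page/pin references itself, and exact signatures may differ:

#include <linux/dma-resv.h>
#include <linux/iosys-map.h>
#include <drm/drm_gem_shmem_helper.h>

static int fill_bo_cpu(struct drm_gem_shmem_object *shmem, int value)
{
        struct iosys_map map;
        int ret;

        ret = dma_resv_lock(shmem->base.resv, NULL);
        if (ret)
                return ret;

        /* Takes the internal page and pin references. */
        ret = drm_gem_shmem_vmap_locked(shmem, &map);
        if (ret)
                goto out_unlock;

        /* Do whatever needs the CPU mapping. */
        iosys_map_memset(&map, 0, value, shmem->base.size);

        /* Drops the references; the counters return to their old values. */
        drm_gem_shmem_vunmap_locked(shmem, &map);

out_unlock:
        dma_resv_unlock(shmem->base.resv);
        return ret;
}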
>> I once tried to add pin as part of vmap, so that pages stay in place.
>> Christian was very clear about not doing this. I found this made a lot
>> of sense: vmap means "make the memory available to the CPU". The memory
>> location doesn't matter much here. Pin means something like "make the
>> memory available to the GPU". But which GPU depends on the caller: calls
>> via GEM refer to the local GPU, calls via dma-buf usually refer to the
>> importer's GPU. That GPU uncertainty makes pin problematic already.
> Okay, so it looks more like a naming issue then. The intent here is to
It's certainly possible to see this as a naming problem.
> make sure the page array doesn't disappear while we have a kernel
> mapping active (address returned by vmap()). The reason we went from
> pages_count to pages_use_count+pin_count is that we have two kinds of
> references in drm_gem_shmem:
>
> - weak references (tracked with pages_use_count). Those are
>   usually held by GPU VMs, and they are weak in the sense that they
>   shouldn't prevent the shrinker from reclaiming the pages if the GPU
>   VM is idle. The other user of weak references is userspace mappings
>   of GEM objects (mmap()), because we can repopulate those from our
>   fault handler.
> - hard references (tracked with pin_count), which are used to prevent
>   the shrinker from even considering the GEM object as reclaimable.
>   Kernel mappings clearly fall into that category, because otherwise
>   we could reclaim pages that might later be dereferenced by the CPU.
>   It's also used to implement drm_gem_pin(), because it's the same
>   mechanism really, hence the name.
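As a simplified sketch, the scheme described above boils down to two
counters; the field names follow this series, and the real struct
drm_gem_shmem_object carries more state:

#include <linux/refcount.h>

/* Simplified; the real struct drm_gem_shmem_object has more members. */
struct shmem_page_refs {
        /*
         * Weak references: held by GPU VM mappings and userspace mmap().
         * They don't stop the shrinker from reclaiming the pages; missing
         * pages are brought back on demand (e.g. by the fault handler).
         */
        refcount_t pages_use_count;

        /*
         * Hard references: kernel vmap()s, drm_gem_pin() and anything else
         * that dereferences the pages directly. While this is non-zero the
         * shrinker must not consider the GEM object reclaimable.
         */
        refcount_t pages_pin_count;
};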
Yeah, this should be renamed IMHO. Pin is a TTM operation that fixes
buffers at certain memory locations. Drivers do this internally. It has
nothing to do with gem-shmem.
There's also a pin operation in GEM BOs' drm_gem_object_funcs, but it is
only called for PRIME-exported buffers and not for general use. For
gem-shmem, the callback would be implemented on top of the hard references.
And there's also a pin in dma_buf_ops. The term 'pin' is somewhat
overloaded already.
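A rough sketch of how such a gem-shmem callback could sit on top of the
hard references; the helper names (drm_gem_shmem_pin_locked()/
drm_gem_shmem_unpin_locked()) and the assumption that callers of the
callbacks hold the reservation lock follow this series and may differ
upstream:

#include <drm/drm_gem.h>
#include <drm/drm_gem_shmem_helper.h>

static int my_shmem_object_pin(struct drm_gem_object *obj)
{
        struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

        /* Only takes the hard reference; nothing is moved or mapped. */
        return drm_gem_shmem_pin_locked(shmem);
}

static void my_shmem_object_unpin(struct drm_gem_object *obj)
{
        struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

        drm_gem_shmem_unpin_locked(shmem);
}

static const struct drm_gem_object_funcs my_shmem_funcs = {
        .pin = my_shmem_object_pin,
        .unpin = my_shmem_object_unpin,
        /* .vmap, .mmap, ... */
};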
>
>> In your case, vmap and pin both intend to hold the shmem pages in
>> memory. They might be built on top of the same implementation, but one
>> should not be implemented with the other because of their different
>> meanings.
> But that's not what we do, is it? Sure, in drm_gem_shmem_vmap_locked(),
> we call drm_gem_shmem_pin_locked(), but that's an internal function to
> make sure the pages are allocated and stay around until
> drm_gem_shmem_vunmap_locked() is called.
>
> I guess we could rename pin_count into hard_refcount or
> page_residency_count or xxx_count, and change the pin/unpin_locked()
> function names accordingly, but that's just a naming detail, it doesn't
> force you to call drm_gem_pin() to vmap() your GEM, it's something we
> do internally.
Such a rename would be much appreciated. page_residency_count seems
appropriate.
>
>> More generally speaking, I've meanwhile come to the conclusion that pin
>> should not even exist in the GEM interface. It's an internal operation
>> of TTM and reveals too much about what happens within the
>> implementation. Instead GEM should be free to move buffers around.
> Well, yes and no. There are situations where you simply can't move
> things around if there are active users, and vmap() is one of those
> AFAICT.
Sure. What I mean here is that pin/unpin is something of an
implementation detail. IMHO the pin/unpin callbacks in
drm_gem_object_funcs should get different names, such as pin_exported.
They are not for general use.
>
>> Dma-buf importers should only tell exporters to make buffers available
>> to them, but not how to do this. AFAIK that's what dma-buf's
>> attach/detach is for.
> And that's what they do, no? attach() tells the exporter to give the
> importer a way to access those buffers, and given the exporter has no
> clue about when/how the importer will access those, there's no other way
> but to pin the pages. Am I missing something here?
Yeah, that's what they do.
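As a rough sketch of that exporter side (the real DRM PRIME helpers,
drm_gem_map_attach()/drm_gem_map_detach(), additionally handle locking
and dynamic importers):

#include <linux/dma-buf.h>
#include <drm/drm_gem.h>

static int example_dmabuf_attach(struct dma_buf *dmabuf,
                                 struct dma_buf_attachment *attach)
{
        struct drm_gem_object *obj = dmabuf->priv;

        /*
         * The exporter has no idea when or how the importer will access
         * the buffer, so keep the pages resident until detach.
         */
        return obj->funcs->pin ? obj->funcs->pin(obj) : 0;
}

static void example_dmabuf_detach(struct dma_buf *dmabuf,
                                  struct dma_buf_attachment *attach)
{
        struct drm_gem_object *obj = dmabuf->priv;

        if (obj->funcs->unpin)
                obj->funcs->unpin(obj);
}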
Best regards
Thomas
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Frankenstrasse 146, 90461 Nuernberg, Germany
GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
HRB 36809 (AG Nuernberg)