[PATCH 4/7] drm/radeon: Pin buffers while they are vmap'ed
Christian König
christian.koenig at amd.com
Tue Nov 24 11:54:29 UTC 2020
On 24.11.20 at 12:44, Thomas Zimmermann wrote:
> Hi
>
> On 24.11.20 at 12:30, Christian König wrote:
>> On 24.11.20 at 10:16, Thomas Zimmermann wrote:
>>> Hi Christian
>>>
>>> On 16.11.20 at 12:28, Christian König wrote:
>>>> On 13.11.20 at 08:59, Thomas Zimmermann wrote:
>>>>> Hi Christian
>>>>>
>>>>> On 12.11.20 at 18:16, Christian König wrote:
>>>>>> On 12.11.20 at 14:21, Thomas Zimmermann wrote:
>>>>>>> In order to avoid eviction of vmap'ed buffers, pin them in their
>>>>>>> GEM object's vmap implementation. Unpin them in the vunmap
>>>>>>> implementation. This is needed to make generic fbdev support work
>>>>>>> reliably. Without it, the buffer object could be evicted while
>>>>>>> fbdev flushes its shadow buffer.
>>>>>>>
>>>>>>> In contrast to the PRIME pin/unpin functions, the vmap code does
>>>>>>> not modify the BO's prime_shared_count, so a vmap-pinned BO does
>>>>>>> not count as shared.
>>>>>>>
>>>>>>> The actual pin location is not important as the vmap call returns
>>>>>>> information on how to access the buffer. Callers that require a
>>>>>>> specific location should explicitly pin the BO before vmapping it.
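>>>>>>>
>>>>>>> For example (only a sketch; bo, obj and map are placeholders), a
>>>>>>> caller that needs the BO in VRAM would pin it explicitly before
>>>>>>> vmapping:
>>>>>>>
>>>>>>>   radeon_bo_reserve(bo, false);
>>>>>>>   radeon_bo_pin(bo, RADEON_GEM_DOMAIN_VRAM, NULL); /* explicit VRAM pin */
>>>>>>>   radeon_bo_unreserve(bo);
>>>>>>>   ret = obj->funcs->vmap(obj, &map); /* vmap's own pin merely nests */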
>>>>>> Well, is the buffer supposed to be scanned out?
>>>>> No, not by the fbdev helper.
>>>>
>>>> Ok in this case that should work.
>>>>
>>>>>> If yes then the pin location is actually rather important since the
>>>>>> hardware can only scan out from VRAM.
>>>>> For relocatable BOs, fbdev uses a shadow buffer that makes any
>>>>> relocation transparent to userspace. It flushes the shadow fb into
>>>>> the BO's memory whenever there are updates. The code is in
>>>>> drm_fb_helper_dirty_work(). [1] During the flush operation, the vmap
>>>>> call now pins the BO at its current location. The actual location
>>>>> does not matter; the BO is vunmap'ed immediately afterwards.
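>>>>>
>>>>> Schematically the flush path looks like this (not the exact helper
>>>>> names; shadow, offset and len are placeholders, and the real code
>>>>> also handles I/O memory mappings):
>>>>>
>>>>>   struct dma_buf_map map;
>>>>>
>>>>>   if (obj->funcs->vmap(obj, &map)) /* pins the BO at its current location */
>>>>>       return;
>>>>>   memcpy(map.vaddr + offset, shadow + offset, len); /* flush dirty region */
>>>>>   obj->funcs->vunmap(obj, &map); /* drops the mapping and unpins again */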
>>>>
>>>> The problem is what happens when the BO is prepared for scanout but
>>>> can't be moved to VRAM because it is vmapped.
>>>>
>>>> As long as the shadow is never scanned out, that isn't a problem, but
>>>> we need to keep it in mind.
>>>>
>>>
>>> I'd like to ask for your suggestions before sending an update for this
>>> patch.
>>>
>>> After the discussion about locking in fbdev, [1] I intended to
>>> replace the pin call with code that acquires the reservation lock.
>>
>> Yeah, that sounds like a good idea to me as well.
>>
>>> First I wanted to put this into drm_gem_ttm_vmap/vunmap(), but then
>>> wondered why ttm_bo_vmap() does not acquire the lock internally. I'd
>>> expect vmap and vunmap to be close together and not to overlap for
>>> the same BO.
>>
>> We have use cases like the following during command submission:
>>
>> 1. lock
>> 2. map
>> 3. copy parts of the BO content somewhere else or patch it with
>> additional information
>> 4. unmap
>> 5. submit BO to the hardware
>> 6. add hardware fence to the BO to make sure it doesn't move
>> 7. unlock
>>
>> That use case won't be possible with vmap/vunmap if we move the
>> lock/unlock into them, and I hope to replace the kmap/kunmap functions
>> with vmap/vunmap in the near term.
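>>
>> Concretely, the sequence looks roughly like this (a simplified sketch,
>> error handling omitted; patch_bo_content() and submit_to_hw() are
>> placeholders, map is a struct dma_buf_map and fence the job's fence):
>>
>>   dma_resv_lock(bo->base.resv, NULL);            /* 1. lock */
>>   ttm_bo_vmap(bo, &map);                         /* 2. map */
>>   patch_bo_content(map.vaddr);                   /* 3. copy/patch the content */
>>   ttm_bo_vunmap(bo, &map);                       /* 4. unmap */
>>   submit_to_hw(bo);                              /* 5. submit to the hardware */
>>   dma_resv_add_excl_fence(bo->base.resv, fence); /* 6. fence the BO */
>>   dma_resv_unlock(bo->base.resv);                /* 7. unlock */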
>>
>>> Otherwise, acquiring the reservation lock would require another
>>> ref-counting variable or per-driver code.
>>
>> Huh, why that? Just put this into the drm_gem_ttm_vmap/vunmap() helpers
>> as you initially planned.
>
> Given your example above, step one would acquire the lock, and step
> two would also acquire the lock as part of the vmap implementation.
> Wouldn't this fail (at least during the unmap or unlock steps)?
Oh, so you want to nest them? No, that is a rather bad no-go.
You need to make sure that the lock is only taken from the FB path which
wants to vmap the object.
Why don't you lock the GEM object from the caller in the generic FB
implementation?
Regards,
Christian.
>
> Best regards
> Thomas
>
>>
>> Regards,
>> Christian.
>>
>>>
>>> Best regards
>>> Thomas
>>>
>>> [1] https://patchwork.freedesktop.org/patch/401088/?series=83918&rev=1
>>>
>>>> Regards,
>>>> Christian.
>>>>
>>>>>
>>>>> For dma-buf sharing, the regular procedure of pin + vmap still
>>>>> applies. This should always move the BO into GTT-managed memory.
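>>>>>
>>>>> From memory, the PRIME .pin callback does roughly this, which is why
>>>>> the BO ends up in GTT (a sketch, not the verbatim code):
>>>>>
>>>>>   radeon_bo_reserve(bo, false);
>>>>>   ret = radeon_bo_pin(bo, RADEON_GEM_DOMAIN_GTT, NULL); /* pin into GTT */
>>>>>   if (!ret)
>>>>>       bo->prime_shared_count++; /* mark the BO as shared */
>>>>>   radeon_bo_unreserve(bo);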
>>>>>
>>>>> Best regards
>>>>> Thomas
>>>>>
>>>>> [1]
>>>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/gpu/drm/drm_fb_helper.c#n432
>>>>>
>>>>>
>>>>>> Regards,
>>>>>> Christian.
>>>>>>
>>>>>>> Signed-off-by: Thomas Zimmermann <tzimmermann at suse.de>
>>>>>>> ---
>>>>>>> drivers/gpu/drm/radeon/radeon_gem.c | 51 +++++++++++++++++++++++++++--
>>>>>>> 1 file changed, 49 insertions(+), 2 deletions(-)
>>>>>>>
>>>>>>> diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
>>>>>>> index d2876ce3bc9e..eaf7fc9a7b07 100644
>>>>>>> --- a/drivers/gpu/drm/radeon/radeon_gem.c
>>>>>>> +++ b/drivers/gpu/drm/radeon/radeon_gem.c
>>>>>>> @@ -226,6 +226,53 @@ static int radeon_gem_handle_lockup(struct radeon_device *rdev, int r)
>>>>>>> return r;
>>>>>>> }
>>>>>>> +static int radeon_gem_object_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>>>>>>> +{
>>>>>>> + static const uint32_t any_domain = RADEON_GEM_DOMAIN_VRAM |
>>>>>>> +                                    RADEON_GEM_DOMAIN_GTT |
>>>>>>> +                                    RADEON_GEM_DOMAIN_CPU;
>>>>>>> +
>>>>>>> + struct radeon_bo *bo = gem_to_radeon_bo(obj);
>>>>>>> + int ret;
>>>>>>> +
>>>>>>> + ret = radeon_bo_reserve(bo, false);
>>>>>>> + if (ret)
>>>>>>> + return ret;
>>>>>>> +
>>>>>>> + /* pin buffer at its current location */
>>>>>>> + ret = radeon_bo_pin(bo, any_domain, NULL);
>>>>>>> + if (ret)
>>>>>>> + goto err_radeon_bo_unreserve;
>>>>>>> +
>>>>>>> + ret = drm_gem_ttm_vmap(obj, map);
>>>>>>> + if (ret)
>>>>>>> + goto err_radeon_bo_unpin;
>>>>>>> +
>>>>>>> + radeon_bo_unreserve(bo);
>>>>>>> +
>>>>>>> + return 0;
>>>>>>> +
>>>>>>> +err_radeon_bo_unpin:
>>>>>>> + radeon_bo_unpin(bo);
>>>>>>> +err_radeon_bo_unreserve:
>>>>>>> + radeon_bo_unreserve(bo);
>>>>>>> + return ret;
>>>>>>> +}
>>>>>>> +
>>>>>>> +static void radeon_gem_object_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>>>>>>> +{
>>>>>>> + struct radeon_bo *bo = gem_to_radeon_bo(obj);
>>>>>>> + int ret;
>>>>>>> +
>>>>>>> + ret = radeon_bo_reserve(bo, false);
>>>>>>> + if (ret)
>>>>>>> + return;
>>>>>>> +
>>>>>>> + drm_gem_ttm_vunmap(obj, map);
>>>>>>> + radeon_bo_unpin(bo);
>>>>>>> + radeon_bo_unreserve(bo);
>>>>>>> +}
>>>>>>> +
>>>>>>> static const struct drm_gem_object_funcs radeon_gem_object_funcs = {
>>>>>>> .free = radeon_gem_object_free,
>>>>>>> .open = radeon_gem_object_open,
>>>>>>> @@ -234,8 +281,8 @@ static const struct drm_gem_object_funcs radeon_gem_object_funcs = {
>>>>>>> .pin = radeon_gem_prime_pin,
>>>>>>> .unpin = radeon_gem_prime_unpin,
>>>>>>> .get_sg_table = radeon_gem_prime_get_sg_table,
>>>>>>> - .vmap = drm_gem_ttm_vmap,
>>>>>>> - .vunmap = drm_gem_ttm_vunmap,
>>>>>>> + .vmap = radeon_gem_object_vmap,
>>>>>>> + .vunmap = radeon_gem_object_vunmap,
>>>>>>> };
>>>>>>> /*
>>>
>>
>