[PATCH 8/8] drm/ttm: do not keep GPU dependent addresses

Nirmoy nirmodas at amd.com
Tue Feb 18 18:30:56 UTC 2020


On 2/18/20 7:16 PM, Thomas Zimmermann wrote:
> Hi
>
> Am 18.02.20 um 18:13 schrieb Nirmoy:
>> On 2/18/20 1:44 PM, Christian König wrote:
>>> Am 18.02.20 um 13:40 schrieb Thomas Zimmermann:
>>>> Hi
>>>>
>>>> Am 17.02.20 um 16:04 schrieb Nirmoy Das:
>>>>> GPU address handling is device-specific and should be handled by its
>>>>> device driver.
>>>>>
>>>>> Signed-off-by: Nirmoy Das <nirmoy.das at amd.com>
>>>>> ---
>>>>>    drivers/gpu/drm/ttm/ttm_bo.c    | 7 -------
>>>>>    include/drm/ttm/ttm_bo_api.h    | 2 --
>>>>>    include/drm/ttm/ttm_bo_driver.h | 1 -
>>>>>    3 files changed, 10 deletions(-)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo.c
>>>>> b/drivers/gpu/drm/ttm/ttm_bo.c
>>>>> index 151edfd8de77..d5885cd609a3 100644
>>>>> --- a/drivers/gpu/drm/ttm/ttm_bo.c
>>>>> +++ b/drivers/gpu/drm/ttm/ttm_bo.c
>>>>> @@ -85,7 +85,6 @@ static void ttm_mem_type_debug(struct
>>>>> ttm_bo_device *bdev, struct drm_printer *p
>>>>>        drm_printf(p, "    has_type: %d\n", man->has_type);
>>>>>        drm_printf(p, "    use_type: %d\n", man->use_type);
>>>>>        drm_printf(p, "    flags: 0x%08X\n", man->flags);
>>>>> -    drm_printf(p, "    gpu_offset: 0x%08llX\n", man->gpu_offset);
>>>>>        drm_printf(p, "    size: %llu\n", man->size);
>>>>>        drm_printf(p, "    available_caching: 0x%08X\n",
>>>>> man->available_caching);
>>>>>        drm_printf(p, "    default_caching: 0x%08X\n",
>>>>> man->default_caching);
>>>>> @@ -345,12 +344,6 @@ static int ttm_bo_handle_move_mem(struct
>>>>> ttm_buffer_object *bo,
>>>>>    moved:
>>>>>        bo->evicted = false;
>>>>>    -    if (bo->mem.mm_node)
>>>>> -        bo->offset = (bo->mem.start << PAGE_SHIFT) +
>>>>> -            bdev->man[bo->mem.mem_type].gpu_offset;
>>>>> -    else
>>>>> -        bo->offset = 0;
>>>>> -
>>>> After moving this into users, the else branch has been lost. Is
>>>> 'bo->mem.mm_node' always true?
>>> At least for the amdgpu and radeon use cases, yes.
>>>
>>> But that is a rather good question. I mean, it is illegal to get the
>>> GPU BO address if it is inaccessible (e.g. in the system domain).
>>>
>>> It could be that some driver relied on the behavior of getting 0 for
>>> the system domain here.
>> I wonder how to verify that?
>>
>> If I understand correctly:
>>
>> 1. qxl uses bo->offset only in qxl_bo_physical_address(), which is not
>> used in the system domain.
>>
>> 2. Unfortunately I can't say the same for bochs, but it works with this
>> patch series, so I think bochs is fine as well.
>>
>> 3. vmwgfx uses bo->offset only when bo->mem.mem_type == TTM_PL_VRAM, so
>> vmwgfx should be fine.
>>
>> 4. amdgpu and radeon run with 'bo->mem.mm_node' always true.
>>
>> I am not sure about nouveau, as bo->offset is used in many places.
>>
>> I could probably mirror the removed logic in nouveau as follows:
> I suggest introducing a ttm helper that contains the original branching
> and using it everywhere. Something like:
>
>    s64 ttm_bo_offset(struct ttm_buffer_object *bo)
>    {
>       if (WARN_ON_ONCE(!bo->mem.mm_node))
>           return 0;
>       return bo->mem.start << PAGE_SHIFT;
>    }
>
> It could be static inline. The warning should point out broken drivers.
> This also gets rid of the ugly shift in the drivers.
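A driver would then consume such a helper as something like the line below
(just a sketch; the aperture lookup is a made-up name, since each driver
keeps that base in its own structures):

    /* illustrative only: the driver adds its own aperture base */
    gpu_addr = ttm_bo_offset(bo) + mydrv_aperture_base(mdev, bo->mem.mem_type);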

The idea here, though, is to remove this GPU memory offset calculation and
reference from TTM completely and let the drivers decide how to handle it.
Drivers are full of ugly shifts anyway, imho :)
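
To make that concrete, the driver-side handling essentially mirrors the
removed TTM code. A minimal sketch (the "mydrv" names are made up for
illustration and are not code from this series) would be to update the
offset from the driver's move path:

    /* Sketch only: the driver keeps the GPU offset in its own BO struct
     * and adds its own aperture base instead of man->gpu_offset. */
    static void mydrv_bo_update_offset(struct mydrv_bo *mbo)
    {
            struct ttm_buffer_object *bo = &mbo->tbo;

            if (bo->mem.mm_node)
                    mbo->offset = (bo->mem.start << PAGE_SHIFT) +
                                  mydrv_aperture_base(mbo->mdev,
                                                      bo->mem.mem_type);
            else
                    mbo->offset = 0;
    }

The nouveau hunk quoted below does exactly that, just without an aperture
base.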

Regards,

Nirmoy

>
> Best regards
> Thomas
>
>
>> diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c
>> b/drivers/gpu/drm/nouveau/nouveau_bo.c
>> index f8015e0318d7..5a6a2af91318 100644
>> --- a/drivers/gpu/drm/nouveau/nouveau_bo.c
>> +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
>> @@ -1317,6 +1317,10 @@ nouveau_bo_move_ntfy(struct ttm_buffer_object
>> *bo, bool evict,
>>                  list_for_each_entry(vma, &nvbo->vma_list, head) {
>>                          nouveau_vma_map(vma, mem);
>>                  }
>> +               if (bo->mem.mm_node)
>> +                       nvbo->offset = (new_reg->start << PAGE_SHIFT);
>> +               else
>> +                       nvbo->offset = 0;
>>          } else {
>>                  list_for_each_entry(vma, &nvbo->vma_list, head) {
>>                          WARN_ON(ttm_bo_wait(bo, false, false));
>>
>> Regards,
>>
>> Nirmoy
>>
>>
>>> Regards,
>>> Christian.
>>>
>>>> Best regards
>>>> Thomas
>>>>
>>>>>        ctx->bytes_moved += bo->num_pages << PAGE_SHIFT;
>>>>>        return 0;
>>>>>    diff --git a/include/drm/ttm/ttm_bo_api.h
>>>>> b/include/drm/ttm/ttm_bo_api.h
>>>>> index b9bc1b00142e..d6f39ee5bf5d 100644
>>>>> --- a/include/drm/ttm/ttm_bo_api.h
>>>>> +++ b/include/drm/ttm/ttm_bo_api.h
>>>>> @@ -213,8 +213,6 @@ struct ttm_buffer_object {
>>>>>         * either of these locks held.
>>>>>         */
>>>>>    -    uint64_t offset; /* GPU address space is independent of CPU
>>>>> word size */
>>>>> -
>>>>>        struct sg_table *sg;
>>>>>    };
>>>>>    diff --git a/include/drm/ttm/ttm_bo_driver.h
>>>>> b/include/drm/ttm/ttm_bo_driver.h
>>>>> index c9e0fd09f4b2..c8ce6c181abe 100644
>>>>> --- a/include/drm/ttm/ttm_bo_driver.h
>>>>> +++ b/include/drm/ttm/ttm_bo_driver.h
>>>>> @@ -177,7 +177,6 @@ struct ttm_mem_type_manager {
>>>>>        bool has_type;
>>>>>        bool use_type;
>>>>>        uint32_t flags;
>>>>> -    uint64_t gpu_offset; /* GPU address space is independent of CPU
>>>>> word size */
>>>>>        uint64_t size;
>>>>>        uint32_t available_caching;
>>>>>        uint32_t default_caching;
>>>>>
>> _______________________________________________
>> dri-devel mailing list
>> dri-devel at lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/dri-devel

