[PATCH 01/12] drm/amdgpu: implement vm_operations_struct.access

Michel Dänzer michel at daenzer.net
Thu Jul 13 02:52:29 UTC 2017


On 13/07/17 02:27 AM, Felix Kuehling wrote:
> On 17-07-12 04:01 AM, Michel Dänzer wrote:
>> On 12/07/17 02:37 PM, Felix Kuehling wrote:
>>> Any comments on this one?
>>>
>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>>> index 15148f1..3f927c2 100644
>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>>>> @@ -1237,6 +1237,134 @@ void amdgpu_ttm_set_active_vram_size(struct amdgpu_device *adev, u64 size)
>>>>  	man->size = size >> PAGE_SHIFT;
>>>>  }
>>>>  
>>>> +static struct vm_operations_struct amdgpu_ttm_vm_ops;
>>>> +static const struct vm_operations_struct *ttm_vm_ops /* = NULL;
>>>> +						      * (appease checkpatch) */;
>> How does this appease checkpatch?
> 
> Checkpatch doesn't like explicit initialization of global variables to
> 0. Uninitialized data is automatically 0, so no point wasting space in
> the initialized data segment.

That is true for static variables, so I suspect that's why checkpatch
explicitly complains about initializing those to 0. Just remove the
comment (and the initialization).


>>>> +static int amdgpu_ttm_bo_access_kmap(struct amdgpu_bo *abo,
>>>> +				     unsigned long offset,
>>>> +				     void *buf, int len, int write)
>>>> +{
>>>> +	struct ttm_buffer_object *bo = &abo->tbo;
>>>> +	struct ttm_bo_kmap_obj map;
>>>> +	void *ptr;
>>>> +	bool is_iomem;
>>>> +	int r;
>>>> +
>>>> +	r = ttm_bo_kmap(bo, 0, bo->num_pages, &map);
>>>> +	if (r)
>>>> +		return r;
>>>> +	ptr = (uint8_t *)ttm_kmap_obj_virtual(&map, &is_iomem) + offset;
>>>> +	WARN_ON(is_iomem);
>>>> +	if (write)
>>>> +		memcpy(ptr, buf, len);
>>>> +	else
>>>> +		memcpy(buf, ptr, len);
>>>> +	ttm_bo_kunmap(&map);
>>>> +
>>>> +	return len;
>>>> +}
>> This could be in TTM, right? As a helper function and/or a generic
>> vm_operations_struct::access hook (at least for GTT/CPU domains).
> 
> I guess. If I were to get TTM involved, I'd also have TTM install its
> own access hook in the vm_ops, so the driver doesn't need to override
> it. Then I'd add driver-specific callbacks in ttm_bo_driver for
> accessing VRAM

So far, I was thinking basically the same thing.


> and GTT/CPU memory. TTM could offer something like
> amdgpu_ttm_bo_access_kmap as a helper for use by all drivers.

But this doesn't seem necessary, since CPU access to BOs in the GTT/CPU
domains should work the same independently of the driver.


-- 
Earthling Michel Dänzer               |               http://www.amd.com
Libre software enthusiast             |             Mesa and X developer

