[PATCH v3 05/12] drm/ttm: Expose ttm_tt_unpopulate for driver use

Christian König christian.koenig at amd.com
Tue Nov 24 07:41:36 UTC 2020


Am 23.11.20 um 22:08 schrieb Andrey Grodzovsky:
>
> On 11/23/20 3:41 PM, Christian König wrote:
>> Am 23.11.20 um 21:38 schrieb Andrey Grodzovsky:
>>>
>>> On 11/23/20 3:20 PM, Christian König wrote:
>>>> Am 23.11.20 um 21:05 schrieb Andrey Grodzovsky:
>>>>>
>>>>> On 11/25/20 5:42 AM, Christian König wrote:
>>>>>> Am 21.11.20 um 06:21 schrieb Andrey Grodzovsky:
>>>>>>> It's needed to drop iommu backed pages on device unplug
>>>>>>> before device's IOMMU group is released.
>>>>>>
>>>>>> It would be cleaner if we could do the whole handling in TTM. I 
>>>>>> also need to double check what you are doing with this function.
>>>>>>
>>>>>> Christian.
>>>>>
>>>>>
>>>>> Check patch "drm/amdgpu: Register IOMMU topology notifier per 
>>>>> device." to see how I use it. I don't see why this should go 
>>>>> into the TTM mid-layer - the stuff I do inside is vendor 
>>>>> specific and also I don't think TTM is explicitly aware of 
>>>>> IOMMU? Do you mean you prefer the IOMMU notifier to be 
>>>>> registered from within TTM and then use a hook to call into a 
>>>>> vendor specific handler ?
>>>>
>>>> No, that is really vendor specific.
>>>>
>>>> What I meant is to have a function like 
>>>> ttm_resource_manager_evict_all() which you only need to call and 
>>>> all tt objects are unpopulated.
>>>
>>>
>>> So instead of the BO list I create and later iterate in amdgpu 
>>> from the IOMMU patch, you just want to do it within TTM with a 
>>> single function ? That makes much more sense.
>>
>> Yes, exactly.
>>
>> The list_empty() checks we have in TTM for the LRU are actually not 
>> the best idea, we should now check the pin_count instead. This way we 
>> could also have a list of the pinned BOs in TTM.
>
>
> So from my IOMMU topology handler I will iterate the TTM LRU for 
> the unpinned BOs and use this new function for the pinned ones ?
> It's probably a good idea to combine both iterations into this new 
> function to cover all the BOs allocated on the device.

Yes, that's what I had in my mind as well.

>
>
>>
>> BTW: Have you thought about what happens when we unpopulate a BO 
>> while we still try to use a kernel mapping for it? That could have 
>> unforeseen consequences.
>
>
> Are you asking what happens to kmap or vmap style mapped CPU 
> accesses once we drop all the DMA backing pages for a particular 
> BO ? Because for user mappings (mmap) we took care of this with 
> the dummy page reroute, but indeed nothing was done for in-kernel 
> CPU mappings.

Yes exactly that.

In other words, what happens if we free the ring buffer while the 
kernel is still writing to it?

Christian.

>
> Andrey
>
>
>>
>> Christian.
>>
>>>
>>> Andrey
>>>
>>>
>>>>
>>>> Give me a day or two to look into this.
>>>>
>>>> Christian.
>>>>
>>>>>
>>>>> Andrey
>>>>>
>>>>>
>>>>>>
>>>>>>>
>>>>>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky at amd.com>
>>>>>>> ---
>>>>>>>   drivers/gpu/drm/ttm/ttm_tt.c | 1 +
>>>>>>>   1 file changed, 1 insertion(+)
>>>>>>>
>>>>>>> diff --git a/drivers/gpu/drm/ttm/ttm_tt.c 
>>>>>>> b/drivers/gpu/drm/ttm/ttm_tt.c
>>>>>>> index 1ccf1ef..29248a5 100644
>>>>>>> --- a/drivers/gpu/drm/ttm/ttm_tt.c
>>>>>>> +++ b/drivers/gpu/drm/ttm/ttm_tt.c
>>>>>>> @@ -495,3 +495,4 @@ void ttm_tt_unpopulate(struct ttm_tt *ttm)
>>>>>>>       else
>>>>>>>           ttm_pool_unpopulate(ttm);
>>>>>>>   }
>>>>>>> +EXPORT_SYMBOL(ttm_tt_unpopulate);
>>>>>>
>>>>> _______________________________________________
>>>>> amd-gfx mailing list
>>>>> amd-gfx at lists.freedesktop.org
>>>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>>>>>
>>>>
>>



More information about the amd-gfx mailing list