[PATCH 2/3] drm/amdgpu: refresh per vm bo lru

Chunming Zhou zhoucm1 at amd.com
Thu Mar 29 12:46:46 UTC 2018



On 2018/3/29 16:59, Christian König wrote:
> On 29.03.2018 10:37, zhoucm1 wrote:
>>
>>
>>
>> On 2018/03/28 16:13, zhoucm1 wrote:
>>>
>>>
>>>
>>> On 2018/03/27 21:44, Christian König wrote:
>>>
>>>> How about we update the LRU only when we need to re-validate at 
>>>> least one BO?
>>> I tried this just now; performance still isn't stable, it sometimes 
>>> randomly drops to 28fps.
>
> Can you give me the code for that? I probably can't work this week on 
> that, but I can take a look next week.
I just sent it via git send-email to your AMD mail.

>
>>>
>>> I also tried checking num_evictions and updating the LRU only when an 
>>> eviction happens; that also sometimes randomly drops to 28fps.
>>>
>>> When BOs change, we need to keep not only the LRU order but also the 
>>> validation order in the vm->evicted list. Any other ideas that can keep 
>>> these orders without increasing the submission overhead?
>>
>> After more thought, I think we need a new LRU design for per-VM BOs; we 
>> have to guarantee the order when adding to the LRU. How about the idea below:
>> 0. Separate the traditional BO list LRU and the per-vm-bo LRU. The 
>> traditional LRU keeps the old way; the per-vm LRU follows the design below.
>> 1. The TTM bdev maintains a vm/process list.
>> 2. Every vm_list node contains its own per-vm-bo LRU[priority].
>> 3. To manage the vm_list LRU in a specific driver, we will need to add a 
>> callback for it.
>> 4. We will add an order for every per-vm-bo in that vm/process.
>> 5. To speed up sorting the per-vm LRU, we will introduce an RB tree for it 
>> in the callback. The RB tree key is the order.
>>
>> This way, we will be able to keep the per-vm-bo LRU order.
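>>
>> Roughly, the new structures could look something like the sketch below 
>> (rough and untested; all names are only illustrative, none of this exists 
>> in TTM today):
>>
>> #include <linux/list.h>
>> #include <linux/rbtree.h>
>> #include <linux/types.h>
>>
>> #define VM_LRU_MAX_PRIORITY 4 /* illustrative, would mirror TTM's priority count */
>>
>> /* one node per vm/process, linked into a list maintained by the TTM bdev */
>> struct vm_lru_node {
>>         struct list_head bdev_entry;                /* entry in the bdev's vm/process list */
>>         struct list_head lru[VM_LRU_MAX_PRIORITY];  /* per-vm-bo LRU, one list per priority */
>>         struct rb_root   order_tree;                /* per-vm-bos sorted by their order */
>> };
>>
>> /* one entry per per-vm-bo */
>> struct vm_lru_entry {
>>         struct list_head lru_entry;   /* entry in vm_lru_node::lru[priority] */
>>         struct rb_node   order_node;  /* entry in vm_lru_node::order_tree */
>>         u64              order;       /* fixed order assigned when the BO is added to the VM */
>> };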
>>
>> What do you think of it?
>
> No, we need a single LRU for both per-VM and non-per-VM BOs to maintain 
> eviction fairness, so we don't really win anything with that.
Following the original LRU design, a BO should be moved to the LRU tail 
whenever it is used, so that the most recently used BO ends up at the tail.
All per-VM BOs are used for every command submission, so after every CS we 
should refresh the LRU; that is what the original LRU design requires, but, 
as your NAK points out, doing so adds a lot of CPU overhead per CS. The two 
requirements are inconsistent.

For the per-VM case, if we don't want to introduce extra overhead, the 
per-vm-bo order should be fixed in the LRU so we can avoid refreshing the 
LRU on every CS. So my thinking for the LRU is:
VM1-BO1---->BO2--->BO3--->BOn--->VM2-BO1--->BO2--->BO3--->BOn--->VM3-BO...
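
With such a fixed order, a CS would not need to touch every BO; it would only 
need to move a VM's whole contiguous block to the LRU tail. A rough sketch of 
such an O(1) block move, just plain list_head manipulation and not an existing 
TTM helper:

#include <linux/list.h>

/*
 * Move the contiguous block [first, last] of one VM's BOs to the tail of
 * the global LRU in O(1) without changing the order inside the block.
 * Purely illustrative; the real TTM LRU also has priorities and locking.
 */
static void vm_lru_bulk_move_tail(struct list_head *lru,
                                  struct list_head *first,
                                  struct list_head *last)
{
        /* unlink the whole block from its current position */
        first->prev->next = last->next;
        last->next->prev = first->prev;

        /* splice it back in right before the list head, i.e. at the LRU tail */
        lru->prev->next = first;
        first->prev = lru->prev;
        last->next = lru;
        lru->prev = last;
}

That way the per-vm-bo order stays untouched and the cost per CS no longer 
scales with the number of per-VM BOs.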

Regards,
David
>
> Regards,
> Christian.
>
>>
>> Regards,
>> David Zhou
>>>
>>> Regards,
>>> David Zhou
>>>>
>>>> BTW: We can easily walk all BOs which belong to a VM; skipping over 
>>>> the few which aren't per-VM BOs should be trivial.
>>>>
>>>> Christian.
>>>>
>>>>> On 27.03.2018 13:56, Zhou, David(ChunMing) wrote:
>>>>> Then how do we keep a unique LRU order? Any ideas?
>>>>>
>>>>> To stabilize performance, we have to keep a unique LRU order; otherwise, 
>>>>> as in the issue I am looking into, the F1 game is sometimes at 40fps and 
>>>>> sometimes at 28fps... even when re-validating BOs to their allowed domains.
>>>>>
>>>>> The remaining root cause is that the moved BOs are not the same.
>>>>>
>>>>> Sent from Smartisan Pro
>>>>>
>>>>> On 2018/3/27 6:50 PM, Christian König <ckoenig.leichtzumerken at gmail.com> 
>>>>> wrote:
>>>>>
>>>>> NAK, we already tried that and it is really not a good idea 
>>>>> because it
>>>>> massively increases the per submission overhead.
>>>>>
>>>>> Christian.
>>>>>
>>>>> On 27.03.2018 12:16, Chunming Zhou wrote:
>>>>> > Change-Id: Ibad84ed585b0746867a5f4cd1eadc2273e7cf596
>>>>> > Signed-off-by: Chunming Zhou <david1.zhou at amd.com>
>>>>> > ---
>>>>> >   drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c |  2 ++
>>>>> >   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 15 +++++++++++++++
>>>>> >   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h |  1 +
>>>>> >   3 files changed, 18 insertions(+)
>>>>> >
>>>>> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
>>>>> > index 383bf2d31c92..414e61799236 100644
>>>>> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
>>>>> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
>>>>> > @@ -919,6 +919,8 @@ static int amdgpu_bo_vm_update_pte(struct amdgpu_cs_parser *p)
>>>>> >                }
>>>>> >        }
>>>>> >
>>>>> > +     amdgpu_vm_refresh_lru(adev, vm);
>>>>> > +
>>>>> >        return r;
>>>>> >   }
>>>>> >
>>>>> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>> > index 5e35e23511cf..8ad2bb705765 100644
>>>>> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>>>>> > @@ -1902,6 +1902,21 @@ struct amdgpu_bo_va *amdgpu_vm_bo_add(struct amdgpu_device *adev,
>>>>> >        return bo_va;
>>>>> >   }
>>>>> >
>>>>> > +void amdgpu_vm_refresh_lru(struct amdgpu_device *adev, struct amdgpu_vm *vm)
>>>>> > +{
>>>>> > +     struct ttm_bo_global *glob = adev->mman.bdev.glob;
>>>>> > +     struct amdgpu_vm_bo_base *bo_base;
>>>>> > +
>>>>> > +     spin_lock(&vm->status_lock);
>>>>> > +     list_for_each_entry(bo_base, &vm->vm_bo_list, vm_bo) {
>>>>> > +             spin_lock(&glob->lru_lock);
>>>>> > +             ttm_bo_move_to_lru_tail(&bo_base->bo->tbo);
>>>>> > +             if (bo_base->bo->shadow)
>>>>> > +                     ttm_bo_move_to_lru_tail(&bo_base->bo->shadow->tbo);
>>>>> > +             spin_unlock(&glob->lru_lock);
>>>>> > +     }
>>>>> > +     spin_unlock(&vm->status_lock);
>>>>> > +}
>>>>> >
>>>>> >   /**
>>>>> >    * amdgpu_vm_bo_insert_mapping - insert a new mapping
>>>>> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>>>> > index 1886a561c84e..e01895581489 100644
>>>>> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>>>> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
>>>>> > @@ -285,6 +285,7 @@ int amdgpu_vm_clear_freed(struct amdgpu_device *adev,
>>>>> >                          struct dma_fence **fence);
>>>>> >   int amdgpu_vm_handle_moved(struct amdgpu_device *adev,
>>>>> >                           struct amdgpu_vm *vm);
>>>>> > +void amdgpu_vm_refresh_lru(struct amdgpu_device *adev, struct amdgpu_vm *vm);
>>>>> >   int amdgpu_vm_bo_update(struct amdgpu_device *adev,
>>>>> >                        struct amdgpu_bo_va *bo_va,
>>>>> >                        bool clear);
>>>>>
>>>>
>>>
>>
>
