[PATCH] drm/amdgpu/vcn: add shared memory restore after wake up from sleep

Christian König ckoenig.leichtzumerken at gmail.com
Mon Apr 6 12:40:00 UTC 2020


On 06.04.20 at 14:31, James Zhu wrote:
>
> On 2020-04-06 3:03 a.m., Christian König wrote:
>> On 03.04.20 at 17:54, James Zhu wrote:
>>>
>>> On 2020-04-03 11:37 a.m., Alex Deucher wrote:
>>>> On Fri, Apr 3, 2020 at 8:52 AM James Zhu <James.Zhu at amd.com> wrote:
>>>>> VCN shared memory needs to be restored after waking up from S3 suspend.
>>>> How big is the shared memory?  It might be better to allocate the
>>>> memory once at sw_init and then free it in sw_fini rather than
>>>> allocating and freeing in every suspend/resume.
>>>
>>> Hi Alex,
>>>
>>> After alignment, it is only 4k. I can change it as you suggest.
>>
>> Does this need to stay in the same place after a suspend/resume?
>>
>> See, we only back up the firmware manually because we otherwise can't 
>> guarantee that it will be moved back to the same place after resume.
> Yes, this is the case. The FW requires the same for its resume processing.
>> If that isn't an issue for the shared bo we could just unpin it on 
>> suspend and pin it again on resume.
>>
>> BTW: What is that used for and why can't it be part of the VCN 
>> firmware BO?
>
> Logically, it is used by the FW and driver to conveniently share some 
> settings. If you suggest it can be added into the VCN BO, then that 
> will simplify the implementation.

As long as this is only used by the kernel driver it sounds like it is 
best put into the VCN BO as well, yes.
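
[Editor's note] The single-BO idea agreed on above could look roughly like the following userspace sketch. VCN_SHARED_MEM_SIZE, vcn_bo_size() and vcn_shared_offset() are illustrative assumptions, not the actual amdgpu layout: the shared-settings page is carved out of the end of the firmware BO, so one allocation (and one suspend-time backup) covers both.

```c
#include <stddef.h>

/* Illustrative sketch only -- not the real amdgpu code. The shared
 * settings page is appended after the page-aligned firmware image
 * inside a single BO. */
#define VCN_SHARED_MEM_SIZE 4096u  /* "only 4k after alignment" */

/* Round up to a 4K page boundary so the shared region starts page-aligned. */
static size_t page_align_up(size_t x)
{
	return (x + 4095) & ~(size_t)4095;
}

/* One allocation covers the firmware image plus the shared page, so a
 * single suspend-time backup of the BO preserves both. */
static size_t vcn_bo_size(size_t fw_size)
{
	return page_align_up(fw_size) + VCN_SHARED_MEM_SIZE;
}

/* The shared settings then live at a fixed, page-aligned offset inside
 * the same BO. */
static size_t vcn_shared_offset(size_t fw_size)
{
	return page_align_up(fw_size);
}
```

With this layout the existing firmware backup/restore path would preserve the shared region for free, which is the simplification James refers to.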

Regards,
Christian.
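
[Editor's note] The save/restore pattern the patch below implements can be sketched in userspace as follows. The struct and function names (vcn_inst, shm_suspend, shm_resume) are hypothetical stand-ins, plain memcpy replaces memcpy_fromio/memcpy_toio, and malloc replaces kvmalloc, since the real code runs in the kernel against an IO-mapped BO.

```c
#include <stdlib.h>
#include <string.h>

/* Userspace stand-in for the per-instance VCN state in the patch. */
struct vcn_inst {
	unsigned char *fw_shared_cpu_addr; /* stands in for the shared BO mapping */
	size_t shared_size;                /* stands in for amdgpu_bo_size() */
	unsigned char *saved_shm_bo;       /* CPU-side backup across "suspend" */
};

/* On suspend: snapshot the shared region into system memory. */
static int shm_suspend(struct vcn_inst *inst)
{
	if (!inst->fw_shared_cpu_addr)
		return 0; /* no shared BO, nothing to save */
	inst->saved_shm_bo = malloc(inst->shared_size);
	if (!inst->saved_shm_bo)
		return -1; /* -ENOMEM in the kernel version */
	memcpy(inst->saved_shm_bo, inst->fw_shared_cpu_addr, inst->shared_size);
	return 0;
}

/* On resume: copy the snapshot back, or zero the region if none exists. */
static void shm_resume(struct vcn_inst *inst)
{
	if (inst->saved_shm_bo) {
		memcpy(inst->fw_shared_cpu_addr, inst->saved_shm_bo,
		       inst->shared_size);
		free(inst->saved_shm_bo);
		inst->saved_shm_bo = NULL;
	} else {
		memset(inst->fw_shared_cpu_addr, 0, inst->shared_size);
	}
}
```

The kernel patch follows the same shape: back up on suspend, copy back (or zero) on resume, and free the backup in both the resume and sw_fini paths.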

>
> Thanks and Best Regards!
>
> James Zhu
>
>>
>> Thanks,
>> Christian.
>>
>>>
>>> Best Regards!
>>>
>>> James
>>>
>>>>
>>>> Alex
>>>>
>>>>> Signed-off-by: James Zhu <James.Zhu at amd.com>
>>>>> ---
>>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c | 26 ++++++++++++++++++++++++++
>>>>>   drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h |  1 +
>>>>>   2 files changed, 27 insertions(+)
>>>>>
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>>>>> index d653a18..5891390 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.c
>>>>> @@ -205,6 +205,7 @@ int amdgpu_vcn_sw_fini(struct amdgpu_device *adev)
>>>>>                 if (adev->vcn.harvest_config & (1 << j))
>>>>>                         continue;
>>>>>
>>>>> +               kvfree(adev->vcn.inst[j].saved_shm_bo);
>>>>>                 amdgpu_bo_free_kernel(&adev->vcn.inst[j].fw_shared_bo,
>>>>>                                       &adev->vcn.inst[j].fw_shared_gpu_addr,
>>>>>                                       (void **)&adev->vcn.inst[j].fw_shared_cpu_addr);
>>>>> @@ -254,6 +255,18 @@ int amdgpu_vcn_suspend(struct amdgpu_device *adev)
>>>>>                         return -ENOMEM;
>>>>>
>>>>>                 memcpy_fromio(adev->vcn.inst[i].saved_bo, ptr, size);
>>>>> +
>>>>> +               if (adev->vcn.inst[i].fw_shared_bo == NULL)
>>>>> +                       return 0;
>>>>> +
>>>>> +               size = amdgpu_bo_size(adev->vcn.inst[i].fw_shared_bo);
>>>>> +               ptr = adev->vcn.inst[i].fw_shared_cpu_addr;
>>>>> +
>>>>> +               adev->vcn.inst[i].saved_shm_bo = kvmalloc(size, GFP_KERNEL);
>>>>> +               if (!adev->vcn.inst[i].saved_shm_bo)
>>>>> +                       return -ENOMEM;
>>>>> +
>>>>> +               memcpy_fromio(adev->vcn.inst[i].saved_shm_bo, ptr, size);
>>>>>          }
>>>>>          return 0;
>>>>>   }
>>>>> @@ -291,6 +304,19 @@ int amdgpu_vcn_resume(struct amdgpu_device *adev)
>>>>>                         }
>>>>>                         memset_io(ptr, 0, size);
>>>>>                 }
>>>>> +
>>>>> +               if (adev->vcn.inst[i].fw_shared_bo == NULL)
>>>>> +                       return -EINVAL;
>>>>> +
>>>>> +               size = amdgpu_bo_size(adev->vcn.inst[i].fw_shared_bo);
>>>>> +               ptr = adev->vcn.inst[i].fw_shared_cpu_addr;
>>>>> +
>>>>> +               if (adev->vcn.inst[i].saved_shm_bo != NULL) {
>>>>> +                       memcpy_toio(ptr, adev->vcn.inst[i].saved_shm_bo, size);
>>>>> +                       kvfree(adev->vcn.inst[i].saved_shm_bo);
>>>>> +                       adev->vcn.inst[i].saved_shm_bo = NULL;
>>>>> +               } else
>>>>> +                       memset_io(ptr, 0, size);
>>>>>          }
>>>>>          return 0;
>>>>>   }
>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
>>>>> index f739e1a..bd77dae 100644
>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vcn.h
>>>>> @@ -194,6 +194,7 @@ struct amdgpu_vcn_inst {
>>>>>          atomic_t                dpg_enc_submission_cnt;
>>>>>          void                    *fw_shared_cpu_addr;
>>>>>          uint64_t                fw_shared_gpu_addr;
>>>>> +       void                    *saved_shm_bo;
>>>>>   };
>>>>>
>>>>>   struct amdgpu_vcn {
>>>>> -- 
>>>>> 2.7.4
>>>>>
>>>>> _______________________________________________
>>>>> amd-gfx mailing list
>>>>> amd-gfx at lists.freedesktop.org
>>>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>>>>>
>>
