Re: Re: [PATCH 2/2] drm/amdgpu: save/restore uvd fence sequence number in suspend/resume
Christian König
christian.koenig at amd.com
Thu Dec 14 14:02:27 UTC 2017
Hi Jim,
ah yes, we dropped that because it shouldn't be necessary any more on
amdgpu. The fences are written into a GTT BO and the content of that
should be preserved even over a suspend & resume cycle.
But there is an issue with that: looking at
amdgpu_fence_driver_start_ring(), we still have the hack for ancient UVD
versions that puts the fence directly behind the UVD firmware instead of
into the GTT BO.
Can you check whether it still works with that disabled? The workaround
is most likely no longer needed with modern firmware.
If that doesn't work, you could try calling
amdgpu_fence_driver_force_completion() in the UVD resume code; that
should have the same effect as your proposed patch, but with far less code.
Regards,
Christian.
On 14.12.2017 at 14:50, Qu, Jim wrote:
> Hi Christian,
>
> As I recall, amdgpu_fence_driver_start_ring() is called from amdgpu_ring_init(), so it is never called during amdgpu_device_resume().
>
> Thanks
> JimQu
> -----Original Message-----
> From: Christian König [mailto:ckoenig.leichtzumerken at gmail.com]
> Sent: December 14, 2017 20:57
> To: Qu, Jim <Jim.Qu at amd.com>; amd-gfx at lists.freedesktop.org
> Subject: Re: [PATCH 2/2] drm/amdgpu: save/restore uvd fence sequence number in suspend/resume
>
> On 14.12.2017 at 12:38, Jim Qu wrote:
>> Otherwise, the UVD block will never be powered up in the ring
>> begin_use() callback, and the UVD ring test will fail on resume under runtime PM.
> NAK, that should already be done by amdgpu_fence_driver_start_ring().
>
> If this doesn't work please try to figure out why
> amdgpu_fence_driver_start_ring() isn't called during resume (Or if it is called, but not in the right order or whatever really goes wrong here).
>
> Regards,
> Christian.
>
>> Change-Id: I71b6c00bad174c90e12628e6037dc04a4ff9d9f2
>> Signed-off-by: Jim Qu <Jim.Qu at amd.com>
>> ---
>> drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c | 10 ++++++++--
>> drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h | 1 +
>> 2 files changed, 9 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>> index 343b682..a2d0b84 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>> @@ -293,6 +293,7 @@ int amdgpu_uvd_suspend(struct amdgpu_device *adev)
>>  	unsigned size;
>>  	void *ptr;
>>  	int i;
>> +	struct amdgpu_fence_driver *drv = &adev->uvd.ring.fence_drv;
>>
>>  	cancel_delayed_work_sync(&adev->uvd.idle_work);
>>
>> @@ -303,9 +304,11 @@ int amdgpu_uvd_suspend(struct amdgpu_device *adev)
>>  		if (atomic_read(&adev->uvd.handles[i]))
>>  			break;
>>
>> -	if (i == AMDGPU_MAX_UVD_HANDLES)
>> +	if (i == AMDGPU_MAX_UVD_HANDLES) {
>> +		if (drv->cpu_addr)
>> +			adev->uvd.fence_seq = le32_to_cpu(*drv->cpu_addr);
>>  		return 0;
>> -
>> +	}
>>  	size = amdgpu_bo_size(adev->uvd.vcpu_bo);
>>  	ptr = adev->uvd.cpu_addr;
>>
>> @@ -322,6 +325,7 @@ int amdgpu_uvd_resume(struct amdgpu_device *adev)
>>  {
>>  	unsigned size;
>>  	void *ptr;
>> +	struct amdgpu_fence_driver *drv = &adev->uvd.ring.fence_drv;
>>
>>  	if (adev->uvd.vcpu_bo == NULL)
>>  		return -EINVAL;
>> @@ -346,6 +350,8 @@ int amdgpu_uvd_resume(struct amdgpu_device *adev)
>>  			ptr += le32_to_cpu(hdr->ucode_size_bytes);
>>  		}
>>  		memset_io(ptr, 0, size);
>> +		if (drv->cpu_addr)
>> +			*drv->cpu_addr = cpu_to_le32(adev->uvd.fence_seq);
>>  	}
>>
>>  	return 0;
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h
>> b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h
>> index 32ea20b..88f6db9 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.h
>> @@ -55,6 +55,7 @@ struct amdgpu_uvd {
>>  	struct drm_sched_entity entity_enc;
>>  	uint32_t		srbm_soft_reset;
>>  	unsigned		num_enc_rings;
>> +	uint32_t		fence_seq;
>>  };
>>
>> int amdgpu_uvd_sw_init(struct amdgpu_device *adev);