[PATCH 01/10] drm/amdgpu: pin the csb buffer on hw init
Quan, Evan
Evan.Quan at amd.com
Fri Jul 6 09:10:24 UTC 2018
Comment inline
> -----Original Message-----
> From: Alex Deucher [mailto:alexdeucher at gmail.com]
> Sent: Thursday, July 05, 2018 11:25 PM
> To: Huang, Ray <Ray.Huang at amd.com>
> Cc: Quan, Evan <Evan.Quan at amd.com>; amd-gfx list <amd-
> gfx at lists.freedesktop.org>
> Subject: Re: [PATCH 01/10] drm/amdgpu: pin the csb buffer on hw init
>
> On Thu, Jul 5, 2018 at 11:25 AM, Huang Rui <ray.huang at amd.com> wrote:
> > On Thu, Jul 05, 2018 at 05:09:26PM +0800, Evan Quan wrote:
> >> Without this pin, the csb buffer will be filled with inconsistent
> >> data after S3 resume. That will cause a gfx hang on gfxoff exit,
> >> since this csb will be executed then.
> >>
> >> Change-Id: I1ae1f2eed096eaba5f601cf2a3e2650c8e583dc9
> >> Signed-off-by: Evan Quan <evan.quan at amd.com>
> >
> > It would be nice to have a comment above the csb_vram_pin function
> > to explain why we need to "pin" here during the resume phase.
> >
> > Reviewed-by: Huang Rui <ray.huang at amd.com>
>
> Do the save restore buffers in gfx7 and 8 need a similar fix?
>
> Alex
It seems so. Testing on polaris11 shows the csb is corrupted after resume from S3.
I will send another patch to fix them.
> >
> >> ---
> >> drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 40 ++++++++++++++++++++++++++
> >> 1 file changed, 40 insertions(+)
> >>
> >> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> >> b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> >> index ac46eabe3bcd..65cc30766658 100644
> >> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> >> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
> >> @@ -943,6 +943,7 @@ static int gfx_v9_0_rlc_init(struct amdgpu_device *adev)
> >> dst_ptr = adev->gfx.rlc.cs_ptr;
> >> gfx_v9_0_get_csb_buffer(adev, dst_ptr);
> >> amdgpu_bo_kunmap(adev->gfx.rlc.clear_state_obj);
> >> + amdgpu_bo_unpin(adev->gfx.rlc.clear_state_obj);
> >> amdgpu_bo_unreserve(adev->gfx.rlc.clear_state_obj);
> >> }
> >>
> >> @@ -971,6 +972,39 @@ static int gfx_v9_0_rlc_init(struct amdgpu_device *adev)
> >> return 0;
> >> }
> >>
> >> +static int gfx_v9_0_csb_vram_pin(struct amdgpu_device *adev)
> >> +{
> >> + uint64_t gpu_addr;
> >> + int r;
> >> +
> >> + r = amdgpu_bo_reserve(adev->gfx.rlc.clear_state_obj, false);
> >> + if (unlikely(r != 0))
> >> + return r;
> >> +
> >> + r = amdgpu_bo_pin(adev->gfx.rlc.clear_state_obj,
> >> + AMDGPU_GEM_DOMAIN_VRAM, &gpu_addr);
> >> + if (!r)
> >> + adev->gfx.rlc.clear_state_gpu_addr = gpu_addr;
> >> +
> >> + amdgpu_bo_unreserve(adev->gfx.rlc.clear_state_obj);
> >> +
> >> + return r;
> >> +}
> >> +
> >> +static void gfx_v9_0_csb_vram_unpin(struct amdgpu_device *adev)
> >> +{
> >> + int r;
> >> +
> >> + if (!adev->gfx.rlc.clear_state_obj)
> >> + return;
> >> +
> >> + r = amdgpu_bo_reserve(adev->gfx.rlc.clear_state_obj, true);
> >> + if (likely(r == 0)) {
> >> + amdgpu_bo_unpin(adev->gfx.rlc.clear_state_obj);
> >> + amdgpu_bo_unreserve(adev->gfx.rlc.clear_state_obj);
> >> + }
> >> +}
> >> +
> >> static void gfx_v9_0_mec_fini(struct amdgpu_device *adev)
> >> {
> >> amdgpu_bo_free_kernel(&adev->gfx.mec.hpd_eop_obj, NULL, NULL);
> >> @@ -3116,6 +3150,10 @@ static int gfx_v9_0_hw_init(void *handle)
> >>
> >> gfx_v9_0_gpu_init(adev);
> >>
> >> + r = gfx_v9_0_csb_vram_pin(adev);
> >> + if (r)
> >> + return r;
> >> +
> >> r = gfx_v9_0_rlc_resume(adev);
> >> if (r)
> >> return r;
> >> @@ -3224,6 +3262,8 @@ static int gfx_v9_0_hw_fini(void *handle)
> >> gfx_v9_0_cp_enable(adev, false);
> >> gfx_v9_0_rlc_stop(adev);
> >>
> >> + gfx_v9_0_csb_vram_unpin(adev);
> >> +
> >> return 0;
> >> }
> >>
> >> --
> >> 2.18.0
> >>
> >> _______________________________________________
> >> amd-gfx mailing list
> >> amd-gfx at lists.freedesktop.org
> >> https://lists.freedesktop.org/mailman/listinfo/amd-gfx