[PATCH 2/3] drm/amdgpu: don't use pipe1 of gfx10

Liu, Monk Monk.Liu at amd.com
Mon Mar 2 09:34:39 UTC 2020


Hi Hawking

I didn't see Tianci's patch merged to drm-next; has it passed our review yet?

Please ignore my patch then.
_____________________________________
Monk Liu | GPU Virtualization Team | AMD


-----Original Message-----
From: Zhang, Hawking <Hawking.Zhang at amd.com> 
Sent: Monday, March 2, 2020 5:30 PM
To: Liu, Monk <Monk.Liu at amd.com>; amd-gfx at lists.freedesktop.org
Cc: Liu, Monk <Monk.Liu at amd.com>
Subject: RE: [PATCH 2/3] drm/amdgpu: don't use pipe1 of gfx10

[AMD Official Use Only - Internal Distribution Only]

This has already been done by Tianci.

Regards,
Hawking

-----Original Message-----
From: amd-gfx <amd-gfx-bounces at lists.freedesktop.org> On Behalf Of Monk Liu
Sent: Monday, March 2, 2020 17:22
To: amd-gfx at lists.freedesktop.org
Cc: Liu, Monk <Monk.Liu at amd.com>
Subject: [PATCH 2/3] drm/amdgpu: don't use pipe1 of gfx10

what:
we found that the GPU sometimes fails to go IDLE after the VF guest finishes the IB test on GFX ring 1 (pipe 1)

why:
below is what the CP team (Manu) stated:
GFX pipe 1 is present in the hardware, but as an optimization all the driver teams decided not to use pipe 1 at all; otherwise the driver would have to sacrifice a context, and all 7 contexts would no longer be available to GFX pipe 0. That is why the state setup for GFX pipe 1 is skipped, as agreed by all the driver teams.

fix:
since the CP team won't help us debug any issues related to GFX pipe 1, and given the reason above, skip GFX ring 1 (pipe 1) on both bare-metal and SRIOV
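
note:
with num_pipe_per_me reduced to 1, the ring-creation loop in gfx_v10_0_sw_init() never instantiates a second gfx ring, so the pipe-1 paths guarded below become unreachable by construction. A minimal sketch of that loop shape (illustrative only; the exact upstream code may differ):

	/* one gfx ring is created per (me, pipe, queue) tuple, so with
	 * num_pipe_per_me == 1 only the pipe-0 ring exists
	 */
	ring_id = 0;
	for (i = 0; i < adev->gfx.me.num_me; i++) {
		for (j = 0; j < adev->gfx.me.num_queue_per_pipe; j++) {
			for (k = 0; k < adev->gfx.me.num_pipe_per_me; k++) {
				r = gfx_v10_0_gfx_ring_init(adev, ring_id,
							    i, k, j);
				if (r)
					return r;
				ring_id++;
			}
		}
	}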

Signed-off-by: Monk Liu <Monk.Liu at amd.com>
---
 drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 29 ++++++++++++++++++-----------
 1 file changed, 18 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
index 0555989..afae4cc 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
@@ -1308,7 +1308,7 @@ static int gfx_v10_0_sw_init(void *handle)
 	case CHIP_NAVI14:
 	case CHIP_NAVI12:
 		adev->gfx.me.num_me = 1;
-		adev->gfx.me.num_pipe_per_me = 2;
+		adev->gfx.me.num_pipe_per_me = 1;
 		adev->gfx.me.num_queue_per_pipe = 1;
 		adev->gfx.mec.num_mec = 2;
 		adev->gfx.mec.num_pipe_per_mec = 4;
@@ -2713,18 +2713,21 @@ static int gfx_v10_0_cp_gfx_start(struct amdgpu_device *adev)
 
 	amdgpu_ring_commit(ring);
 
-	/* submit cs packet to copy state 0 to next available state */
-	ring = &adev->gfx.gfx_ring[1];
-	r = amdgpu_ring_alloc(ring, 2);
-	if (r) {
-		DRM_ERROR("amdgpu: cp failed to lock ring (%d).\n", r);
-		return r;
-	}
+	if (adev->gfx.me.num_pipe_per_me == 2) {
+		/* submit cs packet to copy state 0 to next available state */
+		ring = &adev->gfx.gfx_ring[1];
 
-	amdgpu_ring_write(ring, PACKET3(PACKET3_CLEAR_STATE, 0));
-	amdgpu_ring_write(ring, 0);
+		r = amdgpu_ring_alloc(ring, 2);
+		if (r) {
+			DRM_ERROR("amdgpu: cp failed to lock ring (%d).\n", r);
+			return r;
+		}
 
-	amdgpu_ring_commit(ring);
+		amdgpu_ring_write(ring, PACKET3(PACKET3_CLEAR_STATE, 0));
+		amdgpu_ring_write(ring, 0);
+
+		amdgpu_ring_commit(ring);
+	}
 
 	return 0;
 }
@@ -2822,6 +2825,9 @@ static int gfx_v10_0_cp_gfx_resume(struct amdgpu_device *adev)
 	mutex_unlock(&adev->srbm_mutex);
 
 	/* Init gfx ring 1 for pipe 1 */
+	if (adev->gfx.me.num_pipe_per_me == 1)
+		goto do_start;
+
 	mutex_lock(&adev->srbm_mutex);
 	gfx_v10_0_cp_gfx_switch_pipe(adev, PIPE_ID1);
 	ring = &adev->gfx.gfx_ring[1];
@@ -2860,6 +2866,7 @@ static int gfx_v10_0_cp_gfx_resume(struct amdgpu_device *adev)
 	gfx_v10_0_cp_gfx_switch_pipe(adev, PIPE_ID0);
 	mutex_unlock(&adev->srbm_mutex);
 
+do_start:
 	/* start the ring */
 	gfx_v10_0_cp_gfx_start(adev);
 
--
2.7.4
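
A side note on the resume path: the goto in gfx_v10_0_cp_gfx_resume() could equivalently be written by guarding the pipe-1 block directly, mirroring the num_pipe_per_me check used in gfx_v10_0_cp_gfx_start(). A hypothetical restructuring, not what the patch above does:

	if (adev->gfx.me.num_pipe_per_me == 2) {
		/* Init gfx ring 1 for pipe 1 */
		mutex_lock(&adev->srbm_mutex);
		gfx_v10_0_cp_gfx_switch_pipe(adev, PIPE_ID1);
		ring = &adev->gfx.gfx_ring[1];
		/* ... same ring/doorbell programming as for pipe 0 ... */
		gfx_v10_0_cp_gfx_switch_pipe(adev, PIPE_ID0);
		mutex_unlock(&adev->srbm_mutex);
	}

	/* start the ring */
	gfx_v10_0_cp_gfx_start(adev);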

_______________________________________________
amd-gfx mailing list
amd-gfx at lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx

