[PATCH 14/22] drm/amdgpu: new queue policy, take first 2 queues of each pipe

Andres Rodriguez andresx7 at gmail.com
Tue Mar 7 23:51:06 UTC 2017


Instead of taking the first pipe and giving the rest to kfd, take the
first 2 queues of each pipe.

Effectively, amdgpu and amdkfd own the same number of queues. But
because the queues are spread over multiple pipes the hardware will be
able to better handle concurrent compute workloads.

amdgpu goes from 1 pipe to 4 pipes, i.e. from 1 compute thread to 4.
amdkfd goes from 3 pipes to 4 pipes, i.e. from 3 compute threads to 4.

Reviewed-by: Edward O'Callaghan <funfunctor at folklore1984.net>
Acked-by: Christian König <christian.koenig at amd.com>
Signed-off-by: Andres Rodriguez <andresx7 at gmail.com>
---
 drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c | 4 ++--
 drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
index 3ca5519..b0b0c89 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
@@ -2811,21 +2811,21 @@ static void gfx_v7_0_compute_queue_acquire(struct amdgpu_device *adev)
 		pipe = (i / adev->gfx.mec.num_queue_per_pipe)
 			% adev->gfx.mec.num_pipe_per_mec;
 		mec = (i / adev->gfx.mec.num_queue_per_pipe)
 			/ adev->gfx.mec.num_pipe_per_mec;
 
 		/* we've run out of HW */
 		if (mec > adev->gfx.mec.num_mec)
 			break;
 
-		/* policy: amdgpu owns all queues in the first pipe */
-		if (mec == 0 && pipe == 0)
+		/* policy: amdgpu owns the first two queues of each pipe */
+		if (mec == 0 && queue < 2)
 			set_bit(i, adev->gfx.mec.queue_bitmap);
 	}
 
 	/* update the number of active compute rings */
 	adev->gfx.num_compute_rings =
 		bitmap_weight(adev->gfx.mec.queue_bitmap, AMDGPU_MAX_QUEUES);
 
 	/* If you hit this case and edited the policy, you probably just
 	 * need to increase AMDGPU_MAX_COMPUTE_RINGS */
 	WARN_ON(adev->gfx.num_compute_rings > AMDGPU_MAX_COMPUTE_RINGS);
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
index edddd86..5db5bac 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
@@ -1429,21 +1429,21 @@ static void gfx_v8_0_compute_queue_acquire(struct amdgpu_device *adev)
 		pipe = (i / adev->gfx.mec.num_queue_per_pipe)
 			% adev->gfx.mec.num_pipe_per_mec;
 		mec = (i / adev->gfx.mec.num_queue_per_pipe)
 			/ adev->gfx.mec.num_pipe_per_mec;
 
 		/* we've run out of HW */
 		if (mec > adev->gfx.mec.num_mec)
 			break;
 
-		/* policy: amdgpu owns all queues in the first pipe */
-		if (mec == 0 && pipe == 0)
+		/* policy: amdgpu owns the first two queues of each pipe */
+		if (mec == 0 && queue < 2)
 			set_bit(i, adev->gfx.mec.queue_bitmap);
 	}
 
 	/* update the number of active compute rings */
 	adev->gfx.num_compute_rings =
 		bitmap_weight(adev->gfx.mec.queue_bitmap, AMDGPU_MAX_QUEUES);
 
 	/* If you hit this case and edited the policy, you probably just
 	 * need to increase AMDGPU_MAX_COMPUTE_RINGS */
 	if (WARN_ON(adev->gfx.num_compute_rings > AMDGPU_MAX_COMPUTE_RINGS))
-- 
2.9.3
