[PATCH 16/26] drm/amdgpu: new queue policy, take first 2 queues of each pipe

Andres Rodriguez andresx7 at gmail.com
Wed Apr 12 19:06:13 UTC 2017



On 2017-04-11 06:30 PM, Felix Kuehling wrote:
> What about GFX9?

I don't have a gfx9 card to test with yet.

When changing the queue policy for gfx7/8 I found a lot of places in the 
code that assumed amdgpu would only ever use a single pipe. There is 
probably something similar in the gfx9 code that I've missed and that 
will blow up with the queue policy change, so I don't want to send out 
this change without testing it.
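
For context, the new split boils down to the decomposition below. This is
just a rough standalone sketch; the mec/pipe/queue counts are illustrative
placeholders, not any specific ASIC's topology, and the helper name is made
up for the example.

#include <stdio.h>
#include <stdbool.h>

/* Illustrative sizes only (a real ASIC reports these via adev->gfx.mec). */
#define NUM_MEC            2
#define NUM_PIPE_PER_MEC   4
#define NUM_QUEUE_PER_PIPE 8

/* New policy: amdgpu owns the first 2 queues of each pipe in MEC 0,
 * everything else is left for amdkfd. */
static bool amdgpu_owns_queue(unsigned int i)
{
	unsigned int queue = i % NUM_QUEUE_PER_PIPE;
	unsigned int mec = (i / NUM_QUEUE_PER_PIPE) / NUM_PIPE_PER_MEC;

	return mec == 0 && queue < 2;
}

int main(void)
{
	unsigned int i;

	for (i = 0; i < NUM_MEC * NUM_PIPE_PER_MEC * NUM_QUEUE_PER_PIPE; i++) {
		unsigned int queue = i % NUM_QUEUE_PER_PIPE;
		unsigned int pipe = (i / NUM_QUEUE_PER_PIPE) % NUM_PIPE_PER_MEC;
		unsigned int mec = (i / NUM_QUEUE_PER_PIPE) / NUM_PIPE_PER_MEC;

		printf("queue %2u = mec %u pipe %u queue %u -> %s\n",
		       i, mec, pipe, queue,
		       amdgpu_owns_queue(i) ? "amdgpu" : "amdkfd");
	}

	return 0;
}

With these example sizes, amdgpu ends up with 2 queues on each of the 4
pipes of MEC 0 instead of all the queues of a single pipe, which is what
lets concurrent compute workloads spread across pipes.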

If someone wants to create a similar patch for gfx9 and test it, that 
should be simple.

Or... if you could send me a vega board... ;)

Regards,
Andres

>
> See one more comment inline [FK].
>
>
> On 17-04-06 02:21 AM, Andres Rodriguez wrote:
>> Instead of taking the first pipe and giving the rest to kfd, take the
>> first 2 queues of each pipe.
>>
>> Effectively, amdgpu and amdkfd own the same number of queues. But
>> because the queues are spread over multiple pipes the hardware will be
>> able to better handle concurrent compute workloads.
>>
>> amdgpu goes from 1 pipe to 4 pipes, i.e. from 1 compute thread to 4
>> amdkfd goes from 3 pipes to 4 pipes, i.e. from 3 compute threads to 4
>>
>> Reviewed-by: Edward O'Callaghan <funfunctor at folklore1984.net>
>> Acked-by: Christian König <christian.koenig at amd.com>
>> Signed-off-by: Andres Rodriguez <andresx7 at gmail.com>
>> ---
>>  drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c | 2 +-
>>  drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 2 +-
>>  2 files changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
>> index 5a8ebae..c0cfcb9 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
>> @@ -2833,21 +2833,21 @@ static void gfx_v7_0_compute_queue_acquire(struct amdgpu_device *adev)
>>  		pipe = (i / adev->gfx.mec.num_queue_per_pipe)
>>  			% adev->gfx.mec.num_pipe_per_mec;
>>  		mec = (i / adev->gfx.mec.num_queue_per_pipe)
>>  			/ adev->gfx.mec.num_pipe_per_mec;
>>
>>  		/* we've run out of HW */
>>  		if (mec > adev->gfx.mec.num_mec)
>>  			break;
>>
>>  		/* policy: amdgpu owns all queues in the first pipe */
>
> [FK] The comment needs to be updated (same for GFX8 below).
>
>> -		if (mec == 0 && pipe == 0)
>> +		if (mec == 0 && queue < 2)
>>  			set_bit(i, adev->gfx.mec.queue_bitmap);
>>  	}
>>
>>  	/* update the number of active compute rings */
>>  	adev->gfx.num_compute_rings =
>>  		bitmap_weight(adev->gfx.mec.queue_bitmap, AMDGPU_MAX_COMPUTE_QUEUES);
>>
>>  	/* If you hit this case and edited the policy, you probably just
>>  	 * need to increase AMDGPU_MAX_COMPUTE_RINGS */
>>  	WARN_ON(adev->gfx.num_compute_rings > AMDGPU_MAX_COMPUTE_RINGS);
>> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
>> index 70119c5..f0c1a3f 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
>> @@ -1453,21 +1453,21 @@ static void gfx_v8_0_compute_queue_acquire(struct amdgpu_device *adev)
>>  		pipe = (i / adev->gfx.mec.num_queue_per_pipe)
>>  			% adev->gfx.mec.num_pipe_per_mec;
>>  		mec = (i / adev->gfx.mec.num_queue_per_pipe)
>>  			/ adev->gfx.mec.num_pipe_per_mec;
>>
>>  		/* we've run out of HW */
>>  		if (mec > adev->gfx.mec.num_mec)
>>  			break;
>>
>>  		/* policy: amdgpu owns all queues in the first pipe */
>> -		if (mec == 0 && pipe == 0)
>> +		if (mec == 0 && queue < 2)
>>  			set_bit(i, adev->gfx.mec.queue_bitmap);
>>  	}
>>
>>  	/* update the number of active compute rings */
>>  	adev->gfx.num_compute_rings =
>>  		bitmap_weight(adev->gfx.mec.queue_bitmap, AMDGPU_MAX_COMPUTE_QUEUES);
>>
>>  	/* If you hit this case and edited the policy, you probably just
>>  	 * need to increase AMDGPU_MAX_COMPUTE_RINGS */
>>  	if (WARN_ON(adev->gfx.num_compute_rings > AMDGPU_MAX_COMPUTE_RINGS))
>

