[RFC PATCH 2/2] drm/amdgpu: enable gfx wave limiting for high priority compute jobs
Nirmoy
nirmodas at amd.com
Thu Jan 28 16:54:27 UTC 2021
On 1/28/21 5:14 PM, Christian König wrote:
> Am 28.01.21 um 17:01 schrieb Nirmoy:
>>
>> On 1/28/21 4:25 PM, Christian König wrote:
>>> Am 28.01.21 um 16:21 schrieb Nirmoy:
>>>>
>>>> On 1/28/21 3:49 PM, Christian König wrote:
>>>>> Am 28.01.21 um 15:35 schrieb Nirmoy Das:
>>>>>> Enable gfx wave limiting for gfx jobs before pushing high priority
>>>>>> compute jobs so that high priority compute jobs get more resources
>>>>>> and can finish earlier.
>>>>>
>>>>> The problem here is what happens if you have multiple high
>>>>> priority jobs running at the same time?
>>>>
>>>>
>>>> AFAIU, in that case the quantum duration will come into effect. The
>>>> queue arbiter will switch to the next active high priority queue once
>>>> the quantum duration expires. This should be similar to what we
>>>> already have: multiple normal priority jobs sharing GPU resources
>>>> based on the CP_HQD_QUANTUM.QUANTUM_DURATION register value.
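>>>>
>>>> For reference, a minimal sketch of how that quantum is typically
>>>> programmed into the compute MQD (the field names follow the
>>>> CP_HQD_QUANTUM register layout; the exact init path and the values
>>>> below are illustrative assumptions, not necessarily what we ship):
>>>>
>>>>     /* Enable per-queue time slicing; once QUANTUM_DURATION expires,
>>>>      * the arbiter switches to the next active queue of the same
>>>>      * priority. Values are placeholders for illustration. */
>>>>     tmp = RREG32_SOC15(GC, 0, mmCP_HQD_QUANTUM);
>>>>     tmp = REG_SET_FIELD(tmp, CP_HQD_QUANTUM, QUANTUM_EN, 1);
>>>>     tmp = REG_SET_FIELD(tmp, CP_HQD_QUANTUM, QUANTUM_SCALE, 1);
>>>>     tmp = REG_SET_FIELD(tmp, CP_HQD_QUANTUM, QUANTUM_DURATION, 10);
>>>>     mqd->cp_hqd_quantum = tmp;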
>>>
>>> Yeah, but when the first high priority job completes it will reset
>>> mmSPI_WCL_PIPE_PERCENT_GFX back to the default value.
>>>
>>> Have you considered that?
>>
>>
>> Yes, I need a bit of clarity here. Doesn't one frame (... pm4(wave_limit),
>> pm4(IBs), pm4(restore_wave_limit), ...) execute together as one unit? If
>> that is the case, then the next high prio compute job will set the wave
>> limit again and it will be applied for its dispatch call.
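>>
>> For context, here is a rough sketch of what such an emit_wave_limit()
>> callback could look like on gfx9 (the throttle and restore values are
>> placeholders, not what the final patch will necessarily use):
>>
>>     /* Throttle gfx waves while a high priority compute job is queued,
>>      * and restore the default once the job has been committed. */
>>     static void gfx_v9_0_emit_wave_limit(struct amdgpu_ring *ring, bool enable)
>>     {
>>             uint32_t val = enable ? 0x1f : 0x7ffff; /* placeholder values */
>>
>>             amdgpu_ring_emit_wreg(ring,
>>                                   SOC15_REG_OFFSET(GC, 0,
>>                                                    mmSPI_WCL_PIPE_PERCENT_GFX),
>>                                   val);
>>     }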
>
> Yeah, that is correct. But the problem is somewhere else.
>
>>
>>
>> I guess that is not the case because you asked this question. Do you
>> think we should have only one high priority queue then?
>
> Yes exactly that. IIRC we currently have 4 low priority and 4 high
> priority queues.
>
> The problem is those 4 high priority queues. If we only use 1 then we
> won't run into this as far as I can see.
>
I see. I will add another patch to limit high prio queues to one.
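Roughly what I have in mind, as a sketch (the helper name and the queue
picking policy here are illustrative, not the final patch):

    /* Expose only a single high priority compute queue so that at most
     * one job at a time can toggle mmSPI_WCL_PIPE_PERCENT_GFX. */
    static bool amdgpu_gfx_is_high_prio_compute_queue(struct amdgpu_device *adev,
                                                      int pipe, int queue)
    {
            /* Reserve just the last queue of the first pipe. */
            return pipe == 0 &&
                   queue == adev->gfx.mec.num_queue_per_pipe - 1;
    }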
Regards,
Nirmoy
> Regards,
> Christian.
>
>>
>>
>> I tried to test it by running two instances of the same Vulkan test
>> application. I can't trace two applications together using RGP. From the
>> trace of one application (with the other running alongside), I didn't
>> see any throttling down of the high priority compute job (yellow bars).
>>
>>
>> Let me know what you think. I will work with Alan to change the test
>> application so that we can verify this using multiple high priority
>> contexts.
>>
>>
>> Regards,
>>
>> Nirmoy
>>
>>>
>>> Thanks,
>>> Christian.
>>>
>>>>
>>>>
>>>> Regards,
>>>>
>>>> Nirmoy
>>>>
>>>>
>>>>>
>>>>> Christian
>>>>>
>>>>>>
>>>>>> Signed-off-by: Nirmoy Das <nirmoy.das at amd.com>
>>>>>> ---
>>>>>> drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 9 +++++++++
>>>>>> 1 file changed, 9 insertions(+)
>>>>>>
>>>>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
>>>>>> index 024d0a563a65..ee48989dfb4c 100644
>>>>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
>>>>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
>>>>>> @@ -195,6 +195,10 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
>>>>>>  	if ((ib->flags & AMDGPU_IB_FLAG_EMIT_MEM_SYNC) && ring->funcs->emit_mem_sync)
>>>>>>  		ring->funcs->emit_mem_sync(ring);
>>>>>>
>>>>>> +	if (ring->funcs->emit_wave_limit && job &&
>>>>>> +	    job->base.s_priority >= DRM_SCHED_PRIORITY_HIGH)
>>>>>> +		ring->funcs->emit_wave_limit(ring, true);
>>>>>> +
>>>>>>  	if (ring->funcs->insert_start)
>>>>>>  		ring->funcs->insert_start(ring);
>>>>>> @@ -295,6 +299,11 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
>>>>>>  	ring->current_ctx = fence_ctx;
>>>>>>  	if (vm && ring->funcs->emit_switch_buffer)
>>>>>>  		amdgpu_ring_emit_switch_buffer(ring);
>>>>>> +
>>>>>> +	if (ring->funcs->emit_wave_limit && job &&
>>>>>> +	    job->base.s_priority >= DRM_SCHED_PRIORITY_HIGH)
>>>>>> +		ring->funcs->emit_wave_limit(ring, false);
>>>>>> +
>>>>>>  	amdgpu_ring_commit(ring);
>>>>>>  	return 0;
>>>>>>  }
>>>>>
>>>
>