[RFC PATCH 2/3] drm/amdgpu: change hw sched list on ctx priority override

Nirmoy nirmodas at amd.com
Thu Feb 27 21:17:08 UTC 2020


On 2/27/20 10:02 PM, Alex Deucher wrote:
> On Thu, Feb 27, 2020 at 3:28 PM Nirmoy <nirmodas at amd.com> wrote:
>>
>> On 2/27/20 3:35 PM, Alex Deucher wrote:
>>> We shouldn't be changing this at runtime.  We need to set up the queue
>>> priority at init time and then schedule to the appropriate queue at
>>> runtime.  We set the pipe/queue priority in the mqd (memory queue
>>> descriptor).  When we init the rings we configure the mqds in memory
>>> and then tell the CP to configure the rings.  The CP then fetches the
>>> config from memory (the mqd) and pushes the configuration to the hqd
>>> (hardware queue descriptor).  Currently we just statically set up the
>>> queues at driver init time, but the hw has the capability to schedule
>>> queues dynamically at runtime.  E.g., we could have a per process mqd
>>> for each queue and then tell the CP to schedule the mqd on the
>>> hardware at runtime.  For now, I think we should just set up some
>>> static pools of rings (e.g., normal and high priority or low, normal,
>>> and high priorities).  Note that you probably want to keep the high
>>> priority queues on a different pipe from the low/normal priority
>>> queues.  Depending on the asic there are 1 or 2 MECs (compute micro
>>> engines) and each MEC supports 4 pipes.  Each pipe can handle up to 8
>>> queues.
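
A rough sketch of such static pools, set up once at init, could look like
the following. The compute_prio_pool/num_prio_rings names and the
pipe-based split are made up for illustration; only num_compute_rings,
compute_ring and ring->pipe are existing amdgpu names:

enum amdgpu_ring_prio {
	AMDGPU_RING_PRIO_NORMAL = 0,
	AMDGPU_RING_PRIO_HIGH,
	AMDGPU_RING_PRIO_MAX
};

static struct amdgpu_ring *compute_prio_pool[AMDGPU_RING_PRIO_MAX][8];
static unsigned int num_prio_rings[AMDGPU_RING_PRIO_MAX];

static void amdgpu_gfx_init_prio_pools(struct amdgpu_device *adev)
{
	unsigned int i;

	for (i = 0; i < adev->gfx.num_compute_rings; i++) {
		struct amdgpu_ring *ring = &adev->gfx.compute_ring[i];
		/* keep high priority queues on their own pipe, e.g. pipe 1 */
		enum amdgpu_ring_prio prio = (ring->pipe == 1) ?
			AMDGPU_RING_PRIO_HIGH : AMDGPU_RING_PRIO_NORMAL;

		compute_prio_pool[prio][num_prio_rings[prio]++] = ring;
	}
}
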
>> After some debugging I realized we have amdgpu_gfx_compute_queue_acquire(),
>> which forces amdgpu to only use queues 0,1 of every pipe from MEC 0, even
>> if we have more than 1 MEC.
>>
> IIRC, that is to spread the queues across as many pipes as possible.
okay
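
For reference, the policy in amdgpu_gfx_compute_queue_acquire() as I read
it is roughly the following (paraphrased from memory, not verbatim kernel
code):

static void compute_queue_acquire_sketch(struct amdgpu_device *adev)
{
	int i, queue, pipe, mec;

	for (i = 0; i < AMDGPU_MAX_COMPUTE_QUEUES; ++i) {
		queue = i % adev->gfx.mec.num_queue_per_pipe;
		pipe = (i / adev->gfx.mec.num_queue_per_pipe)
			% adev->gfx.mec.num_pipe_per_mec;
		mec = (i / adev->gfx.mec.num_queue_per_pipe)
			/ adev->gfx.mec.num_pipe_per_mec;

		/* only queues 0 and 1 of each pipe on the first MEC */
		if (mec == 0 && queue < 2)
			set_bit(i, adev->gfx.mec.queue_bitmap);
	}
}

which indeed spreads amdgpu's queues across all pipes of MEC 0.
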
>
>> Does it make sense to have two high priority queues on the same pipe?
> Good question.  Not sure what the best option is for splitting up the
> queues.  Maybe one set of queues (low and high) per pipe?

I think one low and one high priority queue per pipe should work well, AFAIU.
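
On priority override, the ctx could then just swap which scheduler list
its entities use. A minimal sketch, assuming the current (pre-5.9)
drm_sched_priority names, with the gfx.*_scheds fields as hypothetical
placeholders for whatever the pools end up being called:

static struct drm_gpu_scheduler **
amdgpu_ctx_compute_scheds(struct amdgpu_device *adev,
			  enum drm_sched_priority prio,
			  unsigned int *num_scheds)
{
	if (prio >= DRM_SCHED_PRIORITY_HIGH_SW) {
		/* hypothetical high priority pool */
		*num_scheds = adev->gfx.num_compute_high_scheds;
		return adev->gfx.compute_high_scheds;
	}

	/* hypothetical normal/low priority pool */
	*num_scheds = adev->gfx.num_compute_normal_scheds;
	return adev->gfx.compute_normal_scheds;
}
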


Nirmoy

>
> Alex
>
>> Regards,
>>
>> Nirmoy
>>
>>
>>> Alex
>>>

