[RFC PATCH 2/3] drm/amdgpu: change hw sched list on ctx priority override
Nirmoy
nirmodas at amd.com
Thu Feb 27 20:31:57 UTC 2020
On 2/27/20 3:35 PM, Alex Deucher wrote:
> We shouldn't be changing this at runtime. We need to set up the queue
> priority at init time and then schedule to the appropriate queue at
> runtime. We set the pipe/queue priority in the mqd (memory queue
> descriptor). When we init the rings we configure the mqds in memory
> and then tell the CP to configure the rings. The CP then fetches the
> config from memory (the mqd) and pushes the configuration to the hqd
> (hardware queue descriptor). Currently we just statically set up the
> queues at driver init time, but the hw has the capability to schedule
> queues dynamically at runtime. E.g., we could have a per process mqd
> for each queue and then tell the CP to schedule the mqd on the
> hardware at runtime. For now, I think we should just set up some
> static pools of rings (e.g., normal and high priority or low, normal,
> and high priorities). Note that you probably want to keep the high
> priority queues on a different pipe from the low/normal priority
> queues. Depending on the asic there are 1 or 2 MECs (compute micro
> engines) and each MEC supports 4 pipes. Each pipe can handle up to 8
> queues.
After some debugging I realized we have amdgpu_gfx_compute_queue_acquire(),
which forces amdgpu to only use queues 0 and 1 of every pipe from MEC 0,
even if we have more than one MEC.

Does it make sense to have two high-priority queues on the same pipe?
Regards,
Nirmoy
> Alex