[PATCH v6 1/1] drm/amdgpu: set compute queue priority at mqd_init
Nirmoy
nirmodas at amd.com
Tue Mar 3 14:29:36 UTC 2020
On 3/3/20 3:03 PM, Christian König wrote:
> Am 03.03.20 um 13:50 schrieb Nirmoy Das:
>> [SNIP]
>> struct amdgpu_mec {
>> struct amdgpu_bo *hpd_eop_obj;
>> u64 hpd_eop_gpu_addr;
>> @@ -280,8 +290,9 @@ struct amdgpu_gfx {
>> uint32_t num_gfx_sched;
>> unsigned num_gfx_rings;
>> struct amdgpu_ring compute_ring[AMDGPU_MAX_COMPUTE_RINGS];
>> +	struct drm_gpu_scheduler **compute_prio_sched[AMDGPU_GFX_PIPE_PRIO_MAX];
>> struct drm_gpu_scheduler *compute_sched[AMDGPU_MAX_COMPUTE_RINGS];
>> - uint32_t num_compute_sched;
>> + uint32_t num_compute_sched[AMDGPU_GFX_PIPE_PRIO_MAX];
>> unsigned num_compute_rings;
>> struct amdgpu_irq_src eop_irq;
>> struct amdgpu_irq_src priv_reg_irq;
>
> Well my question is why we need compute_prio_sched here?
This one is so that I can leverage a single sched array,
compute_sched[AMDGPU_MAX_COMPUTE_RINGS],
to store both priority sched lists instead of
struct drm_gpu_scheduler
*compute_sched[AMDGPU_GFX_PIPE_PRIO_MAX][AMDGPU_MAX_COMPUTE_RINGS];
I guess that would make the ctx code unnecessarily complex.
>
> Can't we just design that as:
> struct drm_gpu_scheduler *compute_sched[AMDGPU_GFX_PIPE_PRIO_MAX][AMDGPU_MAX_HI_COMPUTE_RINGS];
> uint32_t num_compute_sched[AMDGPU_GFX_PIPE_PRIO_MAX];
What should be the value of AMDGPU_MAX_HI_COMPUTE_RINGS?
>
> I mean the drm_gpu_scheduler * array doesn't need to be constructed
> by the context code in the first place.
Do you mean amdgpu_ctx_init_sched() should belong somewhere else, maybe
in amdgpu_ring.c?
>
> Or am I missing something?
>
> Regards,
> Christian.