[PATCH 4/4] drm/scheduler: do not keep a copy of sched list

Christian König christian.koenig at amd.com
Mon Dec 9 12:20:10 UTC 2019


Yes, you need to do this for the SDMA as well, but in general that looks 
like the idea I had in mind too.

I would do it like this:

1. Change the special case where an entity gets only a single scheduler so 
that we drop the pointer to the scheduler list. This way we always use the 
same scheduler for the entity and callers can pass in the array on the stack.

2. Change all callers which use more than one scheduler in the list to 
pass in pointers which are not allocated on the stack. This obviously also 
means that we build the list of schedulers for each type only once during 
device init and not for each context init.

3. Make the scheduler list const and drop the kcalloc()/kfree() from the 
entity code. A rough sketch of the entity side follows below.
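
For the entity side I would expect something like this rough sketch (the 
parameter and field names are just assumptions, not a final API, and the 
unrelated entity setup is left out):

/*
 * Sketch only: a single-scheduler entity remembers just that scheduler, so
 * callers may pass a one-element array on the stack; with more than one
 * scheduler the caller keeps the array alive and no copy is made.
 */
int drm_sched_entity_init(struct drm_sched_entity *entity,
                          enum drm_sched_priority priority,
                          struct drm_gpu_scheduler **sched_list,
                          unsigned int num_sched_list,
                          atomic_t *guilty)
{
        if (!entity || !sched_list || !num_sched_list)
                return -EINVAL;

        entity->priority = priority;
        entity->guilty = guilty;
        entity->num_sched_list = num_sched_list;

        /* With a single scheduler there is nothing to load balance. */
        entity->sched_list = num_sched_list > 1 ? sched_list : NULL;
        entity->rq = &sched_list[0]->sched_rq[priority];

        /* No kcalloc()/kfree() of the list anymore. */
        return 0;
}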

Regards,
Christian.

Am 08.12.19 um 20:57 schrieb Nirmoy:
>
> On 12/6/19 8:41 PM, Christian König wrote:
>> Am 06.12.19 um 18:33 schrieb Nirmoy Das:
>>> The entity should not keep and maintain its own copy of the
>>> sched list.
>>
>> That is a good step, but we need to take this further.
>
> How about something like this?
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
> index 0ae0a2715b0d..a71ee084b47a 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.h
> @@ -269,8 +269,10 @@ struct amdgpu_gfx {
>         bool                            me_fw_write_wait;
>         bool                            cp_fw_write_wait;
>         struct amdgpu_ring gfx_ring[AMDGPU_MAX_GFX_RINGS];
> +       struct drm_gpu_scheduler *gfx_sched_list[AMDGPU_MAX_GFX_RINGS];
>         unsigned                        num_gfx_rings;
>         struct amdgpu_ring compute_ring[AMDGPU_MAX_COMPUTE_RINGS];
> +       struct drm_gpu_scheduler *compute_sched_list[AMDGPU_MAX_COMPUTE_RINGS];
>         unsigned                        num_compute_rings;
>         struct amdgpu_irq_src           eop_irq;
>         struct amdgpu_irq_src           priv_reg_irq;
>
>
> Regards,
>
> Nirmoy
>
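
With those arrays in amdgpu_gfx, point 2 above would then boil down to 
filling them once during device init, e.g. with a helper along these lines 
(the helper name is made up and it assumes the scheduler is embedded in 
each ring as ring->sched):

/*
 * Hypothetical helper, called once from device init instead of per
 * context init.
 */
static void amdgpu_gfx_init_sched_lists(struct amdgpu_device *adev)
{
        unsigned int i;

        for (i = 0; i < adev->gfx.num_gfx_rings; i++)
                adev->gfx.gfx_sched_list[i] = &adev->gfx.gfx_ring[i].sched;

        for (i = 0; i < adev->gfx.num_compute_rings; i++)
                adev->gfx.compute_sched_list[i] =
                        &adev->gfx.compute_ring[i].sched;
}

Context init then only hands adev->gfx.gfx_sched_list (and the SDMA 
equivalent) to the entity init, without allocating anything.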


