[RFC PATCH 2/3] drm/amdgpu: change hw sched list on ctx priority override

Christian König christian.koenig at amd.com
Thu Feb 27 11:35:37 UTC 2020


On 2/27/20 11:26 AM, Nirmoy wrote:
>
> On 2/27/20 11:08 AM, Christian König wrote:
>>
>>>               scheds = adev->sdma.sdma_sched;
>>> @@ -502,6 +507,24 @@ struct dma_fence *amdgpu_ctx_get_fence(struct amdgpu_ctx *ctx,
>>>       return fence;
>>>   }
>>>
>>> +static void amdgpu_ctx_hw_priority_override(struct amdgpu_ctx *ctx,
>>> +                        const u32 hw_ip,
>>> +                        enum drm_sched_priority priority)
>>> +{
>>> +    int i;
>>> +
>>> +    for (i = 0; i < amdgpu_ctx_num_entities[hw_ip]; ++i) {
>>> +        if (!ctx->entities[hw_ip][i])
>>> +            continue;
>>> +
>>> +        /* TODO what happens with prev scheduled jobs */
>>
>> If we do it right, that should be unproblematic.
>>
>> The entity changes the rq/scheduler it submits stuff to only when it 
>> is idle, i.e. no jobs on the hardware nor in the software queue.
>>
>> So changing the priority when there is still work should be ok 
>> because it won't take effect until the entity is idle.
> Thanks for clarifying that.
>>
>> It can of course be that userspace then wonders why the new priority 
>> doesn't take effect. But when you shoot yourself in the foot it is 
>> supposed to hurt, doesn't it?
>  :D
>>
>>> +        drm_sched_entity_destroy(&ctx->entities[hw_ip][i]->entity);
>>> +        amdgpu_ctx_fini_entity(ctx->entities[hw_ip][i]);
>>> +
>>> +        amdgpu_ctx_init_entity(ctx, AMDGPU_HW_IP_COMPUTE, i);
>>
>> Well, that is most likely NOT the right way of doing it :) Destroying 
>> the entity with fini and reinit might cause quite a bunch of problems.
>>
>> It could be that this works as well, but I would rather just assign 
>> sched_list and num_sched_list.
>
> How about doing that with a new function like 
> drm_sched_entity_modify_sched()?
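
To expand on the question above about previously scheduled jobs: the
entity only re-selects its run queue/scheduler once it is completely
idle, so an override can never touch jobs that are already queued or on
the hardware. Roughly like this (a simplified sketch, not the exact
gpu_scheduler code; drm_sched_pick_best and the exact locking are
approximations):

#include <drm/gpu_scheduler.h>

/* Sketch of the rq selection an idle entity does before taking new jobs. */
static void sched_entity_select_rq_sketch(struct drm_sched_entity *entity)
{
	struct dma_fence *fence;
	struct drm_gpu_scheduler *sched;

	/* Only one scheduler to choose from, nothing to do. */
	if (entity->num_sched_list <= 1)
		return;

	/* Jobs still waiting in the software queue: keep the current rq. */
	if (spsc_queue_count(&entity->job_queue))
		return;

	/* The last pushed job is still running on the hardware: keep it too. */
	fence = READ_ONCE(entity->last_scheduled);
	if (fence && !dma_fence_is_signaled(fence))
		return;

	/* The entity is idle, so it is safe to move it to the best scheduler
	 * from the (possibly just updated) sched_list.
	 */
	spin_lock(&entity->rq_lock);
	sched = drm_sched_pick_best(entity->sched_list,
				    entity->num_sched_list);
	entity->rq = &sched->sched_rq[entity->priority];
	spin_unlock(&entity->rq_lock);
}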

Yes, a drm_sched_entity_modify_sched() helper sounds like the sanest thing to do as well.
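
For the override path that could then look roughly like this (sketch
only; the exact signature, locking and where scheds/num_scheds come
from are up to the patch):

void drm_sched_entity_modify_sched(struct drm_sched_entity *entity,
				   struct drm_gpu_scheduler **sched_list,
				   unsigned int num_sched_list)
{
	WARN_ON(!sched_list || !num_sched_list);

	/* Only the list is swapped here; the entity picks a new rq from it
	 * the next time it is idle and selects a run queue.
	 */
	entity->sched_list = sched_list;
	entity->num_sched_list = num_sched_list;
}

and in amdgpu_ctx_hw_priority_override(), instead of destroy/fini/init:

	/* scheds/num_scheds are placeholders for whatever amdgpu selects
	 * for hw_ip at the overridden hardware priority.
	 */
	drm_sched_entity_modify_sched(&ctx->entities[hw_ip][i]->entity,
				      scheds, num_scheds);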

Christian.

