[PATCH 1/5] drm/sched: add new priority level

Christian König christian.koenig at amd.com
Tue Aug 24 11:56:16 UTC 2021


Am 24.08.21 um 11:45 schrieb Sharma, Shashank:
> On 8/24/2021 2:25 PM, Christian König wrote:
>> Nope, those are two completely different things.
>>
>> The DRM_SCHED_PRIORITY_* levels expose functionality of the software 
>> scheduler. E.g. we try to serve kernel queues first, and if those are 
>> empty we use the high priority queue, etc....
>>
>> But that functionality is completely independent from the hardware 
>> priority handling. In other words, you can have different hardware 
>> queues with priorities as well, and each of them is served by its own 
>> software scheduler.
>>
>> In other words, imagine the following setup: two hardware queues, one 
>> normal and one low latency. Each hardware queue is then fed by a 
>> software scheduler with the priorities low, normal, high and kernel.
>>
>> This configuration then gives you 8 different priorities to use.
>>
>
> Thanks for the details. I was under quite a different impression; this 
> explanation helps.

The problem is that we used the SW scheduler enum for init_priority 
and override_priority, which is most likely a bad idea.

> I guess this also means that the HW queues are, as of now, completely 
> left to be managed by the core driver (like AMDGPU or i915 etc.), and 
> the DRM framework only provides SW schedulers?

Yes, exactly.

> Does this suggest scope for a common framework or abstraction layer 
> for HW queues in DRM? Most architectures/HW will at least have 
> a NORMAL and a higher priority work queue, and their drivers might be 
> handling them in very similar ways.

I don't think so. IIRC we even have generalized ring buffer functions in 
the Linux kernel which barely anybody uses, because nearly every HW ring 
buffer is different in one way or another.

Christian.

>
> - Shashank
>
>> Regards,
>> Christian.
>>
>> Am 24.08.21 um 10:32 schrieb Sharma, Shashank:
>>> Hi Christian,
>>> I am a bit curious here.
>>>
>>> I thought it would be a good idea to add a new SW priority level, so 
>>> that any other driver can also utilize this SW infrastructure.
>>>
>>> So it could be like: if you have HW which matches the SW priority 
>>> levels, directly map your HW queues to the SW priority levels, like:
>>>
>>> DRM_SCHED_PRIORITY_VERY_HIGH: mapped to a queue in HW reserved for 
>>> real time or very high priority tasks, which can't be missed
>>>
>>> DRM_SCHED_PRIORITY_HIGH: mapped to a queue of high priority tasks, 
>>> for a better experience, like encode/decode operations.
>>>
>>> DRM_SCHED_PRIORITY_NORMAL: default, mapped to a queue of tasks 
>>> without a priority context specified
>>>
>>> DRM_SCHED_PRIORITY_MIN: a queue for specifically mentioned low 
>>> priority tasks
>>>
>>> Depending on the HW we are running on, we can map these SW queues to 
>>> the corresponding HW queues, can't we?
>>>
>>> Regards
>>> Shashank
>>>
>>> On 8/24/2021 11:40 AM, Christian König wrote:
>>>> I haven't followed the previous discussion, but it looks like 
>>>> this change is based on a misunderstanding.
>>>>
>>>> These are the software priorities used in the scheduler, but 
>>>> what you are working on are the hardware priorities.
>>>>
>>>> Those are two completely different things, which we shouldn't mix up.
>>>>
>>>> Regards,
>>>> Christian.
>>>>
>>>> Am 24.08.21 um 07:55 schrieb Satyajit Sahu:
>>>>> Adding a new priority level DRM_SCHED_PRIORITY_VERY_HIGH
>>>>>
>>>>> Signed-off-by: Satyajit Sahu <satyajit.sahu at amd.com>
>>>>> ---
>>>>>   include/drm/gpu_scheduler.h | 1 +
>>>>>   1 file changed, 1 insertion(+)
>>>>>
>>>>> diff --git a/include/drm/gpu_scheduler.h 
>>>>> b/include/drm/gpu_scheduler.h
>>>>> index d18af49fd009..d0e5e234da5f 100644
>>>>> --- a/include/drm/gpu_scheduler.h
>>>>> +++ b/include/drm/gpu_scheduler.h
>>>>> @@ -40,6 +40,7 @@ enum drm_sched_priority {
>>>>>       DRM_SCHED_PRIORITY_MIN,
>>>>>       DRM_SCHED_PRIORITY_NORMAL,
>>>>>       DRM_SCHED_PRIORITY_HIGH,
>>>>> +    DRM_SCHED_PRIORITY_VERY_HIGH,
>>>>>       DRM_SCHED_PRIORITY_KERNEL,
>>>>>       DRM_SCHED_PRIORITY_COUNT,
>>>>
>>


