[Freedreno] [PATCH 2/2] drm/sched: Reverse run-queue priority enumeration
Christian König
christian.koenig at amd.com
Fri Nov 24 09:38:09 UTC 2023
Am 24.11.23 um 09:22 schrieb Luben Tuikov:
> On 2023-11-24 03:04, Christian König wrote:
>> Am 24.11.23 um 06:27 schrieb Luben Tuikov:
>>> Reverse run-queue priority enumeration such that the highest priority is now 0,
>>> and for each consecutive integer the priority diminishes.
>>>
>>> Run-queues correspond to priorities. To an external observer, a scheduler
>>> created with a single run-queue and another created with
>>> DRM_SCHED_PRIORITY_COUNT run-queues should always schedule
>>> sched->sched_rq[0] with the same "priority", since that run-queue index
>>> exists in both schedulers, whether a scheduler has one run-queue or many.
>>> This patch makes it so.
>>>
>>> In other words, the "priority" of sched->sched_rq[n], n >= 0, is the same for
>>> any scheduler created with any allowable number of run-queues (priorities), 0
>>> to DRM_SCHED_PRIORITY_COUNT.
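For readers skimming the thread, a minimal sketch of the reversed ordering described above. Only KERNEL, HIGH and LOW appear in the hunks below, so the full enumerator list here is an assumption, not a quote from include/drm/gpu_scheduler.h:

/* After the reversal, index 0 is the highest priority and each
 * consecutive value is a lower one, so sched->sched_rq[0] is the
 * highest-priority run-queue no matter how many run-queues a
 * scheduler was created with.
 */
enum drm_sched_priority {
	DRM_SCHED_PRIORITY_KERNEL,	/* 0: highest priority */
	DRM_SCHED_PRIORITY_HIGH,
	DRM_SCHED_PRIORITY_NORMAL,
	DRM_SCHED_PRIORITY_LOW,		/* lowest priority */

	DRM_SCHED_PRIORITY_COUNT	/* number of priority levels */
};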
>>>
>>> Cc: Rob Clark <robdclark at gmail.com>
>>> Cc: Abhinav Kumar <quic_abhinavk at quicinc.com>
>>> Cc: Dmitry Baryshkov <dmitry.baryshkov at linaro.org>
>>> Cc: Danilo Krummrich <dakr at redhat.com>
>>> Cc: Alex Deucher <alexander.deucher at amd.com>
>>> Cc: Christian König <christian.koenig at amd.com>
>>> Cc: linux-arm-msm at vger.kernel.org
>>> Cc: freedreno at lists.freedesktop.org
>>> Cc: dri-devel at lists.freedesktop.org
>>> Signed-off-by: Luben Tuikov <ltuikov89 at gmail.com>
>>> ---
>>> drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 2 +-
>>> drivers/gpu/drm/msm/msm_gpu.h | 2 +-
>>> drivers/gpu/drm/scheduler/sched_entity.c | 7 ++++---
>>> drivers/gpu/drm/scheduler/sched_main.c | 15 +++++++--------
>>> include/drm/gpu_scheduler.h | 6 +++---
>>> 5 files changed, 16 insertions(+), 16 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>>> index 1a25931607c514..71a5cf37b472d4 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
>>> @@ -325,7 +325,7 @@ void amdgpu_job_stop_all_jobs_on_sched(struct drm_gpu_scheduler *sched)
>>> int i;
>>>
>>> /* Signal all jobs not yet scheduled */
>>> - for (i = sched->num_rqs - 1; i >= DRM_SCHED_PRIORITY_LOW; i--) {
>>> + for (i = DRM_SCHED_PRIORITY_KERNEL; i < sched->num_rqs; i++) {
>>> struct drm_sched_rq *rq = sched->sched_rq[i];
>>> spin_lock(&rq->lock);
>>> list_for_each_entry(s_entity, &rq->entities, list) {
>>> diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
>>> index eb0c97433e5f8a..2bfcb222e35338 100644
>>> --- a/drivers/gpu/drm/msm/msm_gpu.h
>>> +++ b/drivers/gpu/drm/msm/msm_gpu.h
>>> @@ -347,7 +347,7 @@ struct msm_gpu_perfcntr {
>>> * DRM_SCHED_PRIORITY_KERNEL priority level is treated specially in some
>>> * cases, so we don't use it (no need for kernel generated jobs).
>>> */
>>> -#define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_HIGH - DRM_SCHED_PRIORITY_LOW)
>>> +#define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_LOW - DRM_SCHED_PRIORITY_HIGH)
>>>
>>> /**
>>> * struct msm_file_private - per-drm_file context
>>> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
>>> index cb7445be3cbb4e..6e2b02e45e3a32 100644
>>> --- a/drivers/gpu/drm/scheduler/sched_entity.c
>>> +++ b/drivers/gpu/drm/scheduler/sched_entity.c
>>> @@ -81,14 +81,15 @@ int drm_sched_entity_init(struct drm_sched_entity *entity,
>>> */
>>> pr_warn("%s: called with uninitialized scheduler\n", __func__);
>>> } else if (num_sched_list) {
>>> - /* The "priority" of an entity cannot exceed the number
>>> - * of run-queues of a scheduler.
>>> + /* The "priority" of an entity cannot exceed the number of
>>> + * run-queues of a scheduler. Choose the lowest priority
>>> + * available.
>>> */
>>> if (entity->priority >= sched_list[0]->num_rqs) {
>>> drm_err(sched_list[0], "entity with out-of-bounds priority:%u num_rqs:%u\n",
>>> entity->priority, sched_list[0]->num_rqs);
>>> entity->priority = max_t(s32, (s32) sched_list[0]->num_rqs - 1,
>>> - (s32) DRM_SCHED_PRIORITY_LOW);
>>> + (s32) DRM_SCHED_PRIORITY_KERNEL);
>> That seems to be a no-op. You basically say max_t(..., num_rqs - 1, 0),
>> which will always be num_rqs - 1.
> This protects against num_rqs being equal to 0, in which case we select KERNEL (0).
Ah! That's also why you convert it to signed! I was already wondering why
you do this.
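To spell out the arithmetic being discussed here, a small stand-alone sketch in plain userspace C (not the kernel code; the value 0 for DRM_SCHED_PRIORITY_KERNEL is taken from the series above):

#include <stdint.h>
#include <stdio.h>

#define DRM_SCHED_PRIORITY_KERNEL 0	/* highest priority, index 0 */

/* Mimics max_t(s32, (s32)num_rqs - 1, (s32)DRM_SCHED_PRIORITY_KERNEL).
 * num_rqs is unsigned in the scheduler, so without the cast to a
 * signed type "num_rqs - 1" would wrap to a huge value when
 * num_rqs == 0; cast first, it becomes -1 and the max() clamps the
 * result to DRM_SCHED_PRIORITY_KERNEL.
 */
static int32_t clamp_priority(uint32_t num_rqs)
{
	int32_t last_rq = (int32_t)num_rqs - 1;

	return last_rq > DRM_SCHED_PRIORITY_KERNEL ?
			last_rq : DRM_SCHED_PRIORITY_KERNEL;
}

int main(void)
{
	printf("%d\n", clamp_priority(0));	/* 0: malformed case, clamped to KERNEL */
	printf("%d\n", clamp_priority(4));	/* 3: last (lowest-priority) run-queue  */
	return 0;
}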
>
> This comes from "[PATCH] drm/sched: Fix bounds limiting when given a malformed entity"
> which I sent yesterday (Message-ID: <20231123122422.167832-2-ltuikov89 at gmail.com>).
I can't find that one in my inbox anywhere, but was able to find it in
patchwork.
> Could you R-B that patch too?
I would add a comment because the intention of max_t(s32, ...) is really not
obvious here.
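A hypothetical shape of such a comment, sketched against the hunk above (the wording is an assumption, not the actual follow-up change):

	/* Out-of-bounds priority: fall back to the lowest priority that
	 * actually exists, i.e. the last run-queue, num_rqs - 1.  The
	 * subtraction is done in s32 so that a malformed num_rqs of 0
	 * yields -1 rather than wrapping, and max_t() then clamps the
	 * result to DRM_SCHED_PRIORITY_KERNEL (0).
	 */
	entity->priority = max_t(s32, (s32) sched_list[0]->num_rqs - 1,
				 (s32) DRM_SCHED_PRIORITY_KERNEL);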
With that done, feel free to add my rb to both patches.
Regards,
Christian.
>
>> Apart from that looks good to me.
> Okay, could you R-B this patch then?