[PATCH v5 08/16] drm/sched: Consolidate entity run queue management
Tvrtko Ursulin
tvrtko.ursulin at igalia.com
Fri Jul 4 15:32:47 UTC 2025
On 04/07/2025 14:51, Maíra Canal wrote:
> Hi Tvrtko,
>
> On 23/06/25 09:27, Tvrtko Ursulin wrote:
>> Move the code dealing with entities entering and exiting run queues to
>> helpers to logically separate it from jobs entering and exiting entities.
>>
>> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin at igalia.com>
>> Cc: Christian König <christian.koenig at amd.com>
>> Cc: Danilo Krummrich <dakr at kernel.org>
>> Cc: Matthew Brost <matthew.brost at intel.com>
>> Cc: Philipp Stanner <phasta at kernel.org>
>> ---
>> drivers/gpu/drm/scheduler/sched_entity.c   | 60 ++-------------
>> drivers/gpu/drm/scheduler/sched_internal.h |  8 +-
>> drivers/gpu/drm/scheduler/sched_main.c     | 87 +++++++++++++++++++---
>> 3 files changed, 83 insertions(+), 72 deletions(-)
>>
>
> [...]
>
>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
>> index 2c85be201605..bfec2e35a2f6 100644
>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>> @@ -145,15 +145,18 @@ static __always_inline bool drm_sched_entity_compare_before(struct rb_node *a,
>> static void drm_sched_rq_remove_fifo_locked(struct drm_sched_entity *entity,
>>                                             struct drm_sched_rq *rq)
>> {
>> +        lockdep_assert_held(&entity->lock);
>> +        lockdep_assert_held(&rq->lock);
>> +
>
> Nit: I'm not sure this change belongs in this patch. It felt a bit out
> of context to me. Also, the same assert exists in
> `drm_sched_rq_update_fifo_locked()` (maybe remove it there?).
You are right - another bad rebase after moving patches around in the
series. Fixed locally.
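For reference, roughly how this ends up looking once the asserts land in
the right patch, assuming the duplicate in
drm_sched_rq_update_fifo_locked() indeed goes away and the asserts stay
only in the common helper (a sketch, not the final diff):

static void drm_sched_rq_remove_fifo_locked(struct drm_sched_entity *entity,
                                            struct drm_sched_rq *rq)
{
        /* Callers must hold both the entity and the run queue lock. */
        lockdep_assert_held(&entity->lock);
        lockdep_assert_held(&rq->lock);

        if (!RB_EMPTY_NODE(&entity->rb_tree_node)) {
                rb_erase_cached(&entity->rb_tree_node, &rq->rb_tree_root);
                RB_CLEAR_NODE(&entity->rb_tree_node);
        }
}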
>
>>         if (!RB_EMPTY_NODE(&entity->rb_tree_node)) {
>>                 rb_erase_cached(&entity->rb_tree_node, &rq->rb_tree_root);
>>                 RB_CLEAR_NODE(&entity->rb_tree_node);
>>         }
>> }
>> -void drm_sched_rq_update_fifo_locked(struct drm_sched_entity *entity,
>> -                                     struct drm_sched_rq *rq,
>> -                                     ktime_t ts)
>> +static void drm_sched_rq_update_fifo_locked(struct drm_sched_entity *entity,
>> +                                            struct drm_sched_rq *rq,
>> +                                            ktime_t ts)
>> {
>>         /*
>>          * Both locks need to be grabbed, one to protect from entity->rq change
>> @@ -188,25 +191,58 @@ static void drm_sched_rq_init(struct drm_gpu_scheduler *sched,
>>         rq->sched = sched;
>> }
>
> [...]
>
>
> Missing kerneldoc.
You got me there for a while; I always look upwards from the review
comment. :) Fixed locally, thanks!
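Draft of the kerneldoc I have locally - the wording may still change (a
sketch, based on what the function visibly does):

/**
 * drm_sched_rq_pop_entity - pops an entity
 * @entity: scheduler entity
 *
 * To be called when a job is popped from the entity's queue. Updates
 * the entity's position in its run queue according to the timestamp
 * of the next queued job, if any.
 */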
Regards,
Tvrtko
>
>> +void drm_sched_rq_pop_entity(struct drm_sched_entity *entity)
>> +{
>> +        struct drm_sched_job *next_job;
>> +        struct drm_sched_rq *rq;
>> +        ktime_t ts;
>> +
>> +        /*
>> +         * Update the entity's location in the min heap according to
>> +         * the timestamp of the next job, if any.
>> +         */
>> +        next_job = drm_sched_entity_queue_peek(entity);
>> +        if (!next_job)
>> +                return;
>> +
>> +        if (drm_sched_policy == DRM_SCHED_POLICY_FIFO)
>> +                ts = next_job->submit_ts;
>> +        else
>> +                ts = drm_sched_rq_get_rr_deadline(rq);
>
> Same as previous patch :)
>
> Best Regards,
> - Maíra
>
>> +
>> +        spin_lock(&entity->lock);
>> +        rq = entity->rq;
>> +        spin_lock(&rq->lock);
>> +        drm_sched_rq_update_fifo_locked(entity, rq, ts);
>> +        spin_unlock(&rq->lock);
>> +        spin_unlock(&entity->lock);
>> +}
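Presumably the same resolution as on the previous patch applies here:
rq is used before it is assigned, so the timestamp selection has to
move under the locks, after rq is read. A sketch of the reordered tail
of the function, assuming that is the fix:

        spin_lock(&entity->lock);
        rq = entity->rq;        /* rq is only stable under entity->lock */
        spin_lock(&rq->lock);

        if (drm_sched_policy == DRM_SCHED_POLICY_FIFO)
                ts = next_job->submit_ts;
        else
                ts = drm_sched_rq_get_rr_deadline(rq);

        drm_sched_rq_update_fifo_locked(entity, rq, ts);
        spin_unlock(&rq->lock);
        spin_unlock(&entity->lock);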
>> +
>> /**
>> * drm_sched_rq_select_entity - Select an entity which provides a job to run
>> *
>