[PATCH] drm/scheduler: correct comments relate to scheduler
Philipp Stanner
pstanner at redhat.com
Mon Sep 16 08:31:15 UTC 2024
Hi,
I would call the commit "drm/scheduler: Improve documentation"
On Sun, 2024-09-15 at 15:52 +0000, Shuicheng Lin wrote:
> function drm_sched_entity_push_job doesn't have return value,
s/function/Function
It's also nice to always terminate a function's name with its
parentheses: drm_sched_entity_push_job()
> remove the return value description for it.
> Correct several other typo errors.
>
> Signed-off-by: Shuicheng Lin <shuicheng.lin at intel.com>
> ---
> drivers/gpu/drm/scheduler/sched_entity.c | 8 +++-----
> drivers/gpu/drm/scheduler/sched_main.c | 4 ++--
> include/drm/gpu_scheduler.h | 12 ++++++------
> include/linux/dma-resv.h | 4 ++--
> 4 files changed, 13 insertions(+), 15 deletions(-)
>
> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
> index 58c8161289fe..4d6a05fc35ca 100644
> --- a/drivers/gpu/drm/scheduler/sched_entity.c
> +++ b/drivers/gpu/drm/scheduler/sched_entity.c
> @@ -51,7 +51,7 @@
> * drm_sched_entity_set_priority(). For changing the set of schedulers
> * @sched_list at runtime see drm_sched_entity_modify_sched().
> *
> - * An entity is cleaned up by callind drm_sched_entity_fini(). See also
> + * An entity is cleaned up by calling drm_sched_entity_fini(). See also
> * drm_sched_entity_destroy().
> *
> * Returns 0 on success or a negative error code on failure.
> @@ -370,7 +370,7 @@ static void drm_sched_entity_clear_dep(struct dma_fence *f,
> }
>
> /*
> - * drm_sched_entity_clear_dep - callback to clear the entities dependency and
> + * drm_sched_entity_wakeup - callback to clear the entities dependency and
While you're at it:
s/entities dependency/entity's dependency
> * wake up scheduler
> */
> static void drm_sched_entity_wakeup(struct dma_fence *f,
> @@ -389,7 +389,7 @@ static void drm_sched_entity_wakeup(struct dma_fence *f,
> * @entity: scheduler entity
> * @priority: scheduler priority
> *
> - * Update the priority of runqueus used for the entity.
> + * Update the priority of runqueues used for the entity.
> */
> void drm_sched_entity_set_priority(struct drm_sched_entity *entity,
> enum drm_sched_priority priority)
> @@ -574,8 +574,6 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
> * fence sequence number this function should be called with drm_sched_job_arm()
> * under common lock for the struct drm_sched_entity that was set up for
> * @sched_job in drm_sched_job_init().
> - *
> - * Returns 0 for success, negative error code otherwise.
> */
> void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
> {
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
> index f093616fe53c..6e8c7651bd95 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -41,7 +41,7 @@
> * 4. Entities themselves maintain a queue of jobs that will be scheduled on
> * the hardware.
> *
> - * The jobs in a entity are always scheduled in the order that they were pushed.
> + * The jobs in an entity are always scheduled in the order that they were pushed.
"in the order in which they were ..."?
> *
> * Note that once a job was taken from the entities queue and pushed to the
> * hardware, i.e. the pending queue, the entity must not be referenced anymore
> @@ -1340,7 +1340,7 @@ void drm_sched_fini(struct drm_gpu_scheduler *sched)
> 		list_for_each_entry(s_entity, &rq->entities, list)
> 			/*
> 			 * Prevents reinsertion and marks job_queue as idle,
> -			 * it will removed from rq in drm_sched_entity_fini
> +			 * it will be removed from rq in drm_sched_entity_fini
"from the rq"?
s/drm_sched_entity_fini/drm_sched_entity_fini()
> * eventually
> */
> s_entity->stopped = true;
> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
> index a8d19b10f9b8..9e1b12ca84b9 100644
> --- a/include/drm/gpu_scheduler.h
> +++ b/include/drm/gpu_scheduler.h
> @@ -33,11 +33,11 @@
> #define MAX_WAIT_SCHED_ENTITY_Q_EMPTY msecs_to_jiffies(1000)
>
> /**
> - * DRM_SCHED_FENCE_DONT_PIPELINE - Prefent dependency pipelining
> + * DRM_SCHED_FENCE_DONT_PIPELINE - Prevent dependency pipelining
> *
> * Setting this flag on a scheduler fence prevents pipelining of jobs depending
> * on this fence. In other words we always insert a full CPU round trip before
> - * dependen jobs are pushed to the hw queue.
> + * dependent jobs are pushed to the hw queue.
> */
> #define DRM_SCHED_FENCE_DONT_PIPELINE DMA_FENCE_FLAG_USER_BITS
>
> @@ -71,7 +71,7 @@ enum drm_sched_priority {
> DRM_SCHED_PRIORITY_COUNT
> };
>
> -/* Used to chose between FIFO and RR jobs scheduling */
> +/* Used to choose between FIFO and RR jobs scheduling */
s/jobs scheduling/job-scheduling
> extern int drm_sched_policy;
>
> #define DRM_SCHED_POLICY_RR 0
> @@ -198,7 +198,7 @@ struct drm_sched_entity {
> *
> * Points to the finished fence of the last scheduled job. Only written
> * by the scheduler thread, can be accessed locklessly from
> - * drm_sched_job_arm() iff the queue is empty.
> + * drm_sched_job_arm() if the queue is empty.
> */
> struct dma_fence __rcu *last_scheduled;
>
> @@ -247,7 +247,7 @@ struct drm_sched_entity {
> * @sched: the scheduler to which this rq belongs to.
> * @entities: list of the entities to be scheduled.
> * @current_entity: the entity which is to be scheduled.
> - * @rb_tree_root: root of time based priory queue of entities for FIFO scheduling
> + * @rb_tree_root: root of time based priority queue of entities for FIFO scheduling
> *
> * Run queue is a set of entities scheduling command submissions for
> * one specific ring. It implements the scheduling policy that selects
> @@ -321,7 +321,7 @@ struct drm_sched_fence *to_drm_sched_fence(struct dma_fence *f);
> * @s_fence: contains the fences for the scheduling of job.
> * @finish_cb: the callback for the finished fence.
> * @credits: the number of credits this job contributes to the
> scheduler
> - * @work: Helper to reschdeule job kill to different context.
> + * @work: Helper to reschedule job kill to different context.
> * @id: a unique id assigned to each job scheduled on the scheduler.
> * @karma: increment on every hang caused by this job. If this exceeds the hang
> * limit of the scheduler then the job is marked guilty and will not
> diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
> index 8d0e34dad446..a53a9b802108 100644
> --- a/include/linux/dma-resv.h
> +++ b/include/linux/dma-resv.h
> @@ -105,10 +105,10 @@ enum dma_resv_usage {
> * This should be used by submissions which don't want to participate in
> * any implicit synchronization.
> *
> - * The most common case are preemption fences, page table updates, TLB
> + * The most common cases are preemption fences, page table updates, TLB
> * flushes as well as explicit synced user submissions.
Hm, would it make sense here to say "explicitly"?
> *
> - * Explicit synced user user submissions can be promoted to
> + * Explicit synced user submissions can be promoted to
Same here
> * DMA_RESV_USAGE_READ or DMA_RESV_USAGE_WRITE as needed using
> * dma_buf_import_sync_file() when implicit synchronization should
> * become necessary after initial adding of the fence.
Thanks,
P.