Thanks Christian,

I will modify and resend.

Regards,
Nirmoy
________________________________
From: Koenig, Christian <Christian.Koenig@amd.com>
Sent: Thursday, December 5, 2019 1:29:49 PM
To: Das, Nirmoy <Nirmoy.Das@amd.com>; Nirmoy Das <nirmoy.aiemd@gmail.com>; Deucher, Alexander <Alexander.Deucher@amd.com>; Ho, Kenny <Kenny.Ho@amd.com>
Cc: amd-gfx@lists.freedesktop.org <amd-gfx@lists.freedesktop.org>; Das, Nirmoy <Nirmoy.Das@amd.com>
Subject: Re: [RFC PATCH] drm/scheduler: rework entity creation

On 05.12.19 12:04, Nirmoy wrote:
> Hi Christian,
>
> I am not exactly sure about drm_sched_entity_set_priority(). I wonder
> whether just changing entity->priority to ctx->override_priority would
> work. With this change drm_sched_entity_select_rq() will choose an rq
> based on entity->priority, which seems correct to me. But is this
> enough to fix the old bug you were talking about, which messes up
> already scheduled jobs on a priority change?

Yes, that should perfectly do it.

>
> Okay, I just realized I need a lock to make sure that
> drm_sched_entity_set_priority() and drm_sched_entity_select_rq()
> don't run at the same time.

Yeah, you probably need to grab the lock and make sure that you get the
priority to use while holding the lock as well.
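
Roughly something like this untested sketch, assuming the existing
entity->rq_lock is the lock in question (just an illustration, not the
final implementation):

/* Publish the new priority under entity->rq_lock... */
void drm_sched_entity_set_priority(struct drm_sched_entity *entity,
                                   enum drm_sched_priority priority)
{
    spin_lock(&entity->rq_lock);
    entity->priority = priority;
    spin_unlock(&entity->rq_lock);
}

/* ...and in drm_sched_entity_select_rq(), sample the priority and
 * switch run queues under the same lock:
 */
    spin_lock(&entity->rq_lock);
    rq = drm_sched_entity_get_free_sched(entity);
    if (rq != entity->rq) {
        drm_sched_rq_remove_entity(entity->rq, entity);
        entity->rq = rq;
    }
    spin_unlock(&entity->rq_lock);

That way the priority used to pick the rq and the entity->rq update
always stay consistent with each other.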

Regards,
Christian.

>
> Regards,
> Nirmoy
>
> On 12/5/19 11:52 AM, Nirmoy Das wrote:
>> The entity currently keeps a copy of the run_queue list and modifies
>> it in drm_sched_entity_set_priority(). Entities shouldn't modify the
>> run_queue list, so use a drm_gpu_scheduler list instead of a
>> drm_sched_rq list in the drm_sched_entity struct. This way we can
>> select a run queue for a drm scheduler based on the entity/ctx's
>> priority.
>>
>> Signed-off-by: Nirmoy Das <nirmoy.das@amd.com>
>> ---
>>  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c  |  7 +--
>>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c  |  7 +--
>>  drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c  |  7 +--
>>  drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c  |  7 +--
>>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c   | 14 +++--
>>  drivers/gpu/drm/etnaviv/etnaviv_drv.c    |  8 +--
>>  drivers/gpu/drm/lima/lima_sched.c        |  5 +-
>>  drivers/gpu/drm/panfrost/panfrost_job.c  |  7 +--
>>  drivers/gpu/drm/scheduler/sched_entity.c | 65 +++++++++---------------
>>  drivers/gpu/drm/v3d/v3d_drv.c            |  7 +--
>>  include/drm/gpu_scheduler.h              |  9 ++--
>>  11 files changed, 69 insertions(+), 74 deletions(-)
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
>> index a0d3d7b756eb..e8f46c13d073 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
>> @@ -122,7 +122,7 @@ static int amdgpu_ctx_init(struct amdgpu_device *adev,
>>      for (i = 0; i < AMDGPU_HW_IP_NUM; ++i) {
>>          struct amdgpu_ring *rings[AMDGPU_MAX_RINGS];
>> -        struct drm_sched_rq *rqs[AMDGPU_MAX_RINGS];
>> +        struct drm_gpu_scheduler *sched_list[AMDGPU_MAX_RINGS];
>>          unsigned num_rings = 0;
>>          unsigned num_rqs = 0;
>> @@ -181,12 +181,13 @@ static int amdgpu_ctx_init(struct amdgpu_device *adev,
>>              if (!rings[j]->adev)
>>                  continue;
>> -            rqs[num_rqs++] = &rings[j]->sched.sched_rq[priority];
>> +            sched_list[num_rqs++] = &rings[j]->sched;
>>          }
>>          for (j = 0; j < amdgpu_ctx_num_entities[i]; ++j)
>>              r = drm_sched_entity_init(&ctx->entities[i][j].entity,
>> -                                      rqs, num_rqs, &ctx->guilty);
>> +                                      sched_list, num_rqs,
>> +                                      &ctx->guilty, priority);
>>          if (r)
>>              goto error_cleanup_entities;
>>      }
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> index 19ffe00d9072..a960dd7c0711 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
>> @@ -1957,11 +1957,12 @@ void amdgpu_ttm_set_buffer_funcs_status(struct amdgpu_device *adev, bool enable)
>>      if (enable) {
>>          struct amdgpu_ring *ring;
>> -        struct drm_sched_rq *rq;
>> +        struct drm_gpu_scheduler *sched;
>>          ring = adev->mman.buffer_funcs_ring;
>> -        rq = &ring->sched.sched_rq[DRM_SCHED_PRIORITY_KERNEL];
>> -        r = drm_sched_entity_init(&adev->mman.entity, &rq, 1, NULL);
>> +        sched = &ring->sched;
>> +        r = drm_sched_entity_init(&adev->mman.entity, &sched,
>> +                                  1, NULL, DRM_SCHED_PRIORITY_KERNEL);
>>          if (r) {
>>              DRM_ERROR("Failed setting up TTM BO move entity (%d)\n",
>>                        r);
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>> index e324bfe6c58f..b803a8882864 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
>> @@ -330,12 +330,13 @@ int amdgpu_uvd_sw_fini(struct amdgpu_device *adev)
>>  int amdgpu_uvd_entity_init(struct amdgpu_device *adev)
>>  {
>>      struct amdgpu_ring *ring;
>> -    struct drm_sched_rq *rq;
>> +    struct drm_gpu_scheduler *sched;
>>      int r;
>>      ring = &adev->uvd.inst[0].ring;
>> -    rq = &ring->sched.sched_rq[DRM_SCHED_PRIORITY_NORMAL];
>> -    r = drm_sched_entity_init(&adev->uvd.entity, &rq, 1, NULL);
>> +    sched = &ring->sched;
>> +    r = drm_sched_entity_init(&adev->uvd.entity, &sched,
>> +                              1, NULL, DRM_SCHED_PRIORITY_NORMAL);
>>      if (r) {
>>          DRM_ERROR("Failed setting up UVD kernel entity.\n");
>>          return r;
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
>> index 46b590af2fd2..b44f28d44fb4 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
>> @@ -240,12 +240,13 @@ int amdgpu_vce_sw_fini(struct amdgpu_device *adev)
>>  int amdgpu_vce_entity_init(struct amdgpu_device *adev)
>>  {
>>      struct amdgpu_ring *ring;
>> -    struct drm_sched_rq *rq;
>> +    struct drm_gpu_scheduler *sched;
>>      int r;
>>      ring = &adev->vce.ring[0];
>> -    rq = &ring->sched.sched_rq[DRM_SCHED_PRIORITY_NORMAL];
>> -    r = drm_sched_entity_init(&adev->vce.entity, &rq, 1, NULL);
>> +    sched = &ring->sched;
>> +    r = drm_sched_entity_init(&adev->vce.entity, &sched,
>> +                              1, NULL, DRM_SCHED_PRIORITY_NORMAL);
>>      if (r != 0) {
>>          DRM_ERROR("Failed setting up VCE run queue.\n");
>>          return r;
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> index a94c4faa5af1..ec6141773a92 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
>> @@ -2687,6 +2687,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>>  {
>>      struct amdgpu_bo_param bp;
>>      struct amdgpu_bo *root;
>> +    struct drm_gpu_scheduler *sched_list[AMDGPU_MAX_RINGS];
>>      int r, i;
>>      vm->va = RB_ROOT_CACHED;
>> @@ -2700,14 +2701,19 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
>>      spin_lock_init(&vm->invalidated_lock);
>>      INIT_LIST_HEAD(&vm->freed);
>> +    for (i = 0; i < adev->vm_manager.vm_pte_num_rqs; i++)
>> +        sched_list[i] = adev->vm_manager.vm_pte_rqs[i]->sched;
>> +
>>      /* create scheduler entities for page table updates */
>> -    r = drm_sched_entity_init(&vm->direct, adev->vm_manager.vm_pte_rqs,
>> -                              adev->vm_manager.vm_pte_num_rqs, NULL);
>> +    r = drm_sched_entity_init(&vm->direct, sched_list,
>> +                              adev->vm_manager.vm_pte_num_rqs,
>> +                              NULL, DRM_SCHED_PRIORITY_KERNEL);
>>      if (r)
>>          return r;
>> -    r = drm_sched_entity_init(&vm->delayed, adev->vm_manager.vm_pte_rqs,
>> -                              adev->vm_manager.vm_pte_num_rqs, NULL);
>> +    r = drm_sched_entity_init(&vm->delayed, sched_list,
>> +                              adev->vm_manager.vm_pte_num_rqs,
>> +                              NULL, DRM_SCHED_PRIORITY_KERNEL);
>>      if (r)
>>          goto error_free_direct;
>> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.c b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
>> index 1f9c01be40d7..a65c1e115e35 100644
>> --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.c
>> +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
>> @@ -65,12 +65,12 @@ static int etnaviv_open(struct drm_device *dev, struct drm_file *file)
>>      for (i = 0; i < ETNA_MAX_PIPES; i++) {
>>          struct etnaviv_gpu *gpu = priv->gpu[i];
>> -        struct drm_sched_rq *rq;
>> +        struct drm_gpu_scheduler *sched;
>>          if (gpu) {
>> -            rq = &gpu->sched.sched_rq[DRM_SCHED_PRIORITY_NORMAL];
>> -            drm_sched_entity_init(&ctx->sched_entity[i],
>> -                                  &rq, 1, NULL);
>> +            sched = &gpu->sched;
>> +            drm_sched_entity_init(&ctx->sched_entity[i], &sched,
>> +                                  1, NULL, DRM_SCHED_PRIORITY_NORMAL);
>>          }
>>      }
>> diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
>> index f522c5f99729..a7e53878d841 100644
>> --- a/drivers/gpu/drm/lima/lima_sched.c
>> +++ b/drivers/gpu/drm/lima/lima_sched.c
>> @@ -159,9 +159,10 @@ int lima_sched_context_init(struct lima_sched_pipe *pipe,
>>                              struct lima_sched_context *context,
>>                              atomic_t *guilty)
>>  {
>> -    struct drm_sched_rq *rq = pipe->base.sched_rq + DRM_SCHED_PRIORITY_NORMAL;
>> +    struct drm_gpu_scheduler *sched = &pipe->base;
>> -    return drm_sched_entity_init(&context->base, &rq, 1, guilty);
>> +    return drm_sched_entity_init(&context->base, &sched,
>> +                                 1, guilty, DRM_SCHED_PRIORITY_NORMAL);
>>  }
>>  void lima_sched_context_fini(struct lima_sched_pipe *pipe,
>> diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
>> index d411eb6c8eb9..84178bcf35c9 100644
>> --- a/drivers/gpu/drm/panfrost/panfrost_job.c
>> +++ b/drivers/gpu/drm/panfrost/panfrost_job.c
>> @@ -542,12 +542,13 @@ int panfrost_job_open(struct panfrost_file_priv *panfrost_priv)
>>  {
>>      struct panfrost_device *pfdev = panfrost_priv->pfdev;
>>      struct panfrost_job_slot *js = pfdev->js;
>> -    struct drm_sched_rq *rq;
>> +    struct drm_gpu_scheduler *sched;
>>      int ret, i;
>>      for (i = 0; i < NUM_JOB_SLOTS; i++) {
>> -        rq = &js->queue[i].sched.sched_rq[DRM_SCHED_PRIORITY_NORMAL];
>> -        ret = drm_sched_entity_init(&panfrost_priv->sched_entity[i], &rq, 1, NULL);
>> +        sched = &js->queue[i].sched;
>> +        ret = drm_sched_entity_init(&panfrost_priv->sched_entity[i],
>> +                                    &sched, 1, NULL, DRM_SCHED_PRIORITY_NORMAL);
>>          if (WARN_ON(ret))
>>              return ret;
>>      }
>> diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
>> index 461a7a8129f4..e10d37266836 100644
>> --- a/drivers/gpu/drm/scheduler/sched_entity.c
>> +++ b/drivers/gpu/drm/scheduler/sched_entity.c
>> @@ -38,9 +38,9 @@
>>   * submit to HW ring.
>>   *
>>   * @entity: scheduler entity to init
>> - * @rq_list: the list of run queue on which jobs from this
>> + * @sched_list: the list of drm scheds on which jobs from this
>>   *           entity can be submitted
>> - * @num_rq_list: number of run queue in rq_list
>> + * @num_sched_list: number of drm sched in sched_list
>>   * @guilty: atomic_t set to 1 when a job on this queue
>>   *          is found to be guilty causing a timeout
>>   *
>> @@ -50,32 +50,34 @@
>>   * Returns 0 on success or a negative error code on failure.
>>   */
>>  int drm_sched_entity_init(struct drm_sched_entity *entity,
>> -                          struct drm_sched_rq **rq_list,
>> -                          unsigned int num_rq_list,
>> -                          atomic_t *guilty)
>> +                          struct drm_gpu_scheduler **sched_list,
>> +                          unsigned int num_sched_list,
>> +                          atomic_t *guilty, enum drm_sched_priority priority)
>>  {
>>      int i;
>> -    if (!(entity && rq_list && (num_rq_list == 0 || rq_list[0])))
>> +    if (!(entity && sched_list && (num_sched_list == 0 || sched_list[0])))
>>          return -EINVAL;
>>      memset(entity, 0, sizeof(struct drm_sched_entity));
>>      INIT_LIST_HEAD(&entity->list);
>>      entity->rq = NULL;
>>      entity->guilty = guilty;
>> -    entity->num_rq_list = num_rq_list;
>> -    entity->rq_list = kcalloc(num_rq_list, sizeof(struct drm_sched_rq *),
>> -                              GFP_KERNEL);
>> -    if (!entity->rq_list)
>> +    entity->num_sched_list = num_sched_list;
>> +    entity->priority = priority;
>> +    entity->sched_list = kcalloc(num_sched_list,
>> +                                 sizeof(struct drm_gpu_scheduler *), GFP_KERNEL);
>> +
>> +    if(!entity->sched_list)
>>          return -ENOMEM;
>>      init_completion(&entity->entity_idle);
>> -    for (i = 0; i < num_rq_list; ++i)
>> -        entity->rq_list[i] = rq_list[i];
>> +    for (i = 0; i < num_sched_list; i++)
>> +        entity->sched_list[i] = sched_list[i];
>> -    if (num_rq_list)
>> -        entity->rq = rq_list[0];
>> +    if (num_sched_list)
>> +        entity->rq = &entity->sched_list[0]->sched_rq[entity->priority];
>>      entity->last_scheduled = NULL;
>> @@ -139,10 +141,10 @@ drm_sched_entity_get_free_sched(struct drm_sched_entity *entity)
>>      unsigned int min_jobs = UINT_MAX, num_jobs;
>>      int i;
>> -    for (i = 0; i < entity->num_rq_list; ++i) {
>> -        struct drm_gpu_scheduler *sched = entity->rq_list[i]->sched;
>> +    for (i = 0; i < entity->num_sched_list; ++i) {
>> +        struct drm_gpu_scheduler *sched = entity->sched_list[i];
>> -        if (!entity->rq_list[i]->sched->ready) {
>> +        if (!entity->sched_list[i]->ready) {
>>              DRM_WARN("sched%s is not ready, skipping", sched->name);
>>              continue;
>>          }
>> @@ -150,7 +152,7 @@ drm_sched_entity_get_free_sched(struct drm_sched_entity *entity)
>>          num_jobs = atomic_read(&sched->num_jobs);
>>          if (num_jobs < min_jobs) {
>>              min_jobs = num_jobs;
>> -            rq = entity->rq_list[i];
>> +            rq = &entity->sched_list[i]->sched_rq[entity->priority];
>>          }
>>      }
>> @@ -308,7 +310,7 @@ void drm_sched_entity_fini(struct drm_sched_entity *entity)
>>      dma_fence_put(entity->last_scheduled);
>>      entity->last_scheduled = NULL;
>> -    kfree(entity->rq_list);
>> +    kfree(entity->sched_list);
>>  }
>>  EXPORT_SYMBOL(drm_sched_entity_fini);
>> @@ -353,15 +355,6 @@ static void drm_sched_entity_wakeup(struct dma_fence *f,
>>      drm_sched_wakeup(entity->rq->sched);
>>  }
>> -/**
>> - * drm_sched_entity_set_rq_priority - helper for drm_sched_entity_set_priority
>> - */
>> -static void drm_sched_entity_set_rq_priority(struct drm_sched_rq **rq,
>> -                                             enum drm_sched_priority priority)
>> -{
>> -    *rq = &(*rq)->sched->sched_rq[priority];
>> -}
>> -
>>  /**
>>   * drm_sched_entity_set_priority - Sets priority of the entity
>>   *
>> @@ -373,20 +366,8 @@ static void drm_sched_entity_set_rq_priority(struct drm_sched_rq **rq,
>>  void drm_sched_entity_set_priority(struct drm_sched_entity *entity,
>>                                     enum drm_sched_priority priority)
>>  {
>> -    unsigned int i;
>> -
>> -    spin_lock(&entity->rq_lock);
>> -    for (i = 0; i < entity->num_rq_list; ++i)
>> -        drm_sched_entity_set_rq_priority(&entity->rq_list[i], priority);
>> -
>> -    if (entity->rq) {
>> -        drm_sched_rq_remove_entity(entity->rq, entity);
>> -        drm_sched_entity_set_rq_priority(&entity->rq, priority);
>> -        drm_sched_rq_add_entity(entity->rq, entity);
>> -    }
>> -
>> -    spin_unlock(&entity->rq_lock);
>> +    entity->priority = priority;
>>  }
>>  EXPORT_SYMBOL(drm_sched_entity_set_priority);
>> @@ -490,7 +471,7 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
>>      struct dma_fence *fence;
>>      struct drm_sched_rq *rq;
>> -    if (spsc_queue_count(&entity->job_queue) || entity->num_rq_list <= 1)
>> +    if (spsc_queue_count(&entity->job_queue) || entity->num_sched_list <= 1)
>>          return;
>>      fence = READ_ONCE(entity->last_scheduled);
>> diff --git a/drivers/gpu/drm/v3d/v3d_drv.c b/drivers/gpu/drm/v3d/v3d_drv.c
>> index 1a07462b4528..c6aff1aedd27 100644
>> --- a/drivers/gpu/drm/v3d/v3d_drv.c
>> +++ b/drivers/gpu/drm/v3d/v3d_drv.c
>> @@ -140,7 +140,7 @@ v3d_open(struct drm_device *dev, struct drm_file *file)
>>  {
>>      struct v3d_dev *v3d = to_v3d_dev(dev);
>>      struct v3d_file_priv *v3d_priv;
>> -    struct drm_sched_rq *rq;
>> +    struct drm_gpu_scheduler *sched;
>>      int i;
>>      v3d_priv = kzalloc(sizeof(*v3d_priv), GFP_KERNEL);
>> @@ -150,8 +150,9 @@ v3d_open(struct drm_device *dev, struct drm_file *file)
>>      v3d_priv->v3d = v3d;
>>      for (i = 0; i < V3D_MAX_QUEUES; i++) {
>> -        rq = &v3d->queue[i].sched.sched_rq[DRM_SCHED_PRIORITY_NORMAL];
>> -        drm_sched_entity_init(&v3d_priv->sched_entity[i], &rq, 1, NULL);
>> +        sched = &v3d->queue[i].sched;
>> +        drm_sched_entity_init(&v3d_priv->sched_entity[i], &sched,
>> +                              1, NULL, DRM_SCHED_PRIORITY_NORMAL);
>>      }
>>      file->driver_priv = v3d_priv;
>> diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
>> index 684692a8ed76..9df322dfac30 100644
>> --- a/include/drm/gpu_scheduler.h
>> +++ b/include/drm/gpu_scheduler.h
>> @@ -81,8 +81,9 @@ enum drm_sched_priority {
>>  struct drm_sched_entity {
>>      struct list_head list;
>>      struct drm_sched_rq *rq;
>> -    struct drm_sched_rq **rq_list;
>> -    unsigned int num_rq_list;
>> +    unsigned int num_sched_list;
>> +    struct drm_gpu_scheduler **sched_list;
>> +    enum drm_sched_priority priority;
>>      spinlock_t rq_lock;
>>      struct spsc_queue job_queue;
>> @@ -312,9 +313,9 @@ void drm_sched_rq_remove_entity(struct drm_sched_rq *rq,
>>                                  struct drm_sched_entity *entity);
>>  int drm_sched_entity_init(struct drm_sched_entity *entity,
>> -                          struct drm_sched_rq **rq_list,
>> +                          struct drm_gpu_scheduler **sched_list,
>>                            unsigned int num_rq_list,
>> -                          atomic_t *guilty);
>> +                          atomic_t *guilty, enum drm_sched_priority priority);
>>  long drm_sched_entity_flush(struct drm_sched_entity *entity, long timeout);
>>  void drm_sched_entity_fini(struct drm_sched_entity *entity);
>>  void drm_sched_entity_destroy(struct drm_sched_entity *entity);