<div class="moz-cite-prefix">On 2018年08月02日 14:01, Nayan Deshmukh
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAFd4ddyf=EhJ7pmzq3sEGa6U1sQ7Ga7fUH+sW+VKeBxJABjnKQ@mail.gmail.com">
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<div dir="ltr">Hi David,<br>
<div><br>
<div class="gmail_quote">
<div dir="ltr">On Thu, Aug 2, 2018 at 8:22 AM Zhou,
David(ChunMing) <<a href="mailto:David1.Zhou@amd.com"
moz-do-not-send="true">David1.Zhou@amd.com</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div lang="EN-US">
<div class="gmail-m_963201938271036718WordSection1">
<p class="MsoNormal">Another big question:</p>
<p class="MsoNormal">I agree the general idea is good
to balance scheduler load for same ring family.</p>
<p class="MsoNormal">But, when same entity job run on
different scheduler, that means the later job could
be completed ahead of front, Right?</p>
</div>
</div>
</blockquote>
> Really good question. To avoid this scenario we do not move an entity which already has a job in the hardware queue. We only move entities whose last_scheduled fence has been signaled, which means that the last submitted job of the entity has finished executing.

Good handling; I missed that when reviewing them.

Cheers,
David Zhou
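
For anyone following along, the check Nayan describes would plug into run-queue selection roughly like this. This is a minimal sketch based only on the description above, not the exact code from the series; the function name drm_sched_entity_select_rq and the locking details are my assumptions:

static void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
{
	struct dma_fence *fence;
	struct drm_sched_rq *rq;

	/* Nothing to balance if the entity can run on only one rq. */
	if (entity->num_rq_list <= 1)
		return;

	/*
	 * Only move idle entities: if the last submitted job has not
	 * finished yet, rescheduling could complete a later job before
	 * an earlier one and break fence ordering within the context.
	 */
	fence = READ_ONCE(entity->last_scheduled);
	if (fence && !dma_fence_is_signaled(fence))
		return;

	rq = drm_sched_entity_get_free_sched(entity);
	if (rq == entity->rq)
		return;

	spin_lock(&entity->rq_lock);
	drm_sched_rq_remove_entity(entity->rq, entity);
	entity->rq = rq;
	spin_unlock(&entity->rq_lock);
}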
<blockquote type="cite"
cite="mid:CAFd4ddyf=EhJ7pmzq3sEGa6U1sQ7Ga7fUH+sW+VKeBxJABjnKQ@mail.gmail.com">
<div dir="ltr">
<div>
<div class="gmail_quote">
<div><br>
</div>

> Moving an entity which already has a job in the hardware queue would also hinder the dependency optimization that we are using, and hence would not lead to better performance anyway. I have talked about the issue in more detail here [1]. Please let me know if you have any more doubts regarding this.
>
> Cheers,
> Nayan
>
> [1] http://ndesh26.github.io/gsoc/2018/06/14/GSoC-Update-A-Curious-Case-of-Dependency-Handling/
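
For context, the dependency optimization referred to here is the scheduler's same-ring fast path: when a job depends on a fence produced by the same scheduler, it only waits for that fence to be scheduled rather than fully completed, since the ring itself executes jobs in order. A condensed, non-verbatim sketch of the 2018-era drm_sched_entity_add_dependency_cb (the foreign-fence path is omitted here):

static bool drm_sched_entity_add_dependency_cb(struct drm_sched_entity *entity)
{
	struct drm_gpu_scheduler *sched = entity->rq->sched;
	struct dma_fence *fence = entity->dependency;
	struct drm_sched_fence *s_fence = to_drm_sched_fence(fence);

	if (s_fence && s_fence->sched == sched) {
		/*
		 * The dependency comes from the same scheduler: waiting
		 * for it to be *scheduled* is enough, as the ring keeps
		 * jobs in order. Moving the entity to another scheduler
		 * would lose exactly this fast path.
		 */
		fence = dma_fence_get(&s_fence->scheduled);
		dma_fence_put(entity->dependency);
		entity->dependency = fence;
		if (!dma_fence_add_callback(fence, &entity->cb,
					    drm_sched_entity_clear_dep))
			return true;

		/* Already signaled: proceed without waiting. */
		dma_fence_put(fence);
		return false;
	}

	/* Foreign fences take the full wait-for-completion path (omitted). */
	return false;
}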
<blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div lang="EN-US">
<div class="gmail-m_963201938271036718WordSection1">
<p class="MsoNormal">That will break fence design,
later fence must be signaled after front fence in
same fence context.</p>
<p class="MsoNormal"> </p>
<p class="MsoNormal">Anything I missed?</p>
<p class="MsoNormal"> </p>
<p class="MsoNormal">Regards,</p>
<p class="MsoNormal">David Zhou</p>
<p class="MsoNormal"> </p>
<p class="MsoNormal"><b>From:</b> dri-devel <<a
href="mailto:dri-devel-bounces@lists.freedesktop.org"
target="_blank" moz-do-not-send="true">dri-devel-bounces@lists.freedesktop.org</a>>
<b>On Behalf Of </b>Nayan Deshmukh<br>
<b>Sent:</b> Thursday, August 02, 2018 12:07 AM<br>
<b>To:</b> Grodzovsky, Andrey <<a
href="mailto:Andrey.Grodzovsky@amd.com"
target="_blank" moz-do-not-send="true">Andrey.Grodzovsky@amd.com</a>><br>
<b>Cc:</b> <a
href="mailto:amd-gfx@lists.freedesktop.org"
target="_blank" moz-do-not-send="true">amd-gfx@lists.freedesktop.org</a>;
Maling list - DRI developers <<a
href="mailto:dri-devel@lists.freedesktop.org"
target="_blank" moz-do-not-send="true">dri-devel@lists.freedesktop.org</a>>;
Koenig, Christian <<a
href="mailto:Christian.Koenig@amd.com"
target="_blank" moz-do-not-send="true">Christian.Koenig@amd.com</a>><br>
<b>Subject:</b> Re: [PATCH 3/4] drm/scheduler: add
new function to get least loaded sched v2</p>
<p class="MsoNormal"> </p>
>>
>> Yes, that is correct.
>>
>> Nayan
>>
>> On Wed, Aug 1, 2018, 9:05 PM Andrey Grodzovsky <Andrey.Grodzovsky@amd.com> wrote:
<blockquote style="border-color:currentcolor
currentcolor currentcolor
rgb(204,204,204);border-style:none none none
solid;border-width:medium medium medium
1pt;padding:0in 0in 0in
6pt;margin-left:4.8pt;margin-right:0in">
<p class="MsoNormal" style="margin-bottom:12pt">Clarification
question - if the run queues belong to
different
<br>
schedulers they effectively point to different
rings,<br>
<br>
it means we allow to move (reschedule) a
drm_sched_entity from one ring <br>
to another - i assume that the idea int the
first place, that<br>
<br>
you have a set of HW rings and you can utilize
any of them for your jobs <br>
(like compute rings). Correct ?<br>
<br>
Andrey<br>
<br>
<br>
>>> On 08/01/2018 04:20 AM, Nayan Deshmukh wrote:
>>> > The function selects the run queue from the rq_list with the
>>> > least load. The load is decided by the number of jobs in a
>>> > scheduler.
>>> >
>>> > v2: avoid using atomic read twice consecutively, instead store
>>> > it locally
>>> >
>>> > Signed-off-by: Nayan Deshmukh <nayan26deshmukh@gmail.com>
>>> > ---
>>> >  drivers/gpu/drm/scheduler/gpu_scheduler.c | 25 +++++++++++++++++++++++++
>>> >  1 file changed, 25 insertions(+)
>>> >
>>> > diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler.c b/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>> > index 375f6f7f6a93..fb4e542660b0 100644
>>> > --- a/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>> > +++ b/drivers/gpu/drm/scheduler/gpu_scheduler.c
>>> > @@ -255,6 +255,31 @@ static bool drm_sched_entity_is_ready(struct drm_sched_entity *entity)
>>> >  	return true;
>>> >  }
>>> >
>>> > +/**
>>> > + * drm_sched_entity_get_free_sched - Get the rq from rq_list with least load
>>> > + *
>>> > + * @entity: scheduler entity
>>> > + *
>>> > + * Return the pointer to the rq with least load.
>>> > + */
>>> > +static struct drm_sched_rq *
>>> > +drm_sched_entity_get_free_sched(struct drm_sched_entity *entity)
>>> > +{
>>> > +	struct drm_sched_rq *rq = NULL;
>>> > +	unsigned int min_jobs = UINT_MAX, num_jobs;
>>> > +	int i;
>>> > +
>>> > +	for (i = 0; i < entity->num_rq_list; ++i) {
>>> > +		num_jobs = atomic_read(&entity->rq_list[i]->sched->num_jobs);
>>> > +		if (num_jobs < min_jobs) {
>>> > +			min_jobs = num_jobs;
>>> > +			rq = entity->rq_list[i];
>>> > +		}
>>> > +	}
>>> > +
>>> > +	return rq;
>>> > +}
>>> > +
>>> >  static void drm_sched_entity_kill_jobs_cb(struct dma_fence *f,
>>> > 					  struct dma_fence_cb *cb)
>>> >  {
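
To tie this back to Andrey's question: with this series an entity is initialized with the full list of run queues it may run on, and the helper above picks the least loaded one. A rough driver-side sketch; the amdgpu-style names (adev, compute_ring, AMDGPU_MAX_COMPUTE_RINGS) are illustrative, and the rq-list init signature is assumed from this series:

	/*
	 * Let one entity run on any compute ring; the scheduler then
	 * balances (idle) entities onto the least loaded ring.
	 */
	struct drm_sched_rq *rq_list[AMDGPU_MAX_COMPUTE_RINGS];
	unsigned int i, num_rqs = 0;
	int r;

	for (i = 0; i < adev->gfx.num_compute_rings; ++i) {
		struct drm_gpu_scheduler *sched = &adev->gfx.compute_ring[i].sched;

		rq_list[num_rqs++] = &sched->sched_rq[DRM_SCHED_PRIORITY_NORMAL];
	}

	r = drm_sched_entity_init(&entity, rq_list, num_rqs, NULL);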