> [ML] If the job completes, the job's sched fence callback will take this
> spin_lock and remove itself from mirror_list, so we are still safe to call
> amd_sched_job_kickout(), and it will do nothing in that case.

Indeed, but I still think that this is a bad approach, because we then reset
the hardware without a good reason.

      <blockquote type="cite">Besides, original logic also force
        complete the hw fence, and it works well …
        <o:p></o:p>
      </blockquote>
Force completion is not so much the issue; the issue is the order in which
you do things.

See, the original code first stops the scheduler and removes the connection
between the hardware fence and the job in an atomic manner, and only THEN
forces the hardware fence to complete.

This way we could be sure that nothing happened in parallel, e.g. that we
don't try to signal the fence twice or something like that.

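In code terms, the original flow is roughly the following sketch (simplified,
using the function names quoted further down in this thread):

	struct amdgpu_ring *ring;
	int i;

	/* 1) park every scheduler thread and unlink job <-> hw fence */
	for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
		ring = adev->rings[i];
		if (!ring || !ring->sched.thread)
			continue;

		kthread_park(ring->sched.thread);
		amd_sched_hw_job_reset(&ring->sched);
	}

	/* 2) only now, with nothing connected any more, force completion */
	amdgpu_fence_driver_force_completion(adev);
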
> A statement like "You are missing that it is entirely possible that the job
> will complete while we are trying to kick it out."

Sorry, I should have been clearer.

      <blockquote type="cite">Is not a good reason to reject my
        approach, because that is okay if the job just completed …<o:p></o:p>
      </blockquote>
We usually try to take a defensive approach, so stopping everything, removing
the hardware fence connection and then explicitly kicking out the job in
question sounds much better than doing it implicitly through the hardware
fence completion.

Even if this works (which I'm still not sure it does), it is a really awkward
and hard to understand approach.

Regards,
Christian.

On 09.05.2017 at 13:58, Liu, Monk wrote:

        <p class="MsoNormal">You are missing that it is entirely
          possible that the job will complete while we are trying to
          kick it out.<o:p></o:p></p>
        <p class="MsoNormal"><o:p> </o:p></p>
        <p class="MsoNormal">[ML] if the job complete, the job’s sched
          fence callback will take this spin_lock and remove itself from
          mirror_list, so we are still safe to call
          amd_sched_job_kickout(), and it will do nothing if so<o:p></o:p></p>
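Roughly, the interaction being claimed here looks like the following sketch
(the completion-side function is paraphrased from gpu_scheduler.c, so its
exact name and shape here are an assumption; the kickout side is from the
patch below):

	/* Completion path (paraphrased): on job completion the scheduler
	 * removes the job from the mirror list under job_list_lock. */
	static void amd_sched_job_finish(struct amd_sched_job *s_job)
	{
		struct amd_gpu_scheduler *sched = s_job->sched;

		spin_lock(&sched->job_list_lock);
		list_del_init(&s_job->node);
		spin_unlock(&sched->job_list_lock);
	}

	/* Kickout path: serialized on the same lock; list_del_init() is
	 * a no-op if the job already removed itself on completion. */
	void amd_sched_job_kickout(struct amd_sched_job *s_job)
	{
		struct amd_gpu_scheduler *sched = s_job->sched;

		spin_lock(&sched->job_list_lock);
		list_del_init(&s_job->node);
		spin_unlock(&sched->job_list_lock);
	}
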
        <p class="MsoNormal">Please go through the whole steps again,<o:p></o:p></p>
        <p class="MsoNormal"><o:p> </o:p></p>
        <p class="MsoNormal">Besides, original logic also force complete
          the hw fence, and it works well …
          <o:p></o:p></p>
        <p class="MsoNormal"><o:p> </o:p></p>
        <p class="MsoNormal">I don’t see the solid reason why you insist
          on your approach, please go through the steps again  and give
          me the details about where is incorrect than I can fix it
          <o:p></o:p></p>
        <p class="MsoNormal"><o:p> </o:p></p>
        <p class="MsoNormal">State like “You are missing that it is
          entirely possible that the job will complete while we are
          trying to kick it out.” Is not a good reason to reject my
          approach, because that is okay if the job just completed …<o:p></o:p></p>
        <p class="MsoNormal"><o:p> </o:p></p>
        <p class="MsoNormal">BR Monk<br>
          <br>
          <o:p></o:p></p>
        <p class="MsoNormal"><span
style="font-size:11.0pt;font-family:"Calibri",sans-serif;color:#1F497D"><o:p> </o:p></span></p>
From: Koenig, Christian
Sent: Tuesday, May 09, 2017 3:49 PM
To: Liu, Monk <Monk.Liu@amd.com>; Christian König <deathsimple@vodafone.de>; amd-gfx@lists.freedesktop.org
Subject: Re: [PATCH 4/4] drm/amdgpu/SRIOV:implement guilty job TDR (V2)

> [ML] Really not necessary, we have a spin_lock to protect the mirror_list;
> nothing will be messed up ...

          <p class="MsoNormal">You are missing that it is entirely
            possible that the job will complete while we are trying to
            kick it out.<br>
            <br>
            <br>
            <o:p></o:p></p>
> [ML] Why not touch the hardware fence at all? The original/bare-metal GPU
> reset also signals all rings' hardware fences first; I just followed the
> original logic ...
> The scheduler fence will be auto-signaled after the hw fence signals. Any
> problem with that? What's the concern?

          <p class="MsoNormal">The hardware runs async to the CPU which
            tries to reset it, so we need to be careful in which order
            things are done.<br>
            <br>
            <br>
            <o:p></o:p></p>
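For illustration, the kind of interleaving that has to be survived
(hypothetical timeline):

	/*
	 * CPU (reset path)               GPU / IRQ path
	 * ----------------               --------------
	 * job hangs, TDR fires
	 *                                job completes after all;
	 *                                hw fence signals, scheduler
	 *                                callback runs
	 * force-complete the hw fence
	 *   -> the fence gets signaled twice, unless the job/fence
	 *      connection was already removed before this point
	 */
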
> [ML] No, I don't think so. The kickout must happen before hw_job_reset,
> otherwise the scheduler fence callback will be removed and you would need
> to manually install it again later, which is not correct:
> For the guilty job, we just kick it out before the job reset. In job_reset
> we only reset the other, innocent jobs (and unbind the scheduler fence
> callbacks for them). After the hw fence is forcibly set to drv_seq, all hw
> fences are signaled (this is the way the original logic works, I didn't
> change that). When we get to sched_recovery(), it will recover all innocent
> jobs and hook the scheduler fences up to new hw fences. That way only the
> guilty job is dropped forever.

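In code, that proposed order is roughly the following sketch (condensed from
the patch quoted below):

	/* 1) kick out the guilty job first, before anything is parked */
	if (job && amd_sched_invalidate_job(&job->base, amdgpu_job_hang_limit))
		amd_sched_job_kickout(&job->base);

	/* 2) park all rings, but job_reset/force-complete only the hung one */
	for (i = j; i < j + AMDGPU_MAX_RINGS; ++i) {
		ring = adev->rings[i % AMDGPU_MAX_RINGS];
		if (!ring || !ring->sched.thread)
			continue;

		kthread_park(ring->sched.thread);

		if (job && j != i)
			continue;

		amd_sched_hw_job_reset(&ring->sched);
		amdgpu_fence_driver_force_completion_ring(ring);
	}
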
          <p class="MsoNormal">Again same problem here.<br>
            <br>
            To be absolutely sure that everything works as expected we
            need to do it in the following order:<br>
            <br>
            1. Stop the scheduler from feeding more jobs to the hardware
            when a jobs completes.<br>
            <br>
            2. Then call hw_job_reset to remove the connection between
            job and hardware fence.<br>
            <br>
            3. Test if job is now completed. It is entirely possible
            that the job completed while we started the work to reset
            the hardware.<br>
            <br>
            Removing the connection between the hardware fence and the
            job is the deadline, if the job completes after that it is
            lost.<br>
            <br>
            4. Check the karma and kick out the job if we found it
            guilty.<br>
            <br>
            5. Get the whole stuff working again, e.g. reset the
            hardware, restart the scheduler etc...<br>
            <br>
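A rough, untested sketch of that order for the hung ring (the
fence_is_signaled() check on the scheduler fence in step 3 is an assumption
about how the completion test could be implemented):

	kthread_park(ring->sched.thread);	/* 1. stop the scheduler */
	amd_sched_hw_job_reset(&ring->sched);	/* 2. unlink job <-> hw fence */

	/* 3. past the deadline: did the job complete in the meantime? */
	if (job && fence_is_signaled(&job->base.s_fence->finished))
		job = NULL;			/* it made it, nothing to kick */

	/* 4. karma check, kick out the job if it is guilty */
	if (job && amd_sched_invalidate_job(&job->base, amdgpu_job_hang_limit))
		amd_sched_job_kickout(&job->base);

	/* 5. reset the hardware, then restart everything */
	amd_sched_job_recovery(&ring->sched);
	kthread_unpark(ring->sched.thread);
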
Regards,
Christian.

On 09.05.2017 at 04:45, Liu, Monk wrote:

            <p class="MsoNormal" style="margin-bottom:12.0pt"><span
                style="font-size:10.0pt">>  
                <br>
                > -     /* block scheduler */<br>
                > -     for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {<br>
                > -             ring = adev->rings[i];<br>
                > +     /* we start from the ring trigger GPU hang */<br>
                > +     j = job ? job->ring->idx : 0;<br>
                > +<br>
                > +     if (job)<br>
                > +             if
                (amd_sched_invalidate_job(&job->base,
                amdgpu_job_hang_limit))<br>
                > +                    
                amd_sched_job_kickout(&job->base);<br>
                <br>
Well, that looks like the wrong order to me. We should probably stop the
scheduler before doing anything with the job.

[ML] Really not necessary, we have a spin_lock to protect the mirror_list;
nothing will be messed up ...

>
> +void amdgpu_fence_driver_force_completion_ring(struct amdgpu_ring
> +*ring) {
> +     if (ring)
> +             amdgpu_fence_write(ring, ring->fence_drv.sync_seq); }
> +

The coding style is completely off.

[ML] I don't know why it looks like broken coding style on the email side,
but I'm sure it is correct on my side, check here:

void amdgpu_fence_driver_force_completion_ring(struct amdgpu_ring *ring)
{
	if (ring)
		amdgpu_fence_write(ring, ring->fence_drv.sync_seq);
}

In addition to that, I don't think that this is a good idea. We should
probably rather just signal all scheduler fences instead and not touch the
hardware fence at all.

[ML] Why not touch the hardware fence at all? The original/bare-metal GPU
reset also signals all rings' hardware fences first; I just followed the
original logic ...
The scheduler fence will be auto-signaled after the hw fence signals. Any
problem with that? What's the concern?

> -     for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
> -             struct amdgpu_ring *ring = adev->rings[i];
> +     for (i = j; i < j + AMDGPU_MAX_RINGS; ++i) {
> +             ring = adev->rings[i % AMDGPU_MAX_RINGS];
>               if (!ring || !ring->sched.thread)
>                       continue;
>
> +             if (job && j != i) {
> +                     kthread_unpark(ring->sched.thread);
> +                     continue;
> +             }
> +

Please split this patch up a bit further: e.g. first the handling to only
hw_job_reset the ring in question, then the kickout handling.

[ML] No, I don't think so. The kickout must happen before hw_job_reset,
otherwise the scheduler fence callback will be removed and you would need to
manually install it again later, which is not correct:
For the guilty job, we just kick it out before the job reset. In job_reset we
only reset the other, innocent jobs (and unbind the scheduler fence callbacks
for them). After the hw fence is forcibly set to drv_seq, all hw fences are
signaled (this is the way the original logic works, I didn't change that).
When we get to sched_recovery(), it will recover all innocent jobs and hook
the scheduler fences up to new hw fences. That way only the guilty job is
dropped forever.

-----Original Message-----
From: Christian König [mailto:deathsimple@vodafone.de]
Sent: Monday, May 08, 2017 9:12 PM
To: Liu, Monk <Monk.Liu@amd.com>; amd-gfx@lists.freedesktop.org
Cc: Koenig, Christian <Christian.Koenig@amd.com>
Subject: Re: [PATCH 4/4] drm/amdgpu/SRIOV:implement guilty job TDR (V2)

On 08.05.2017 at 09:01, Liu, Monk wrote:
> @Christian
>
> This one is changed to the guilty-job scheme in accordance with your
> response.
>
> BR Monk
>
> -----Original Message-----
> From: Monk Liu [mailto:Monk.Liu@amd.com]
> Sent: Monday, May 08, 2017 3:00 PM
> To: amd-gfx@lists.freedesktop.org
> Cc: Liu, Monk <Monk.Liu@amd.com>
> Subject: [PATCH] drm/amdgpu/SRIOV:implement guilty job TDR (V2)
>
> 1. TDR will kick out the guilty job if its hang count exceeds the threshold
>    given by the kernel parameter "job_hang_limit"; that way a bad command
>    stream will not cause GPU hangs indefinitely.
>
>    By default this threshold is 1, so a job will be kicked out after it
>    hangs once.
>
> 2. If a job times out, the TDR routine will not reset all scheds/rings.
>    Instead it will only reset the given one, indicated by @job of
>    amdgpu_sriov_gpu_reset; that way we don't need to reset and recover
>    every sched/ring if we already know which job caused the GPU hang.
>
> 3. Unblock sriov_gpu_reset for the AI family.
>
> 4. Don't init an entity for the KIQ.
>
> TODO:
> When a job is considered guilty, we should mark some flag in its fence
> status field and let the UMD side be aware that this fence signaling is not
> due to job completion but to a job hang.
>
> Change-Id: I7b89c19a3de93249db570d0a80522176b1525a09
> Signed-off-by: Monk Liu <Monk.Liu@amd.com>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu.h           |  1 +
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c       |  4 +++
>  drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    | 36 ++++++++++++++++++++-------
>  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c       |  4 +++
>  drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c     |  6 +++++
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h      |  1 +
>  drivers/gpu/drm/amd/scheduler/gpu_scheduler.c | 11 +++++++-
>  drivers/gpu/drm/amd/scheduler/gpu_scheduler.h |  7 ++++++
>  8 files changed, 60 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> index 90a69bf..93bcea2 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> @@ -111,6 +111,7 @@ extern int amdgpu_prim_buf_per_se;
>  extern int amdgpu_pos_buf_per_se;
>  extern int amdgpu_cntl_sb_buf_per_se;
>  extern int amdgpu_param_buf_per_se;
> +extern int amdgpu_job_hang_limit;
>
>  #define AMDGPU_DEFAULT_GTT_SIZE_MB          3072ULL /* 3GB by default */
>  #define AMDGPU_WAIT_IDLE_TIMEOUT_IN_MS      3000
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> index b4bbbb3..23afc58 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
> @@ -52,6 +52,10 @@ static int amdgpu_ctx_init(struct amdgpu_device *adev, struct amdgpu_ctx *ctx)
>               struct amd_sched_rq *rq;
>
>               rq = &ring->sched.sched_rq[AMD_SCHED_PRIORITY_NORMAL];
> +
> +             if (ring == &adev->gfx.kiq.ring)
> +                     continue;
> +

That looks like a bug fix and should probably go into a separate patch.

>               r = amd_sched_entity_init(&ring->sched, &ctx->rings[i].entity,
>                                         rq, amdgpu_sched_jobs);
>               if (r)
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> index 0e5f314..f3990fe 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
> @@ -2537,7 +2537,7 @@ static int amdgpu_recover_vram_from_shadow(struct amdgpu_device *adev,
>   */
>  int amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, struct amdgpu_job *job)
>  {
> -     int i, r = 0;
> +     int i, j, r = 0;
>       int resched;
>       struct amdgpu_bo *bo, *tmp;
>       struct amdgpu_ring *ring;
> @@ -2550,19 +2550,30 @@ int amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, struct amdgpu_job *job)
>       /* block TTM */
>       resched = ttm_bo_lock_delayed_workqueue(&adev->mman.bdev);
>
> -     /* block scheduler */
> -     for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
> -             ring = adev->rings[i];
> +     /* we start from the ring that triggered the GPU hang */
> +     j = job ? job->ring->idx : 0;
> +
> +     if (job)
> +             if (amd_sched_invalidate_job(&job->base, amdgpu_job_hang_limit))
> +                     amd_sched_job_kickout(&job->base);

Well, that looks like the wrong order to me. We should probably stop the
scheduler before doing anything with the job.

>
> +     /* block scheduler */
> +     for (i = j; i < j + AMDGPU_MAX_RINGS; ++i) {
> +             ring = adev->rings[i % AMDGPU_MAX_RINGS];
>               if (!ring || !ring->sched.thread)
>                       continue;
>
>               kthread_park(ring->sched.thread);
> +
> +             if (job && j != i)
> +                     continue;
> +
> +             /* only do job_reset on the hung ring if @job is not NULL */
>               amd_sched_hw_job_reset(&ring->sched);
> -     }
>
> -     /* after all hw jobs are reset, hw fence is meaningless, so force_completion */
> -     amdgpu_fence_driver_force_completion(adev);
> +             /* after the hw job is reset, the hw fence is meaningless, so force_completion */
> +             amdgpu_fence_driver_force_completion_ring(ring);
> +     }
>
>       /* request to take full control of GPU before re-initialization */
>       if (job)
> @@ -2615,11 +2626,16 @@ int amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, struct amdgpu_job *job)
>       }
>       fence_put(fence);
>
> -     for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
> -             struct amdgpu_ring *ring = adev->rings[i];
> +     for (i = j; i < j + AMDGPU_MAX_RINGS; ++i) {
> +             ring = adev->rings[i % AMDGPU_MAX_RINGS];
>               if (!ring || !ring->sched.thread)
>                       continue;
>
> +             if (job && j != i) {
> +                     kthread_unpark(ring->sched.thread);
> +                     continue;
> +             }
> +

Please split this patch up a bit further: e.g. first the handling to only
hw_job_reset the ring in question, then the kickout handling.

>               amd_sched_job_recovery(&ring->sched);
>               kthread_unpark(ring->sched.thread);
>       }
> @@ -2629,6 +2645,8 @@ int amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, struct amdgpu_job *job)
>       if (r) {
>               /* bad news, how to tell it to userspace ? */
>               dev_info(adev->dev, "GPU reset failed\n");
> +     } else {
> +             dev_info(adev->dev, "GPU reset succeeded!\n");
>       }
>
>       adev->gfx.in_reset = false;
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> index 416908a..fd3691a8 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> @@ -112,6 +112,7 @@ int amdgpu_prim_buf_per_se = 0;
>  int amdgpu_pos_buf_per_se = 0;
>  int amdgpu_cntl_sb_buf_per_se = 0;
>  int amdgpu_param_buf_per_se = 0;
> +int amdgpu_job_hang_limit = 0;
>
>  MODULE_PARM_DESC(vramlimit, "Restrict VRAM for testing, in megabytes");
>  module_param_named(vramlimit, amdgpu_vram_limit, int, 0600);
> @@ -237,6 +238,9 @@ module_param_named(cntl_sb_buf_per_se, amdgpu_cntl_sb_buf_per_se, int, 0444);
>  MODULE_PARM_DESC(param_buf_per_se, "the size of Off-Chip Pramater Cache per Shader Engine (default depending on gfx)");
>  module_param_named(param_buf_per_se, amdgpu_param_buf_per_se, int, 0444);
>
> +MODULE_PARM_DESC(job_hang_limit, "how many times a job is allowed to hang before it is dropped (default 0)");
> +module_param_named(job_hang_limit, amdgpu_job_hang_limit, int, 0444);
> +
>
>  static const struct pci_device_id pciidlist[] = {
>  #ifdef CONFIG_DRM_AMDGPU_SI
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> index d7523d1..8de3bd3 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
> @@ -541,6 +541,12 @@ void amdgpu_fence_driver_force_completion(struct amdgpu_device *adev)
>       }
>  }
>
> +void amdgpu_fence_driver_force_completion_ring(struct amdgpu_ring
> +*ring) {
> +     if (ring)
> +             amdgpu_fence_write(ring, ring->fence_drv.sync_seq); }
> +

The coding style is completely off.

In addition to that, I don't think that this is a good idea. We should
probably rather just signal all scheduler fences instead and not touch the
hardware fence at all.

>  /*
>   * Common fence implementation
>   */
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> index 981ef08..03e88c6 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> @@ -76,6 +76,7 @@ struct amdgpu_fence_driver {
>  int amdgpu_fence_driver_init(struct amdgpu_device *adev);
>  void amdgpu_fence_driver_fini(struct amdgpu_device *adev);
>  void amdgpu_fence_driver_force_completion(struct amdgpu_device *adev);
> +void amdgpu_fence_driver_force_completion_ring(struct amdgpu_ring *ring);
>
>  int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring,
>                                 unsigned num_hw_submission);
> diff --git a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
> index 6f4e31f..4e97e6d 100644
> --- a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
> +++ b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
> @@ -390,9 +390,18 @@ void amd_sched_hw_job_reset(struct amd_gpu_scheduler *sched)
>                                         &s_job->s_fence->cb)) {
>                       fence_put(s_job->s_fence->parent);
>                       s_job->s_fence->parent = NULL;
> +                     atomic_dec(&sched->hw_rq_count);
>               }
>       }
> -     atomic_set(&sched->hw_rq_count, 0);
> +     spin_unlock(&sched->job_list_lock);
> +}
> +
> +void amd_sched_job_kickout(struct amd_sched_job *s_job)
> +{
> +     struct amd_gpu_scheduler *sched = s_job->sched;
> +
> +     spin_lock(&sched->job_list_lock);
> +     list_del_init(&s_job->node);
>       spin_unlock(&sched->job_list_lock);
>  }
>
> diff --git a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
> index 8cb41d3..59694f3 100644
> --- a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
> +++ b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
> @@ -81,6 +81,7 @@ struct amd_sched_job {
>       struct list_head                node;
>       struct delayed_work             work_tdr;
>       uint64_t                        id;
> +     atomic_t karma;
>  };
>
>  extern const struct fence_ops amd_sched_fence_ops_scheduled;
> @@ -96,6 +97,11 @@ static inline struct amd_sched_fence *to_amd_sched_fence(struct fence *f)
>       return NULL;
>  }
>
> +static inline bool amd_sched_invalidate_job(struct amd_sched_job
> +*s_job, int threshold) {
> +     return (s_job && atomic_inc_return(&s_job->karma) > threshold); }
> +

Again, the coding style is completely off.

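For comparison, the same helper reflowed in the usual kernel style (matching
the corrected force_completion_ring above) would be:

static inline bool amd_sched_invalidate_job(struct amd_sched_job *s_job,
					    int threshold)
{
	return s_job && atomic_inc_return(&s_job->karma) > threshold;
}
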
Christian.

>  /**
>   * Define the backend operations called by the scheduler,
>   * these functions should be implemented in driver side
> @@ -158,4 +164,5 @@ int amd_sched_job_init(struct amd_sched_job *job,
>                      void *owner);
>  void amd_sched_hw_job_reset(struct amd_gpu_scheduler *sched);
>  void amd_sched_job_recovery(struct amd_gpu_scheduler *sched);
> +void amd_sched_job_kickout(struct amd_sched_job *s_job);
>  #endif
> --
> 2.7.4
>
> _______________________________________________
> amd-gfx mailing list
> amd-gfx@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx