<html>
  <head>
    <meta content="text/html; charset=utf-8" http-equiv="Content-Type">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <div class="moz-cite-prefix">
      <blockquote type="cite"><font size="2"><span
            style="font-size:10pt;">[ML] Really not necessary; we have a
            spin_lock to protect the mirror list, so nothing will be
            messed up ...<br>
          </span></font></blockquote>
      You are missing that it is entirely possible that the job will
      complete while we are trying to kick it out.<br>
      <br>
      <blockquote type="cite"><font size="2"><span
            style="font-size:10pt;">[ML] Why shouldn't we touch the
            hardware fence at all? The original bare-metal GPU reset
            also signals all rings' hardware fences first; I just
            followed the original logic ...<br>
            The scheduler fence is signaled automatically after the hw
            fence is signaled; any problem with that? What is the concern?</span></font></blockquote>
      The hardware runs asynchronously to the CPU that tries to reset
      it, so we need to be careful about the order in which things are
      done.<br>
      <br>
      <blockquote type="cite"><font size="2"><span
            style="font-size:10pt;">
            [ML] No, I don't think so. The kickout must happen before
            hw_job_reset; otherwise the scheduler fence callback will be
            removed and you would need to reinstall it manually later,
            which is not correct:<br>
            For the guilty job, we just kick it out before the job
            reset. In job_reset we only reset the other, innocent jobs
            (and unbind the scheduler fence callback for them). After
            the hw fence<br>
            is forcibly set to drv_seq, all hw fences are signaled (this
            is the original logic; I didn't change that). When we go to
            sched_recovery(), it will recover all innocent jobs and
            hook<br>
            the scheduler fence up to a new hw fence. That way only the
            guilty job is dropped forever.<br>
          </span></font></blockquote>
      Again, the same problem applies here.<br>
      <br>
      To be absolutely sure that everything works as expected, we need
      to do it in the following order:<br>
      <br>
      1. Stop the scheduler from feeding more jobs to the hardware when
      a job completes.<br>
      <br>
      2. Then call hw_job_reset to remove the connection between job and
      hardware fence.<br>
      <br>
      3. Test if the job is now completed. It is entirely possible that
      the job completed while we were starting the work to reset the
      hardware.<br>
      <br>
      Removing the connection between the hardware fence and the job is
      the deadline; if the job completes after that, it is lost.<br>
      <br>
      4. Check the karma and kick out the job if we found it guilty.<br>
      <br>
      5. Get everything working again, e.g. reset the hardware, restart
      the scheduler, etc.<br>
      <br>
      Regards,<br>
      Christian.<br>
      <br>
      On 09.05.2017 at 04:45, Liu, Monk wrote:<br>
    </div>
    <blockquote
cite="mid:DM5PR12MB1610CE19C4173C5764CACFA984EF0@DM5PR12MB1610.namprd12.prod.outlook.com"
      type="cite">
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
      <meta name="Generator" content="Microsoft Exchange Server">
      <!-- converted from text -->
      <style><!-- .EmailQuote { margin-left: 1pt; padding-left: 4pt; border-left: #800000 2px solid; } --></style>
      <font size="2"><span style="font-size:10pt;">
          <div class="PlainText">>   <br>
            > -     /* block scheduler */<br>
            > -     for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {<br>
            > -             ring = adev->rings[i];<br>
            > +     /* we start from the ring trigger GPU hang */<br>
            > +     j = job ? job->ring->idx : 0;<br>
            > +<br>
            > +     if (job)<br>
            > +             if
            (amd_sched_invalidate_job(&job->base,
            amdgpu_job_hang_limit))<br>
            > +                    
            amd_sched_job_kickout(&job->base);<br>
            <br>
            Well, that looks like the wrong order to me. We should
            probably stop the scheduler before trying to mess around
            with the job.<br>
            <br>
            [ML] Really not necessary; we have a spin_lock to protect
            the mirror list, so nothing will be messed up ...<br>
            <br>
            <br>
            >   <br>
            > +void amdgpu_fence_driver_force_completion_ring(struct
            amdgpu_ring<br>
            > +*ring) {<br>
            > +     if (ring)<br>
            > +             amdgpu_fence_write(ring,
            ring->fence_drv.sync_seq); }<br>
            > +<br>
            <br>
            The coding style is completely off.<br>
            <br>
            [ML] I don't know why the coding style looks wrong on the
            email side, but I'm sure it is correct on my side; check here:<br>
            void amdgpu_fence_driver_force_completion_ring(struct
            amdgpu_ring *ring)<br>
            {<br>
                    if (ring)<br>
                            amdgpu_fence_write(ring,
            ring->fence_drv.sync_seq);<br>
            }<br>
            <br>
            In addition to that, I don't think this is a good idea.
            We should probably just signal all scheduler fences
            instead and not touch the hardware fence at all.<br>
            <br>
            [ML] Why shouldn't we touch the hardware fence at all? The
            original bare-metal GPU reset also signals all rings'
            hardware fences first; I just followed the original logic ...<br>
            The scheduler fence is signaled automatically after the hw
            fence is signaled; any problem with that? What is the concern?<br>
            <br>
            <br>
            > -     for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {<br>
            > -             struct amdgpu_ring *ring =
            adev->rings[i];<br>
            > +     for (i = j; i < j + AMDGPU_MAX_RINGS; ++i) {<br>
            > +             ring = adev->rings[i %
            AMDGPU_MAX_RINGS];<br>
            >                if (!ring || !ring->sched.thread)<br>
            >                        continue;<br>
            >   <br>
            > +             if (job && j != i) {<br>
            > +                    
            kthread_unpark(ring->sched.thread);<br>
            > +                     continue;<br>
            > +             }<br>
            > +<br>
            <br>
            Please split up that patch a bit further. E.g. first the
            handling to only hw_job_reset the ring in question, then the
            kickout handling.<br>
            <br>
            [ML] No, I don't think so. The kickout must happen before
            hw_job_reset; otherwise the scheduler fence callback will be
            removed and you would need to reinstall it manually later,
            which is not correct:<br>
            For the guilty job, we just kick it out before the job
            reset. In job_reset we only reset the other, innocent jobs
            (and unbind the scheduler fence callback for them). After
            the hw fence<br>
            is forcibly set to drv_seq, all hw fences are signaled (this
            is the original logic; I didn't change that). When we go to
            sched_recovery(), it will recover all innocent jobs and
            hook<br>
            the scheduler fence up to a new hw fence. That way only the
            guilty job is dropped forever.<br>
            <br>
            -----Original Message-----<br>
            From: Christian König [<a moz-do-not-send="true"
              href="mailto:deathsimple@vodafone.de">mailto:deathsimple@vodafone.de</a>]
            <br>
            Sent: Monday, May 08, 2017 9:12 PM<br>
            To: Liu, Monk <a class="moz-txt-link-rfc2396E" href="mailto:Monk.Liu@amd.com"><Monk.Liu@amd.com></a>;
            <a class="moz-txt-link-abbreviated" href="mailto:amd-gfx@lists.freedesktop.org">amd-gfx@lists.freedesktop.org</a><br>
            Cc: Koenig, Christian <a class="moz-txt-link-rfc2396E" href="mailto:Christian.Koenig@amd.com"><Christian.Koenig@amd.com></a><br>
            Subject: Re: [PATCH 4/4] drm/amdgpu/SRIOV:implement guilty
            job TDR (V2)<br>
            <br>
            On 08.05.2017 at 09:01, Liu, Monk wrote:<br>
            > @Christian<br>
            ><br>
            > This one is changed to guilty job scheme accordingly
            with your <br>
            > response<br>
            ><br>
            > BR Monk<br>
            ><br>
            > -----Original Message-----<br>
            > From: Monk Liu [<a moz-do-not-send="true"
              href="mailto:Monk.Liu@amd.com">mailto:Monk.Liu@amd.com</a>]<br>
            > Sent: Monday, May 08, 2017 3:00 PM<br>
            > To: <a class="moz-txt-link-abbreviated" href="mailto:amd-gfx@lists.freedesktop.org">amd-gfx@lists.freedesktop.org</a><br>
            > Cc: Liu, Monk <a class="moz-txt-link-rfc2396E" href="mailto:Monk.Liu@amd.com"><Monk.Liu@amd.com></a><br>
            > Subject: [PATCH] drm/amdgpu/SRIOV:implement guilty job
            TDR (V2)<br>
            ><br>
            > 1. TDR will kick out the guilty job if its hang count
            exceeds the threshold given by the kernel parameter
            "job_hang_limit"; that way a bad command stream will not
            cause GPU hangs indefinitely.<br>
            ><br>
            > by default this threshold is 1, so a job will be kicked
            out after it hangs.<br>
            ><br>
            > 2. If a job times out, the TDR routine will not reset
            every sched/ring; instead it will only reset the given one
            indicated by @job of amdgpu_sriov_gpu_reset, so we don't
            need to reset and recover each sched/ring when we already
            know which job caused the GPU hang.<br>
            ><br>
            > 3. Unblock sriov_gpu_reset for the AI family.<br>
            > 4. Don't init the entity for KIQ.<br>
            ><br>
            > TODO:<br>
            > When a job is considered guilty, we should set a flag in
            its fence status and make the UMD side aware that the fence
            signaled because of a job hang rather than job
            completion.<br>
            ><br>
            > Change-Id: I7b89c19a3de93249db570d0a80522176b1525a09<br>
            > Signed-off-by: Monk Liu <a class="moz-txt-link-rfc2396E" href="mailto:Monk.Liu@amd.com"><Monk.Liu@amd.com></a><br>
            > ---<br>
            >   drivers/gpu/drm/amd/amdgpu/amdgpu.h           |  1 +<br>
            >   drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c       |  4
            +++<br>
            >   drivers/gpu/drm/amd/amdgpu/amdgpu_device.c    | 36
            ++++++++++++++++++++-------<br>
            >   drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c       |  4
            +++<br>
            >   drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c     |  6
            +++++<br>
            >   drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h      |  1 +<br>
            >   drivers/gpu/drm/amd/scheduler/gpu_scheduler.c | 11
            +++++++-  drivers/gpu/drm/amd/scheduler/gpu_scheduler.h |  7
            ++++++<br>
            >   8 files changed, 60 insertions(+), 10 deletions(-)<br>
            ><br>
            > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h <br>
            > b/drivers/gpu/drm/amd/amdgpu/amdgpu.h<br>
            > index 90a69bf..93bcea2 100644<br>
            > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h<br>
            > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h<br>
            > @@ -111,6 +111,7 @@ extern int amdgpu_prim_buf_per_se; 
            extern int <br>
            > amdgpu_pos_buf_per_se;  extern int
            amdgpu_cntl_sb_buf_per_se;  extern <br>
            > int amdgpu_param_buf_per_se;<br>
            > +extern int amdgpu_job_hang_limit;<br>
            >   <br>
            >   #define AMDGPU_DEFAULT_GTT_SIZE_MB          3072ULL
            /* 3GB by default */<br>
            >   #define AMDGPU_WAIT_IDLE_TIMEOUT_IN_MS             
            3000<br>
            > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c <br>
            > b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c<br>
            > index b4bbbb3..23afc58 100644<br>
            > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c<br>
            > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c<br>
            > @@ -52,6 +52,10 @@ static int amdgpu_ctx_init(struct
            amdgpu_device *adev, struct amdgpu_ctx *ctx)<br>
            >                struct amd_sched_rq *rq;<br>
            >   <br>
            >                rq =
            &ring->sched.sched_rq[AMD_SCHED_PRIORITY_NORMAL];<br>
            > +<br>
            > +             if (ring == &adev->gfx.kiq.ring)<br>
            > +                     continue;<br>
            > +<br>
            <br>
            That looks like a bug fix and should probably go into a
            separate patch.<br>
            <br>
            >                r =
            amd_sched_entity_init(&ring->sched,
            &ctx->rings[i].entity,<br>
            >                                          rq,
            amdgpu_sched_jobs);<br>
            >                if (r)<br>
            > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
            <br>
            > b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c<br>
            > index 0e5f314..f3990fe 100644<br>
            > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c<br>
            > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c<br>
            > @@ -2537,7 +2537,7 @@ static int
            amdgpu_recover_vram_from_shadow(struct amdgpu_device *adev,<br>
            >    */<br>
            >   int amdgpu_sriov_gpu_reset(struct amdgpu_device
            *adev, struct amdgpu_job *job)  {<br>
            > -     int i, r = 0;<br>
            > +     int i, j, r = 0;<br>
            >        int resched;<br>
            >        struct amdgpu_bo *bo, *tmp;<br>
            >        struct amdgpu_ring *ring;<br>
            > @@ -2550,19 +2550,30 @@ int
            amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, struct
            amdgpu_job *job)<br>
            >        /* block TTM */<br>
            >        resched =
            ttm_bo_lock_delayed_workqueue(&adev->mman.bdev);<br>
            >   <br>
            > -     /* block scheduler */<br>
            > -     for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {<br>
            > -             ring = adev->rings[i];<br>
            > +     /* we start from the ring trigger GPU hang */<br>
            > +     j = job ? job->ring->idx : 0;<br>
            > +<br>
            > +     if (job)<br>
            > +             if
            (amd_sched_invalidate_job(&job->base,
            amdgpu_job_hang_limit))<br>
            > +                    
            amd_sched_job_kickout(&job->base);<br>
            <br>
            Well, that looks like the wrong order to me. We should
            probably stop the scheduler before trying to mess around
            with the job.<br>
            <br>
            >   <br>
            > +     /* block scheduler */<br>
            > +     for (i = j; i < j + AMDGPU_MAX_RINGS; ++i) {<br>
            > +             ring = adev->rings[i %
            AMDGPU_MAX_RINGS];<br>
            >                if (!ring || !ring->sched.thread)<br>
            >                        continue;<br>
            >   <br>
            >                kthread_park(ring->sched.thread);<br>
            > +<br>
            > +             if (job && j != i)<br>
            > +                     continue;<br>
            > +<br>
            > +             /* only do job_reset on the hang ring if
            @job not NULL */<br>
            >               
            amd_sched_hw_job_reset(&ring->sched);<br>
            > -     }<br>
            >   <br>
            > -     /* after all hw jobs are reset, hw fence is
            meaningless, so force_completion */<br>
            > -     amdgpu_fence_driver_force_completion(adev);<br>
            > +             /* after all hw jobs are reset, hw fence
            is meaningless, so force_completion */<br>
            > +            
            amdgpu_fence_driver_force_completion_ring(ring);<br>
            > +     }<br>
            >   <br>
            >        /* request to take full control of GPU before
            re-initialization  */<br>
            >        if (job)<br>
            > @@ -2615,11 +2626,16 @@ int
            amdgpu_sriov_gpu_reset(struct amdgpu_device *adev, struct
            amdgpu_job *job)<br>
            >        }<br>
            >        fence_put(fence);<br>
            >   <br>
            > -     for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {<br>
            > -             struct amdgpu_ring *ring =
            adev->rings[i];<br>
            > +     for (i = j; i < j + AMDGPU_MAX_RINGS; ++i) {<br>
            > +             ring = adev->rings[i %
            AMDGPU_MAX_RINGS];<br>
            >                if (!ring || !ring->sched.thread)<br>
            >                        continue;<br>
            >   <br>
            > +             if (job && j != i) {<br>
            > +                    
            kthread_unpark(ring->sched.thread);<br>
            > +                     continue;<br>
            > +             }<br>
            > +<br>
            <br>
            Please split up that patch a bit further. E.g. first the
            handling to only hw_job_reset the ring in question, then the
            kickout handling.<br>
            <br>
            >               
            amd_sched_job_recovery(&ring->sched);<br>
            >                kthread_unpark(ring->sched.thread);<br>
            >        }<br>
            > @@ -2629,6 +2645,8 @@ int amdgpu_sriov_gpu_reset(struct
            amdgpu_device *adev, struct amdgpu_job *job)<br>
            >        if (r) {<br>
            >                /* bad news, how to tell it to userspace
            ? */<br>
            >                dev_info(adev->dev, "GPU reset
            failed\n");<br>
            > +     } else {<br>
            > +             dev_info(adev->dev, "GPU reset
            successed!\n");<br>
            >        }<br>
            >   <br>
            >        adev->gfx.in_reset = false;<br>
            > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c <br>
            > b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c<br>
            > index 416908a..fd3691a8 100644<br>
            > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c<br>
            > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c<br>
            > @@ -112,6 +112,7 @@ int amdgpu_prim_buf_per_se = 0; 
            int <br>
            > amdgpu_pos_buf_per_se = 0;  int
            amdgpu_cntl_sb_buf_per_se = 0;  int <br>
            > amdgpu_param_buf_per_se = 0;<br>
            > +int amdgpu_job_hang_limit = 0;<br>
            >   <br>
            >   MODULE_PARM_DESC(vramlimit, "Restrict VRAM for
            testing, in <br>
            > megabytes");  module_param_named(vramlimit,
            amdgpu_vram_limit, int, <br>
            > 0600); @@ -237,6 +238,9 @@
            module_param_named(cntl_sb_buf_per_se, <br>
            > amdgpu_cntl_sb_buf_per_se, int, 0444);  <br>
            > MODULE_PARM_DESC(param_buf_per_se, "the size of
            Off-Chip Pramater <br>
            > Cache per Shader Engine (default depending on gfx)"); 
            <br>
            > module_param_named(param_buf_per_se,
            amdgpu_param_buf_per_se, int, <br>
            > 0444);<br>
            >   <br>
            > +MODULE_PARM_DESC(job_hang_limit, "how much time allow
            a job hang and <br>
            > +not drop it (default 0)");
            module_param_named(job_hang_limit,<br>
            > +amdgpu_job_hang_limit, int ,0444);<br>
            > +<br>
            >   <br>
            >   static const struct pci_device_id pciidlist[] = { 
            #ifdef  <br>
            > CONFIG_DRM_AMDGPU_SI diff --git <br>
            > a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c <br>
            > b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c<br>
            > index d7523d1..8de3bd3 100644<br>
            > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c<br>
            > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c<br>
            > @@ -541,6 +541,12 @@ void
            amdgpu_fence_driver_force_completion(struct amdgpu_device
            *adev)<br>
            >        }<br>
            >   }<br>
            >   <br>
            > +void amdgpu_fence_driver_force_completion_ring(struct
            amdgpu_ring<br>
            > +*ring) {<br>
            > +     if (ring)<br>
            > +             amdgpu_fence_write(ring,
            ring->fence_drv.sync_seq); }<br>
            > +<br>
            <br>
            The coding style is completely off.<br>
            <br>
            In addition to that, I don't think this is a good idea.
            We should probably just signal all scheduler fences
            instead and not touch the hardware fence at all.<br>
            <br>
            >   /*<br>
            >    * Common fence implementation<br>
            >    */<br>
            > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h <br>
            > b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h<br>
            > index 981ef08..03e88c6 100644<br>
            > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h<br>
            > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h<br>
            > @@ -76,6 +76,7 @@ struct amdgpu_fence_driver {  int <br>
            > amdgpu_fence_driver_init(struct amdgpu_device *adev); 
            void <br>
            > amdgpu_fence_driver_fini(struct amdgpu_device *adev); 
            void <br>
            > amdgpu_fence_driver_force_completion(struct
            amdgpu_device *adev);<br>
            > +void amdgpu_fence_driver_force_completion_ring(struct
            amdgpu_ring <br>
            > +*ring);<br>
            >   <br>
            >   int amdgpu_fence_driver_init_ring(struct amdgpu_ring
            *ring,<br>
            >                                  unsigned
            num_hw_submission);<br>
            > diff --git
            a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c <br>
            > b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c<br>
            > index 6f4e31f..4e97e6d 100644<br>
            > --- a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c<br>
            > +++ b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c<br>
            > @@ -390,9 +390,18 @@ void amd_sched_hw_job_reset(struct
            amd_gpu_scheduler *sched)<br>
            >                                         
            &s_job->s_fence->cb)) {<br>
            >                       
            fence_put(s_job->s_fence->parent);<br>
            >                        s_job->s_fence->parent =
            NULL;<br>
            > +                    
            atomic_dec(&sched->hw_rq_count);<br>
            >                }<br>
            >        }<br>
            > -     atomic_set(&sched->hw_rq_count, 0);<br>
            > +     spin_unlock(&sched->job_list_lock);<br>
            > +}<br>
            > +<br>
            > +void amd_sched_job_kickout(struct amd_sched_job
            *s_job) {<br>
            > +     struct amd_gpu_scheduler *sched =
            s_job->sched;<br>
            > +<br>
            > +     spin_lock(&sched->job_list_lock);<br>
            > +     list_del_init(&s_job->node);<br>
            >        spin_unlock(&sched->job_list_lock);<br>
            >   }<br>
            >   <br>
            > diff --git
            a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h <br>
            > b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h<br>
            > index 8cb41d3..59694f3 100644<br>
            > --- a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h<br>
            > +++ b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h<br>
            > @@ -81,6 +81,7 @@ struct amd_sched_job {<br>
            >        struct list_head                node;<br>
            >        struct delayed_work             work_tdr;<br>
            >        uint64_t                        id;<br>
            > +     atomic_t karma;<br>
            >   };<br>
            >   <br>
            >   extern const struct fence_ops
            amd_sched_fence_ops_scheduled; @@ -96,6 +97,11 @@ static
            inline struct amd_sched_fence *to_amd_sched_fence(struct
            fence *f)<br>
            >        return NULL;<br>
            >   }<br>
            >   <br>
            > +static inline bool amd_sched_invalidate_job(struct
            amd_sched_job <br>
            > +*s_job, int threshold) {<br>
            > +     return (s_job &&
            atomic_inc_return(&s_job->karma) > threshold); }<br>
            > +<br>
            <br>
            Again, the coding style is completely off.<br>
            <br>
            Christian.<br>
            <br>
            >   /**<br>
            >    * Define the backend operations called by the
            scheduler,<br>
            >    * these functions should be implemented in driver
            side @@ -158,4 +164,5 @@ int amd_sched_job_init(struct
            amd_sched_job *job,<br>
            >                       void *owner);<br>
            >   void amd_sched_hw_job_reset(struct amd_gpu_scheduler
            *sched);  void <br>
            > amd_sched_job_recovery(struct amd_gpu_scheduler
            *sched);<br>
            > +void amd_sched_job_kickout(struct amd_sched_job
            *s_job);<br>
            >   #endif<br>
            > --<br>
            > 2.7.4<br>
            ><br>
            > _______________________________________________<br>
            > amd-gfx mailing list<br>
            > <a class="moz-txt-link-abbreviated" href="mailto:amd-gfx@lists.freedesktop.org">amd-gfx@lists.freedesktop.org</a><br>
            > <a moz-do-not-send="true"
              href="https://lists.freedesktop.org/mailman/listinfo/amd-gfx">https://lists.freedesktop.org/mailman/listinfo/amd-gfx</a><br>
            <br>
            <br>
          </div>
        </span></font>
    </blockquote>
    <p><br>
    </p>
  </body>
</html>