<html>
  <head>
    <meta content="text/html; charset=utf-8" http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <div class="moz-cite-prefix">Ah, in this case please separate the
      amdgpu_vm_bo_rmv() from setting csa_addr to NULL.<br>
      <br>
      Because amdgpu_vm_bo_rmv() should come before amdgpu_vm_fini(), and
      that in turn should come before waiting for the scheduler, so
      that the MM knows that the memory is about to be freed.<br>
      <br>
      Regards,<br>
      Christian.<br>
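      <br>
      A minimal sketch of that ordering, assuming the csa_addr clearing
      from the SR-IOV branch sits in amdgpu_driver_postclose_kms(); the
      locals below and the csa_addr handling are assumptions taken from
      the quoted patch and that branch, not a definitive implementation:<br>
      <pre>
/* Sketch only: keep amdgpu_vm_bo_rmv() before amdgpu_vm_fini(), and keep
 * the "clear csa_addr" step separate from (and after) the scheduler wait. */
void amdgpu_driver_postclose_kms(struct drm_device *dev,
                                 struct drm_file *file_priv)
{
        struct amdgpu_device *adev = dev->dev_private;
        struct amdgpu_fpriv *fpriv = file_priv->driver_priv;

        if (amdgpu_sriov_vf(adev)) {
                /* unmap the CSA BO first, so the MM knows the memory is
                 * about to be freed */
                BUG_ON(amdgpu_bo_reserve(adev->virt.csa_obj, false));
                amdgpu_vm_bo_rmv(adev, fpriv->vm.csa_bo_va);
                fpriv->vm.csa_bo_va = NULL;
                amdgpu_bo_unreserve(adev->virt.csa_obj);
                /* do NOT clear the csa_addr that queued jobs still read;
                 * that belongs after the scheduler has been drained */
        }

        /* amdgpu_vm_fini() tears down the scheduler entity (waiting for
         * pending jobs) before the page tables are freed */
        amdgpu_vm_fini(adev, &fpriv->vm);
}
      </pre>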
      <br>
      On 13.01.2017 at 10:56, Liu, Monk wrote:<br>
    </div>
    <blockquote
cite="mid:BY2PR1201MB11102468908DDFD083C45B5384780@BY2PR1201MB1110.namprd12.prod.outlook.com"
      type="cite">
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
      <meta name="Generator" content="Microsoft Exchange Server">
      <!-- converted from text -->
      <style><!-- .EmailQuote { margin-left: 1pt; padding-left: 4pt; border-left: #800000 2px solid; } --></style>
      <meta content="text/html; charset=UTF-8">
      <style type="text/css" style="">
<!--
p
        {margin-top:0;
        margin-bottom:0}
-->
</style>
      <div dir="ltr">
        <div id="x_divtagdefaultwrapper" dir="ltr"
          style="font-size:12pt; color:#000000;
          font-family:Calibri,Arial,Helvetica,sans-serif">
          <p>With only amdgpu_vm_bo_rmv() we wouldn't hit such a bug, but
            in another branch for SR-IOV we not only call vm_bo_rmv(), we
            also set csa_addr to NULL after it, so the NULL address is
            inserted into the RB, and when preemption occurred the CP
            backed up its snapshot to the NULL address.</p>
          <p><br>
          </p>
          <p>Although in staging-4.9 we didn't set csa_addr to NULL
            (because, as you suggested, we always use a hard-coded
            macro for the CSA address), logically we'd better put the
            CSA unmapping behind "sched_entity_fini", which is more
            reasonable ...</p>
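          <p>Roughly, the failure mode looks like the sketch below (a
            sketch only: the csa_addr field is the one from the SR-IOV
            branch mentioned above, and ring_emit_csa() is a hypothetical
            stand-in for wherever the CSA address is written into the
            RB):</p>
          <pre>
/* Old ordering on process close (SR-IOV branch), as described above: */
amdgpu_vm_bo_rmv(adev, vm->csa_bo_va);
vm->csa_addr = 0;                   /* field assumed from the SR-IOV branch */

/* The gpu_scheduler may still be pushing queued jobs at this point, so
 * the next job emits a NULL CSA address into the RB ... */
ring_emit_csa(ring, vm->csa_addr);  /* hypothetical helper: now emits 0 */
/* ... and on preemption the CP writes its snapshot to address 0, which
 * triggers the VM fault. */

/* Ordering proposed in the patch quoted below: drain the entity first,
 * then unmap, so no queued job can pick up a NULL CSA address. */
amd_sched_entity_fini(vm->entity.sched, &vm->entity);
amdgpu_vm_bo_rmv(adev, vm->csa_bo_va);
vm->csa_bo_va = NULL;
          </pre>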
          <p><br>
          </p>
          <p>BR Monk<br>
          </p>
        </div>
        <hr tabindex="-1" style="display:inline-block; width:98%">
        <div id="x_divRplyFwdMsg" dir="ltr"><font style="font-size:11pt"
            color="#000000" face="Calibri, sans-serif"><b>发件人:</b>
            amd-gfx <a class="moz-txt-link-rfc2396E" href="mailto:amd-gfx-bounces@lists.freedesktop.org"><amd-gfx-bounces@lists.freedesktop.org></a> 代表
            Christian König <a class="moz-txt-link-rfc2396E" href="mailto:deathsimple@vodafone.de"><deathsimple@vodafone.de></a><br>
            <b>发送时间:</b> 2017年1月13日 17:25:09<br>
            <b>收件人:</b> Liu, Monk; <a class="moz-txt-link-abbreviated" href="mailto:amd-gfx@lists.freedesktop.org">amd-gfx@lists.freedesktop.org</a><br>
            <b>主题:</b> Re: [PATCH] drm/amdgpu:put CSA unmap after
            sched_entity_fini</font>
          <div> </div>
        </div>
      </div>
      <font size="2"><span style="font-size:10pt;">
          <div class="PlainText">Am 13.01.2017 um 05:11 schrieb Monk
            Liu:<br>
            > otherwise the CSA may be unmapped before the gpu_scheduler has<br>
            > finished scheduling jobs, triggering a VM fault on the CSA address<br>
            ><br>
            > Change-Id: Ib2e25ededf89bca44c764477dd2f9127024ca78c<br>
            > Signed-off-by: Monk Liu <a class="moz-txt-link-rfc2396E" href="mailto:Monk.Liu@amd.com"><Monk.Liu@amd.com></a><br>
            <br>
            Did you really run into an issue because of that?<br>
            <br>
            Calling amdgpu_vm_bo_rmv() shouldn't affect the page tables
            nor already <br>
            submitted command submissions in any way.<br>
            <br>
            Regards,<br>
            Christian.<br>
            <br>
            > ---<br>
            >   drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c | 8 --------<br>
            >   drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c  | 8 ++++++++<br>
            >   2 files changed, 8 insertions(+), 8 deletions(-)<br>
            ><br>
            > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c<br>
            > index 45484c0..e13cdde 100644<br>
            > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c<br>
            > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c<br>
            > @@ -694,14 +694,6 @@ void amdgpu_driver_postclose_kms(struct drm_device *dev,<br>
            >        amdgpu_uvd_free_handles(adev, file_priv);<br>
            >        amdgpu_vce_free_handles(adev, file_priv);<br>
            >   <br>
            > -     if (amdgpu_sriov_vf(adev)) {<br>
            > -             /* TODO: how to handle reserve failure */<br>
            > -             BUG_ON(amdgpu_bo_reserve(adev->virt.csa_obj, false));<br>
            > -             amdgpu_vm_bo_rmv(adev, fpriv->vm.csa_bo_va);<br>
            > -             fpriv->vm.csa_bo_va = NULL;<br>
            > -             amdgpu_bo_unreserve(adev->virt.csa_obj);<br>
            > -     }<br>
            > -<br>
            >        amdgpu_vm_fini(adev, &fpriv->vm);<br>
            >   <br>
            >        idr_for_each_entry(&fpriv->bo_list_handles, list, handle)<br>
            > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c<br>
            > index d05546e..94098bc 100644<br>
            > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c<br>
            > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c<br>
            > @@ -1608,6 +1608,14 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)<br>
            >   <br>
            >        amd_sched_entity_fini(vm->entity.sched, &vm->entity);<br>
            >   <br>
            > +     if (amdgpu_sriov_vf(adev)) {<br>
            > +             /* TODO: how to handle reserve failure */<br>
            > +             BUG_ON(amdgpu_bo_reserve(adev->virt.csa_obj, false));<br>
            > +             amdgpu_vm_bo_rmv(adev, vm->csa_bo_va);<br>
            > +             vm->csa_bo_va = NULL;<br>
            > +             amdgpu_bo_unreserve(adev->virt.csa_obj);<br>
            > +     }<br>
            > +<br>
            >        if (!RB_EMPTY_ROOT(&vm->va)) {<br>
            >                dev_err(adev->dev, "still active bo inside vm\n");<br>
            >        }<br>
            <br>
            <br>
            _______________________________________________<br>
            amd-gfx mailing list<br>
            <a class="moz-txt-link-abbreviated" href="mailto:amd-gfx@lists.freedesktop.org">amd-gfx@lists.freedesktop.org</a><br>
            <a moz-do-not-send="true"
              href="https://lists.freedesktop.org/mailman/listinfo/amd-gfx">https://lists.freedesktop.org/mailman/listinfo/amd-gfx</a><br>
          </div>
        </span></font>
      <br>
      <fieldset class="mimeAttachmentHeader"></fieldset>
      <br>
      <pre wrap="">_______________________________________________
amd-gfx mailing list
<a class="moz-txt-link-abbreviated" href="mailto:amd-gfx@lists.freedesktop.org">amd-gfx@lists.freedesktop.org</a>
<a class="moz-txt-link-freetext" href="https://lists.freedesktop.org/mailman/listinfo/amd-gfx">https://lists.freedesktop.org/mailman/listinfo/amd-gfx</a>
</pre>
    </blockquote>
    <p><br>
    </p>
  </body>
</html>