On 06.11.23 at 08:56, Tatsuyuki Ishi wrote:
<blockquote type="cite" cite="mid:7BE47209-2D0C-4E21-8CFB-418D5FA4759C@gmail.com">
<br>
<div>
<blockquote type="cite">
<div>On Oct 31, 2023, at 23:07, Christian König
<a class="moz-txt-link-rfc2396E" href="mailto:christian.koenig@amd.com"><christian.koenig@amd.com></a> wrote:</div>
<br class="Apple-interchange-newline">
<div>

On 31.10.23 at 14:59, Bas Nieuwenhuizen wrote:
<blockquote type="cite" cite="mid:CAP+8YyFEKDGPCvA-puUBHNXcrEX4rXOJz=WkBpJyJrmqZ=rtMA@mail.gmail.com">
<div dir="ltr">
<div dir="ltr"><br>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr" class="gmail_attr">On Tue, Oct 31,
2023 at 2:57 PM Christian König <<a href="mailto:christian.koenig@amd.com" moz-do-not-send="true" class="moz-txt-link-freetext">christian.koenig@amd.com</a>>
wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Am
31.10.23 um 14:40 schrieb Tatsuyuki Ishi:<br>

> The current amdgpu_gem_va_update_vm only tries to perform updates for the
> BO specified in the GEM ioctl; however, when a binding is split, the
> adjacent bindings also need to be updated. Such updates currently end up
> getting deferred until the next submission, which causes stalls.

Yeah, that is a necessity. The hardware simply doesn't support what you try
to do here in all cases.

What can the hardware not do here? Is this just a matter of needing to wait
for TLB flushes before we can free page tables, and if so, can we just delay
that?

On some hardware generations (especially Navi1x, but also everything older
than Polaris) you can't invalidate the TLB while it is in use.

For Polaris and older it just means that you don't have a guarantee that the
shader can't access the memory any more, so delaying the free operation helps
here.

But for Navi1x it's a workaround for a hardware bug: if you try to invalidate
the TLB while it is in use, you can potentially trigger memory accesses to
random addresses.

That's why we still delay TLB invalidations to the next CS and use a new VMID
for each submission instead of invalidating the old one.
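
To sketch the idea (a minimal illustration with made-up names, not the
actual amdgpu_vmid.c code): a submission never invalidates a TLB that might
still be in use; it grabs an idle VMID instead, and a pending invalidation
is only carried out once nothing references the VMID anymore.

#include <stdbool.h>
#include <stddef.h>

typedef unsigned long long u64;

struct vmid {
	bool in_use;      /* still referenced by a job on the hardware */
	u64  flushed_seq; /* page-table update generation last flushed */
};

/* hypothetical helper: invalidates the TLB behind this VMID */
static void flush_tlb(struct vmid *id, u64 seq)
{
	/* hardware-specific invalidation would go here */
	id->flushed_seq = seq;
}

/* pick a VMID for a new submission without ever flushing a live TLB */
static struct vmid *grab_vmid(struct vmid *ids, size_t num, u64 pt_seq)
{
	size_t i;

	for (i = 0; i < num; i++) {
		struct vmid *id = &ids[i];

		if (id->in_use)
			continue; /* TLB may be live, don't touch it */

		if (id->flushed_seq != pt_seq)
			flush_tlb(id, pt_seq); /* safe: VMID is idle */

		id->in_use = true;
		return id;
	}
	return NULL; /* all VMIDs busy: caller waits for one to idle */
}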

Thanks for the information. I looked into the VMID allocation logic and I see
that if concurrent_flush is false, then we defer any flush (or VMID reuse that
requires a flush) until that VMID is idle.

What patch #3 ends up doing is just performing the PT update right away. Any
required TLB update is deferred by amdgpu_vm_update_range through the
increment of tlb_seq, and amdgpu_vmid.c is responsible for doing the actual
TLB flush in a manner that does not trigger the bug.
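
In other words, the contract between the two sides is roughly the following
(a simplified sketch with hypothetical names, not the exact driver code):

#include <stdbool.h>

typedef unsigned long long u64;

struct vm {
	u64 tlb_seq;      /* bumped on every page-table update */
};

struct vmid {
	u64  flushed_seq; /* vm->tlb_seq value at the last flush */
	bool idle;        /* no job is using this VMID right now */
};

/* analogue of amdgpu_vm_update_range: the PT update happens right
 * away, only the TLB flush is deferred by bumping the sequence */
static void vm_update_range(struct vm *vm)
{
	/* ... write the new page-table entries here ... */
	vm->tlb_seq++; /* the real code increments this atomically */
}

/* analogue of the submission-time check in amdgpu_vmid.c */
static bool vmid_needs_flush(const struct vm *vm, const struct vmid *id)
{
	return id->flushed_seq != vm->tlb_seq;
}

/* on hardware where flushing an in-use TLB is unsafe
 * (concurrent_flush == false), a VMID with a pending flush may only
 * be reused once it has gone idle */
static bool can_reuse_vmid(const struct vm *vm, const struct vmid *id,
			   bool concurrent_flush)
{
	if (!vmid_needs_flush(vm, id))
		return true;
	return concurrent_flush || id->idle;
}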

Can you confirm that this would be fine for the current hardware?

Yeah, that should work. I'm just thinking about the UAPI a bit; we should
probably improve this to use something like a drm_syncobj instead of just a
flag, to be future-proof.
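
Something along these lines, purely as an illustration (hypothetical field
names, nothing committed):

#include <linux/types.h>

/* illustrative only: a variant of the VA ioctl args where userspace
 * gets a syncobj to wait on instead of a synchronous-update flag */
struct drm_amdgpu_gem_va_v2 {
	__u32 handle;       /* GEM object handle */
	__u32 operation;    /* AMDGPU_VA_OP_MAP, _UNMAP, ... */
	__u64 flags;
	__u64 va_address;   /* start of the mapped range */
	__u64 offset_in_bo;
	__u64 map_size;
	__u32 out_syncobj;  /* signaled once the VM update has landed */
	__u32 pad;
};

Userspace could then pass that syncobj as a wait dependency of its next
submission instead of blocking inside the ioctl.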

Christian.
<blockquote type="cite" cite="mid:7BE47209-2D0C-4E21-8CFB-418D5FA4759C@gmail.com">
<div>
<div><br>
</div>
<div>Tatsuyuki.</div>
<br>
<blockquote type="cite">
<div>
<div> <br>
I'm currently working on changing that for Navi2x and
newer (maybe Vega as well), but this is something you can
really only do on some hw generations after validating
that it works.<br>
<br>
Regards,<br>
Christian. <br>
<br>
<blockquote type="cite" cite="mid:CAP+8YyFEKDGPCvA-puUBHNXcrEX4rXOJz=WkBpJyJrmqZ=rtMA@mail.gmail.com">
<div dir="ltr">
<div class="gmail_quote">
<div> <br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
So this approach won't work in general.<br>
<br>
Regards,<br>
Christian.<br>
<br>

>
> Introduce a new state "dirty", shared between per-VM BOs and traditional
> BOs, containing all BOs that have pending updates in `invalids`.
> amdgpu_gem_va_update_vm will now simply flush any pending updates for BOs
> in the dirty state.
>
> Signed-off-by: Tatsuyuki Ishi <ishitatsuyuki@gmail.com>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 18 ++++---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c  | 66 ++++++++++++++++++-------
>  drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h  |  3 ++
>  3 files changed, 63 insertions(+), 24 deletions(-)
>
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> index a1b15d0d6c48..01d3a97248b0 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> @@ -604,10 +604,9 @@ int amdgpu_gem_metadata_ioctl(struct drm_device *dev, void *data,
>   * vital here, so they are not reported back to userspace.
>   */
>  static void amdgpu_gem_va_update_vm(struct amdgpu_device *adev,
> -				    struct amdgpu_vm *vm,
> -				    struct amdgpu_bo_va *bo_va,
> -				    uint32_t operation)
> +				    struct amdgpu_vm *vm)
>  {
> +	struct amdgpu_bo_va *bo_va;
>  	int r;
>
>  	if (!amdgpu_vm_ready(vm))
> @@ -617,12 +616,18 @@ static void amdgpu_gem_va_update_vm(struct amdgpu_device *adev,
>  	if (r)
>  		goto error;
>
> -	if (operation == AMDGPU_VA_OP_MAP ||
> -	    operation == AMDGPU_VA_OP_REPLACE) {
> +	spin_lock(&vm->status_lock);
> +	while (!list_empty(&vm->dirty)) {
> +		bo_va = list_first_entry(&vm->dirty, struct amdgpu_bo_va,
> +					 base.vm_status);
> +		spin_unlock(&vm->status_lock);
> +
>  		r = amdgpu_vm_bo_update(adev, bo_va, false);
>  		if (r)
>  			goto error;
> +		spin_lock(&vm->status_lock);
>  	}
> +	spin_unlock(&vm->status_lock);
>
>  	r = amdgpu_vm_update_pdes(adev, vm, false);
>
> @@ -792,8 +797,7 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void *data,
>  		break;
>  	}
>  	if (!r && !(args->flags & AMDGPU_VM_DELAY_UPDATE) && !amdgpu_vm_debug)
> -		amdgpu_gem_va_update_vm(adev, &fpriv->vm, bo_va,
> -					args->operation);
> +		amdgpu_gem_va_update_vm(adev, &fpriv->vm);
>
>  error:
>  	drm_exec_fini(&exec);
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> index dd6f72e2a1d6..01d31891cd05 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
> @@ -191,6 +191,21 @@ static void amdgpu_vm_bo_set_evicted(struct amdgpu_vm_bo_base *vm_bo, bool evict
>  	spin_unlock(&vm_bo->vm->status_lock);
>  }
>
> +/**
> + * amdgpu_vm_bo_dirty - vm_bo is dirty
> + *
> + * @vm_bo: vm_bo which is dirty
> + *
> + * State for normal and per VM BOs that are not moved, but have new entries in
> + * bo_va->invalids.
> + */
> +static void amdgpu_vm_bo_dirty(struct amdgpu_vm_bo_base *vm_bo)
> +{
> +	spin_lock(&vm_bo->vm->status_lock);
> +	list_move(&vm_bo->vm_status, &vm_bo->vm->dirty);
> +	spin_unlock(&vm_bo->vm->status_lock);
> +}
> +
>  /**
>   * amdgpu_vm_bo_moved - vm_bo is moved
>   *
> @@ -1042,6 +1057,9 @@ void amdgpu_vm_get_memory(struct amdgpu_vm *vm,
>  	list_for_each_entry_safe(bo_va, tmp, &vm->evicted, base.eviction_status)
>  		amdgpu_vm_bo_get_memory(bo_va, stats);
>
> +	list_for_each_entry_safe(bo_va, tmp, &vm->dirty, base.vm_status)
> +		amdgpu_vm_bo_get_memory(bo_va, stats);
> +
>  	list_for_each_entry_safe(bo_va, tmp, &vm->relocated, base.vm_status)
>  		amdgpu_vm_bo_get_memory(bo_va, stats);
>
> @@ -1411,6 +1429,17 @@ int amdgpu_vm_handle_moved(struct amdgpu_device *adev,
>  			dma_resv_unlock(resv);
>  		spin_lock(&vm->status_lock);
>  	}
> +
> +	while (!list_empty(&vm->dirty)) {
> +		bo_va = list_first_entry(&vm->dirty, struct amdgpu_bo_va,
> +					 base.vm_status);
> +		spin_unlock(&vm->status_lock);
> +
> +		r = amdgpu_vm_bo_update(adev, bo_va, false);
> +		if (r)
> +			return r;
> +		spin_lock(&vm->status_lock);
> +	}
>  	spin_unlock(&vm->status_lock);
>
>  	return 0;
> @@ -1476,19 +1505,16 @@ static void amdgpu_vm_bo_insert_map(struct amdgpu_device *adev,
>  				    struct amdgpu_bo_va_mapping *mapping)
>  {
>  	struct amdgpu_vm *vm = bo_va->base.vm;
> -	struct amdgpu_bo *bo = bo_va->base.bo;
>
>  	mapping->bo_va = bo_va;
>  	list_add(&mapping->list, &bo_va->invalids);
>  	amdgpu_vm_it_insert(mapping, &vm->va);
> +	if (!bo_va->base.moved)
> +		amdgpu_vm_bo_dirty(&bo_va->base);
>
>  	if (mapping->flags & AMDGPU_PTE_PRT)
>  		amdgpu_vm_prt_get(adev);
>
> -	if (bo && bo->tbo.base.resv == vm->root.bo->tbo.base.resv &&
> -	    !bo_va->base.moved) {
> -		amdgpu_vm_bo_moved(&bo_va->base);
> -	}
>  	trace_amdgpu_vm_bo_map(bo_va, mapping);
>  }
>
> @@ -1725,6 +1751,8 @@ int amdgpu_vm_bo_clear_mappings(struct amdgpu_device *adev,
>  			before->flags = tmp->flags;
>  			before->bo_va = tmp->bo_va;
>  			list_add(&before->list, &tmp->bo_va->invalids);
> +			if (!tmp->bo_va->base.moved)
> +				amdgpu_vm_bo_dirty(&tmp->bo_va->base);
>  		}
>
>  		/* Remember mapping split at the end */
> @@ -1736,6 +1764,8 @@ int amdgpu_vm_bo_clear_mappings(struct amdgpu_device *adev,
>  			after->flags = tmp->flags;
>  			after->bo_va = tmp->bo_va;
>  			list_add(&after->list, &tmp->bo_va->invalids);
> +			if (!tmp->bo_va->base.moved)
> +				amdgpu_vm_bo_dirty(&tmp->bo_va->base);
>  		}
>
>  		list_del(&tmp->list);
> @@ -1761,30 +1791,18 @@ int amdgpu_vm_bo_clear_mappings(struct amdgpu_device *adev,
>
>  	/* Insert partial mapping before the range */
>  	if (!list_empty(&before->list)) {
> -		struct amdgpu_bo *bo = before->bo_va->base.bo;
> -
>  		amdgpu_vm_it_insert(before, &vm->va);
>  		if (before->flags & AMDGPU_PTE_PRT)
>  			amdgpu_vm_prt_get(adev);
> -
> -		if (bo && bo->tbo.base.resv == vm->root.bo->tbo.base.resv &&
> -		    !before->bo_va->base.moved)
> -			amdgpu_vm_bo_moved(&before->bo_va->base);
>  	} else {
>  		kfree(before);
>  	}
>
>  	/* Insert partial mapping after the range */
>  	if (!list_empty(&after->list)) {
> -		struct amdgpu_bo *bo = after->bo_va->base.bo;
> -
>  		amdgpu_vm_it_insert(after, &vm->va);
>  		if (after->flags & AMDGPU_PTE_PRT)
>  			amdgpu_vm_prt_get(adev);
> -
> -		if (bo && bo->tbo.base.resv == vm->root.bo->tbo.base.resv &&
> -		    !after->bo_va->base.moved)
> -			amdgpu_vm_bo_moved(&after->bo_va->base);
>  	} else {
>  		kfree(after);
>  	}
> @@ -2136,6 +2154,7 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm, int32_t xcp
>  	INIT_LIST_HEAD(&vm->evicted);
>  	INIT_LIST_HEAD(&vm->relocated);
>  	INIT_LIST_HEAD(&vm->moved);
> +	INIT_LIST_HEAD(&vm->dirty);
>  	INIT_LIST_HEAD(&vm->idle);
>  	INIT_LIST_HEAD(&vm->invalidated);
>  	spin_lock_init(&vm->status_lock);
> @@ -2648,11 +2667,13 @@ void amdgpu_debugfs_vm_bo_info(struct amdgpu_vm *vm, struct seq_file *m)
>  {
>  	struct amdgpu_bo_va *bo_va, *tmp;
>  	u64 total_idle = 0;
> +	u64 total_dirty = 0;
>  	u64 total_relocated = 0;
>  	u64 total_moved = 0;
>  	u64 total_invalidated = 0;
>  	u64 total_done = 0;
>  	unsigned int total_idle_objs = 0;
> +	unsigned int total_dirty_objs = 0;
>  	unsigned int total_relocated_objs = 0;
>  	unsigned int total_moved_objs = 0;
>  	unsigned int total_invalidated_objs = 0;
> @@ -2669,6 +2690,15 @@ void amdgpu_debugfs_vm_bo_info(struct amdgpu_vm *vm, struct seq_file *m)
>  	total_idle_objs = id;
>  	id = 0;
>
> +	seq_puts(m, "\tDirty BOs:\n");
> +	list_for_each_entry_safe(bo_va, tmp, &vm->dirty, base.vm_status) {
> +		if (!bo_va->base.bo)
> +			continue;
> +		total_dirty += amdgpu_bo_print_info(id++, bo_va->base.bo, m);
> +	}
> +	total_dirty_objs = id;
> +	id = 0;
> +
>  	seq_puts(m, "\tRelocated BOs:\n");
>  	list_for_each_entry_safe(bo_va, tmp, &vm->relocated, base.vm_status) {
>  		if (!bo_va->base.bo)
> @@ -2707,6 +2737,8 @@ void amdgpu_debugfs_vm_bo_info(struct amdgpu_vm *vm, struct seq_file *m)
>
>  	seq_printf(m, "\tTotal idle size:        %12lld\tobjs:\t%d\n", total_idle,
>  		   total_idle_objs);
> +	seq_printf(m, "\tTotal dirty size:       %12lld\tobjs:\t%d\n", total_dirty,
> +		   total_dirty_objs);
>  	seq_printf(m, "\tTotal relocated size:   %12lld\tobjs:\t%d\n", total_relocated,
>  		   total_relocated_objs);
>  	seq_printf(m, "\tTotal moved size:       %12lld\tobjs:\t%d\n", total_moved,
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> index d9ab97eabda9..f91d4fcf80b8 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
> @@ -276,6 +276,9 @@ struct amdgpu_vm {
>  	/* per VM BOs moved, but not yet updated in the PT */
>  	struct list_head moved;
>
> +	/* normal and per VM BOs that are not moved, but have new PT entries */
> +	struct list_head dirty;
> +
>  	/* All BOs of this VM not currently in the state machine */
>  	struct list_head idle;
>