<div dir="ltr">Hi Michel,<div><br></div><div>Great! Would you mind just adding a note mentioning that the infinite loop actually happens in ttm_bo_mem_force_space? Thanks!</div><div><br></div><div>Cheers</div><div>Julien</div></div><div class="gmail_extra"><br><div class="gmail_quote">On 27 March 2017 at 07:36, Christian König <span dir="ltr"><<a href="mailto:deathsimple@vodafone.de" target="_blank">deathsimple@vodafone.de</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On 27.03.2017 at 02:58, Michel Dänzer wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
From: Michel Dänzer <<a href="mailto:michel.daenzer@amd.com" target="_blank">michel.daenzer@amd.com</a>><br>
<br>
We were accidentally only overriding the first VRAM placement. For BOs<br>
with the RADEON_GEM_NO_CPU_ACCESS flag set,<br>
radeon_ttm_placement_from_domain creates a second VRAM placement with<br>
fpfn == 0. If VRAM is almost full, the first VRAM placement with<br>
fpfn > 0 may not work, but the second one with fpfn == 0 always will<br>
(the BO's current location trivially satisfies it). Because "moving"<br>
the BO to its current location puts it back on the LRU list, this<br>
results in an infinite loop.<br>
<br>
Fixes: 2a85aedd117c ("drm/radeon: Try evicting from CPU accessible to<br>
inaccessible VRAM first")<br>
Reported-by: Zachary Michaels <<a href="mailto:zmichaels@oblong.com" target="_blank">zmichaels@oblong.com</a>><br>
Reported-and-Tested-by: Julien Isorce <<a href="mailto:jisorce@oblong.com" target="_blank">jisorce@oblong.com</a>><br>
Signed-off-by: Michel Dänzer <<a href="mailto:michel.daenzer@amd.com" target="_blank">michel.daenzer@amd.com</a>><br>
</blockquote>
<br></span>
Reviewed-by: Christian König <<a href="mailto:christian.koenig@amd.com" target="_blank">christian.koenig@amd.com</a>><span class="im HOEnZb"><br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
---<br>
drivers/gpu/drm/radeon/radeon_ttm.c | 4 ++--<br>
1 file changed, 2 insertions(+), 2 deletions(-)<br>
<br>
diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c<br>
index 5c7cf644ba1d..37d68cd1f272 100644<br>
--- a/drivers/gpu/drm/radeon/radeon_ttm.c<br>
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c<br>
@@ -213,8 +213,8 @@ static void radeon_evict_flags(struct ttm_buffer_object *bo,<br>
rbo->placement.num_busy_placement = 0;<br>
for (i = 0; i < rbo->placement.num_placement; i++) {<br>
if (rbo->placements[i].flags & TTM_PL_FLAG_VRAM) {<br>
- if (rbo->placements[0].fpfn < fpfn)<br>
- rbo->placements[0].fpfn = fpfn;<br>
+ if (rbo->placements[i].fpfn < fpfn)<br>
+ rbo->placements[i].fpfn = fpfn;<br>
} else {<br>
rbo->placement.busy_placement =<br>
&rbo->placements[i];<br>
</blockquote>
<br>
<br></span><div class="HOEnZb"><div class="h5">
_______________________________________________<br>
amd-gfx mailing list<br>
<a href="mailto:amd-gfx@lists.freedesktop.org" target="_blank">amd-gfx@lists.freedesktop.org</a><br>
<a href="https://lists.freedesktop.org/mailman/listinfo/amd-gfx" rel="noreferrer" target="_blank">https://lists.freedesktop.org/mailman/listinfo/amd-gfx</a><br>
</div></div></blockquote></div><br></div>