<div dir="ltr"><div><div>Actually, I was wrong.<br><br></div>The buffers in that app are pretty small. The largest one is 86 MB and the others are 52 MB. I must have misread that as 520 MB.<br><br></div><div>At one point, ttm_bo_validate with a 32 MB buffer moved 971 MB.<br><br></div><div>Maybe it's just a VRAM fragmentation issue (i.e. a lack of contiguous free memory).</div><div><br></div>Marek<br><div><div><div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Aug 17, 2016 at 9:19 PM, Christian König <span dir="ltr"><<a href="mailto:deathsimple@vodafone.de" target="_blank">deathsimple@vodafone.de</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Sharing buffers between applications is handled by the DRM layer and is transparent to the driver.<br>
<br>
E.g. the driver is not even informed whether sharing is done via DMA-buf or GEM flink; it's just another reference to the BO.<br>
<br>
So there isn't any change to that at all.<br>
<br>
Regards,<br>
Christian.<div class=""><div class="h5"><br>
<br>
Am 17.08.2016 um 21:03 schrieb Felix Kuehling:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
I think the scatter-gather tables only support system memory. As I<br>
understand it, a buffer in VRAM has to be migrated to system memory before<br>
it can be shared with another driver.<br>
<br>
I'm more concerned about sharing with the same driver. There is a<br>
special code path for that, where we simply add another reference to the<br>
same BO instead of looking at a scatter-gather table. We use that for<br>
OpenGL-OpenCL interop, and we are also planning to use it for IPC buffer<br>
sharing in HSA. As long as a split VRAM buffer is still a single<br>
amdgpu_bo, and becomes a single dmabuf when exported, I think that<br>
should work.<br>
<br>
Regards,<br>
Felix<br>
<br>
<br>
On 16-08-17 02:58 AM, Christian König wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
One question: Will it be possible to share these split BOs as dmabufs?<br>
</blockquote>
In theory yes; in practice I'm not sure.<br>
<br>
DMA-bufs are designed around scatter-gather tables, which fortunately<br>
support buffers split over the whole address space.<br>
<br>
The problem is that the importing device needs to be able to handle that<br>
as well.<br>
<br>
Regards,<br>
Christian.<br>
<br>
Am 16.08.2016 um 20:33 schrieb Felix Kuehling:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Very nice. I'm looking forward to this for KFD as well.<br>
<br>
One question: Will it be possible to share these split BOs as dmabufs?<br>
<br>
Regards,<br>
Felix<br>
<br>
<br>
On 16-08-16 11:27 AM, Christian König wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Hi Marek,<br>
<br>
I'm already working on this.<br>
<br>
My current approach is to use a custom BO manager for VRAM with TTM<br>
and split allocations into chunks of 4 MB.<br>
<br>
Large BOs are still swapped out as one, but this makes it much more<br>
likely that you can allocate 1/2 of VRAM as one buffer.<br>
<br>
Give me till the end of the week to finish this and then we can test<br>
if that's sufficient or if we need to do more.<br>
<br>
Regards,<br>
Christian.<br>
<br>
Am 16.08.2016 um 16:33 schrieb Marek Olšák:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Hi,<br>
<br>
I'm seeing random temporary freezes (up to 2 seconds) under memory<br>
pressure. Before I describe the exact circumstances, I'd like to say<br>
that this is a serious issue affecting playability of certain AAA<br>
Linux games.<br>
<br>
In order to reproduce this, an application should:<br>
- allocate a few very large buffers (256-512 MB per buffer)<br>
- allocate more memory than there is available VRAM. The issue also<br>
occurs (but at a lower frequency) if the app needs only 80% of VRAM.<br>
<br>
Example: ttm_bo_validate needs to migrate a 512 MB buffer. The total<br>
size of moved memory for that call can be as high as 1.5 GB. This is<br>
always followed by a big temporary drop in VRAM usage.<br>
<br>
The game I'm testing needs 3.4 GB of VRAM.<br>
<br>
Setups:<br>
Tonga - 2 GB: It's nearly unplayable because freezes occur too often.<br>
Fiji - 4 GB: There is one freeze at the beginning (which is annoying<br>
too); after that it's smooth.<br>
<br>
So even 4 GB is not enough.<br>
<br>
Workarounds:<br>
- Split buffers into smaller pieces in the kernel. It's not necessary<br>
to manage memory at page granularity (64 KB). Splitting buffers into<br>
16 MB pieces might not be optimal, but it would be a significant<br>
improvement.<br>
- Or do the same in Mesa. This would prevent inter-process and<br>
inter-API buffer sharing for split buffers (DRI, OpenCL), but it would<br>
at least let us verify how much the situation improves.<br>
<br>
Other issues with the same cause:<br>
- Allocations requesting 1/3 or more of VRAM have a high chance of<br>
failing. It's generally not possible to allocate 1/2 or more of VRAM as<br>
one buffer.<br>
<br>
Comments welcome,<br>
<br>
Marek<br>
_______________________________________________<br>
amd-gfx mailing list<br>
<a href="mailto:amd-gfx@lists.freedesktop.org" target="_blank">amd-gfx@lists.freedesktop.org</a><br>
<a href="https://lists.freedesktop.org/mailman/listinfo/amd-gfx" rel="noreferrer" target="_blank">https://lists.freedesktop.org/mailman/listinfo/amd-gfx</a><br>
</blockquote>
</blockquote></blockquote>
<br>
</blockquote>
</blockquote>
<br>
<br>
</div></div></blockquote></div><br></div></div></div></div></div>