[Mesa-dev] Has anyone stressed radeonsi memory?
Marek Olšák
maraeo at gmail.com
Fri Nov 17 00:33:52 UTC 2017
What is the staging area? Note that radeonsi creates all textures in
VRAM. The driver allocates its own staging copy (in RAM) for each
texture upload and deallocates it after the upload is done. The driver
also doesn't release memory immediately; it keeps it and recycles it
for future allocations, or releases it when it's unused for some time.
This makes staging allocations for texture uploads very cheap. If OGRE
does its own staging on top of that, it just adds unnecessary work and memory usage.
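
For reference, the simple path that relies on the driver's internal staging is roughly this (a minimal sketch, assuming a GL 4.2+ core context with entry points already loaded, e.g. via glad; the helper name is made up):

// Upload straight from client memory and let radeonsi do the staging.
// The driver copies `pixels` into its own RAM staging buffer before the
// call returns, so no application-side staging region is needed.
GLuint uploadTexture(GLsizei width, GLsizei height, const void *pixels)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, width, height); // storage lives in VRAM
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);        // driver stages in RAM
    return tex;
}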
Marek
On Tue, Nov 14, 2017 at 6:43 PM, Michel Dänzer <michel at daenzer.net> wrote:
> On 13/11/17 04:39 AM, Matias N. Goldberg wrote:
>>
>> I am on a Radeon RX 560 2GB, using Mesa git-57c8ead0cd (so... not too new, not too old), kernel 4.12.10.
>>
>> I've been getting complaints about out-of-memory crashes in our WIP branch of Ogre 2.2, and I fixed them.
>>
>> I made a stress test that loads 495 textures with very different resolutions (most of them not power-of-two); the total memory for those textures is around 700MB (for some reason radeontop reports that all 2GB of my card are used during this stress test).
>> Additionally, 495 cubes (one cube for each texture) are rendered to screen to ensure the driver keeps them resident.
>>
>> The problem is, we have different strategies:
>> 1. At one extreme, we load every texture into a staging region one at a time, and then copy from the staging region to the final texture (roughly as in the sketch below).
>> 2. At the other extreme, we load all textures to RAM at once and use one giant staging region.
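>>
>> In GL terms, strategy 1 boils down to roughly this (a simplified sketch, not our actual OGRE code; names are illustrative, and the staging PBO is assumed to have been allocated earlier with glBufferData):
>>
>> // Reuse one staging PBO, filling and consuming it one texture at a time.
>> #include <cstring>
>> void uploadViaStaging(GLuint stagingPbo, GLuint tex,
>>                       GLsizei width, GLsizei height,
>>                       const void *pixels, GLsizeiptr sizeBytes)
>> {
>>     glBindBuffer(GL_PIXEL_UNPACK_BUFFER, stagingPbo);
>>     void *dst = glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, sizeBytes,
>>                                  GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
>>     std::memcpy(dst, pixels, (size_t)sizeBytes);
>>     glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
>>
>>     glBindTexture(GL_TEXTURE_2D, tex);
>>     // With a PBO bound, the last argument is an offset into the staging buffer.
>>     glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
>>                     GL_RGBA, GL_UNSIGNED_BYTE, (const void *)0);
>>     glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
>> }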
>>
>> Loading everything at once causes a GL_OUT_OF_MEMORY error while creating the 700MB staging area. OK... sounds sorta reasonable.
>>
>> But things get interesting when loading with a 512MB staging area:
>> 1. Loading goes fine.
>> 2. For a time, everything works fine.
>> 3. If I hide all cubes so that they aren't shown anymore:
>> 1. Framerate usually goes way down (not always), like 8 fps or so (it should be around 1000 fps while empty and around 200 fps while showing the cubes). How slow it becomes is not consistent.
>> 2. radeontop shows consumption goes down a lot (like half or more).
>> 3. A few seconds later, I almost always get a crash (SIGBUS) while writing to a UBO that had been persistently mapped (non-coherent) since the beginning of the application (see the sketch after this list).
>> 4. Running through valgrind, I don't get a crash.
>> 5. There are no errors reported by OpenGL.
>> 4. I don't get a crash if I never hide the cubes.
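>>
>> For context on the persistently mapped UBO mentioned above: it is mapped without GL_MAP_COHERENT_BIT, so every CPU write has to be made visible with an explicit flush before the GPU reads it. A simplified sketch of that setup (GL 4.4 glBufferStorage; names are illustrative, not our actual code):
>>
>> // Create an immutable buffer that can stay mapped for the application's lifetime.
>> void *createPersistentUbo(GLuint *outUbo, GLsizeiptr uboSize)
>> {
>>     glGenBuffers(1, outUbo);
>>     glBindBuffer(GL_UNIFORM_BUFFER, *outUbo);
>>     glBufferStorage(GL_UNIFORM_BUFFER, uboSize, nullptr,
>>                     GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT);
>>     void *ptr = glMapBufferRange(GL_UNIFORM_BUFFER, 0, uboSize,
>>                                  GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT |
>>                                  GL_MAP_FLUSH_EXPLICIT_BIT);
>>     return ptr; // stays valid until the buffer is deleted; writes go through this pointer
>> }
>>
>> // Per frame: write into the returned pointer, then make the writes visible:
>> //   glFlushMappedBufferRange(GL_UNIFORM_BUFFER, offset, length);
>> //   glBindBufferBase(GL_UNIFORM_BUFFER, bindingPoint, ubo);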
>>
>> With a smaller staging area (256MB or lower), everything is always fine.
>>
>> So... is this behavior expected?
>> Am I uncovering a weird bug in how radeonsi/amdgpu-pro handle memory pages?
>
> Are you using the amdgpu kernel driver from an amdgpu-pro release or
> from the upstream Linux kernel? (If you're not sure, provide the dmesg
> output and Xorg log file)
>
> If the latter, can you try a 4.13 or 4.14 kernel and see if that works
> better?
>
>
>> I'd normally update to the latest git and then create a test case if the problem persists; but I pulled the latest git and saw that it required me to recompile LLVM as well...
>
> Why, doesn't your distro have LLVM development packages?
>
>
> --
> Earthling Michel Dänzer | http://www.amd.com
> Libre software enthusiast | Mesa and X developer