<html>
<head>
<base href="https://bugs.freedesktop.org/" />
</head>
<body>
<p>
<div>
<b><a class="bz_bug_link
bz_status_NEW "
title="NEW - webdav: memory not been freed makes qemu crash"
href="https://bugs.freedesktop.org/show_bug.cgi?id=91350#c9">Comment # 9</a>
on <a class="bz_bug_link
bz_status_NEW "
title="NEW - webdav: memory not been freed makes qemu crash"
href="https://bugs.freedesktop.org/show_bug.cgi?id=91350">bug 91350</a>
from <span class="vcard"><a class="email" href="mailto:marcandre.lureau@gmail.com" title="Marc-Andre Lureau <marcandre.lureau@gmail.com>"> <span class="fn">Marc-Andre Lureau</span></a>
</span></b>
<pre>(In reply to Victor Toso from <a href="show_bug.cgi?id=91350#c8">comment #8</a>)
<span class="quote">> Thanks for taking a look at this,
>
> (In reply to Marc-Andre Lureau from <a href="show_bug.cgi?id=91350#c7">comment #7</a>)
> > so the two massif profiles aren't that different. But the second one has a
> > weird peak spike, it seems this is the bad guy:
> >
> > ->44.37% (239,139,529B) 0x4EAA766: spice_realloc (mem.c:123)
> > | ->44.37% (239,137,425B) 0x4E37B98: __spice_char_device_write_buffer_get
> > (char_device.c:544)
> > | | ->44.37% (239,137,069B) 0x4E8EAD7:
> > spicevmc_red_channel_alloc_msg_rcv_buf (spicevmc.c:326)
> > | | | ->44.37% (239,137,069B) 0x4E4D184: red_channel_client_receive
> > (red_channel.c:272)
> >
> >
> > 240M... it looks wrong :)
>
> Well, the file has 327M :P</span >
ok, but the webdav channel uses 64k messages max, iirc.
it's weird that webdav would hit memory issues and not usbredir, for example
<span class="quote">>
> __spice_char_device_write_buffer_get tries to get a buffer from the memory
> pool queue; if the queue is empty it creates another WriteBuffer, and after
> the data is written to the guest, it inserts the WriteBuffer back into the
> memory pool queue.
>
> The WIP patches limit the memory pool to a max size of (10 * 65535 B) and
> also free the memory pool queue when the client disconnects.</span >
ah..
<span class="quote">>
> But even after disconnection the memory is not freed in the qemu process.</span >
the pool may keep the memory across reconnections, no?
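the get-or-allocate / recycle-after-write pattern described above looks roughly like this (a minimal standalone sketch with made-up names and sizes, not the actual spice-server code):

```c
#include <stdlib.h>

/* Hypothetical sketch of the recycled write-buffer pool discussed above.
 * Names, sizes and the cap are illustrative. */
#define MSG_SIZE 65535
#define POOL_MAX 10              /* WIP patch idea: cap pool at 10 * 65535 B */

typedef struct WriteBuffer {
    struct WriteBuffer *next;
    size_t size;
    unsigned char data[MSG_SIZE];
} WriteBuffer;

static WriteBuffer *pool_head;   /* free list acting as the memory pool */
static int pool_len;

/* Get a buffer: reuse one from the pool, or allocate a fresh one. */
static WriteBuffer *write_buffer_get(void)
{
    if (pool_head) {
        WriteBuffer *buf = pool_head;
        pool_head = buf->next;
        pool_len--;
        return buf;
    }
    return calloc(1, sizeof(WriteBuffer));
}

/* After the data is written to the guest, recycle the buffer --
 * but only while the pool is below its cap; otherwise free it. */
static void write_buffer_release(WriteBuffer *buf)
{
    if (pool_len < POOL_MAX) {
        buf->next = pool_head;
        pool_head = buf;
        pool_len++;
    } else {
        free(buf);
    }
}

/* On client disconnect, drain the pool so the memory can be returned. */
static void write_buffer_pool_free(void)
{
    while (pool_head) {
        WriteBuffer *buf = pool_head;
        pool_head = buf->next;
        free(buf);
    }
    pool_len = 0;
}
```

note that even after draining the pool, the qemu process RSS may not shrink: glibc's malloc can keep freed chunks cached in its arenas, so "memory not freed on the qemu process" as seen in top does not by itself prove a leak.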
<span class="quote">> QEMU also does use a lot of memory on this write
>
> ->49.64% (267,580,319B) 0x308B89: malloc_and_trace (vl.c:2724)
> | ->49.38% (266,167,561B) 0x67CE678: g_malloc (gmem.c:97)
> | | ->49.03% (264,241,152B) 0x511D8E: qemu_coroutine_new
> (coroutine-ucontext.c:106)
> | | | ->49.03% (264,241,152B) 0x510E24: qemu_coroutine_create
> (qemu-coroutine.c:74)
> (...)</span >
weird, it's as if qemu had created some 256 coroutines; maybe it does :)</pre>
</div>
</p>
<hr>
<span>You are receiving this mail because:</span>
<ul>
<li>You are the assignee for the bug.</li>
</ul>
</body>
</html>