<html>
<head>
<base href="https://bugs.freedesktop.org/" />
</head>
<body>
<p>
<div>
<b><a class="bz_bug_link
bz_status_NEW "
title="NEW - webdav: memory not been freed makes qemu crash"
href="https://bugs.freedesktop.org/show_bug.cgi?id=91350#c10">Comment # 10</a>
on <a class="bz_bug_link
bz_status_NEW "
title="NEW - webdav: memory not been freed makes qemu crash"
href="https://bugs.freedesktop.org/show_bug.cgi?id=91350">bug 91350</a>
from <span class="vcard"><a class="email" href="mailto:bugzilla@victortoso.com" title="Victor Toso <bugzilla@victortoso.com>"> <span class="fn">Victor Toso</span></a>
</span></b>
<pre><span class="quote">> > > 240M... it looks wrong :)
> >
> > Well, the file has 327M :P
>
> ok, but webdav channels uses max 64k messages iirc.</span>
Yes, but with big files that means many chunks of 64k.
<span class="quote">> it's weird that webdav would have memory issues and not usbredir for ex</span>
It might, if the data flow is fast enough to make the pool queue grow on the
spice server and keep qemu busy on the I/O.
<span class="quote">>
> >
> > The __spice_char_device_write_buffer_get try to get a buffer from memory
> > pool queue; If the queue is empty it creates another WriteBuffer and after
> > the data is written to the guest, it insert the WriteBuffer to the memory
> > pool queue again.
> >
> > The WIP patches try to limit the memory pool max size to (10 * 65535 B) and
> > it also free the memory pool queue when client disconnect.
>
> ah..
>
> >
> > But even after disconnection the memory is not freed on qemu process.
>
> the pool may keep the memory, across reconnection, no?</span>
Usually it does, and for !webdav that was fine. With webdav it should not keep a
huge pool, IMO.
The WIP frees the (10 * 65k) when no client is connected; I'll send this WIP to
the spice mailing list shortly.
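To make the bounded-pool idea concrete, here is a minimal, self-contained
sketch of what the WIP describes: reuse buffers from a free list, cap the
list at 10 entries, and drain it on client disconnect. All names, types and
the exact API here are illustrative assumptions, not the actual spice-server
code:

```c
#include <stdlib.h>

#define BUF_SIZE 65535   /* webdav message cap mentioned above          */
#define POOL_MAX 10      /* WIP limit: at most 10 * 65535 B pooled      */

typedef struct WriteBuffer {
    struct WriteBuffer *next;
    unsigned char data[BUF_SIZE];
} WriteBuffer;

typedef struct {
    WriteBuffer *head;   /* singly linked free list of idle buffers     */
    int count;           /* how many buffers are currently pooled       */
} BufferPool;

/* Reuse a pooled buffer if one is available, otherwise allocate. */
static WriteBuffer *pool_get(BufferPool *p)
{
    if (p->head) {
        WriteBuffer *b = p->head;
        p->head = b->next;
        p->count--;
        return b;
    }
    return malloc(sizeof(WriteBuffer));
}

/* Return a buffer to the pool, but really free it once the cap is hit,
 * so the pool can never grow past POOL_MAX buffers. */
static void pool_release(BufferPool *p, WriteBuffer *b)
{
    if (p->count >= POOL_MAX) {
        free(b);
        return;
    }
    b->next = p->head;
    p->head = b;
    p->count++;
}

/* Free everything, e.g. when the client disconnects. */
static void pool_drain(BufferPool *p)
{
    while (p->head) {
        WriteBuffer *b = p->head;
        p->head = b->next;
        free(b);
    }
    p->count = 0;
}
```

With this shape, a burst of in-flight writes still allocates as many buffers
as it needs, but only POOL_MAX of them survive as cached memory afterwards,
and a disconnect releases even those.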
<span class="quote">>
> > QEMU also does use a lot of memory on this write
> >
> > ->49.64% (267,580,319B) 0x308B89: malloc_and_trace (vl.c:2724)
> > | ->49.38% (266,167,561B) 0x67CE678: g_malloc (gmem.c:97)
> > | | ->49.03% (264,241,152B) 0x511D8E: qemu_coroutine_new
> > (coroutine-ucontext.c:106)
> > | | | ->49.03% (264,241,152B) 0x510E24: qemu_coroutine_create
> > (qemu-coroutine.c:74)
> > (...)
>
> weird, it's like qemu would create 256 coroutines, maybe it does :)</span>
Maybe! Should I open a bug there? :-)</pre>
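As a quick sanity check on the massif numbers above: if each ucontext
coroutine allocates a fixed 1 MiB stack (an assumption on my part about
qemu's default, not something the trace itself shows), the reported
264,241,152 B at qemu_coroutine_new would correspond to exactly 252 live
coroutines, close to the 256 guessed above:

```c
/* Back-of-the-envelope: how many coroutine stacks account for the
 * allocation massif attributes to qemu_coroutine_new()?
 * The 1 MiB stack size is assumed, not taken from the trace. */
static long coroutine_count(long reported_bytes, long stack_bytes)
{
    return reported_bytes / stack_bytes;
}
```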
</div>
</p>
</body>
</html>