[Spice-devel] Regression: qemu crash of hvm domUs with spice (backtrace included)
Fabio Fantoni
fabio.fantoni at m2r.biz
Tue May 12 06:54:56 PDT 2015
On 12/05/2015 12:26, Fabio Fantoni wrote:
> On 12/05/2015 11:23, Fabio Fantoni wrote:
>> On 11/05/2015 17:04, Fabio Fantoni wrote:
>>> On 21/04/2015 14:53, Stefano Stabellini wrote:
>>>> On Tue, 21 Apr 2015, Fabio Fantoni wrote:
>>>>> On 21/04/2015 12:49, Stefano Stabellini wrote:
>>>>>> On Mon, 20 Apr 2015, Fabio Fantoni wrote:
>>>>>>> I updated Xen and QEMU from Xen 4.5.0 (with its included upstream
>>>>>>> qemu) to Xen 4.5.1-pre with qemu-upstream from stable-4.5 (I changed
>>>>>>> Config.mk to use revision "master").
>>>>>>> A few minutes after I booted a Windows 7 64-bit domU, qemu crashed;
>>>>>>> I tried twice with the same result.
>>>>>>>
>>>>>>> In the domU's qemu log:
>>>>>>>> qemu-system-i386: malloc.c:3096: sYSMALLOc: Assertion `(old_top ==
>>>>>>>> (((mbinptr) (((char *) &((av)->bins[((1) - 1) * 2])) -
>>>>>>>> __builtin_offsetof
>>>>>>>> (struct malloc_chunk, fd)))) && old_size == 0) || ((unsigned long)
>>>>>>>> (old_size) >= (unsigned long)((((__builtin_offsetof (struct
>>>>>>>> malloc_chunk,
>>>>>>>> fd_nextsize))+((2 * (sizeof(size_t))) - 1)) & ~((2 *
>>>>>>>> (sizeof(size_t))) -
>>>>>>>> 1))) && ((old_top)->size & 0x1) && ((unsigned long)old_end &
>>>>>>>> pagemask)
>>>>>>>> ==
>>>>>>>> 0)' failed.
>>>>>>>> Killing all inferiors
>>>>>>> The full backtrace of the qemu crash is attached.
>>>>>>>
>>>>>>> After a quick search based on the backtrace, I found a probable
>>>>>>> (but unconfirmed) cause of the regression:
>>>>>>> http://xenbits.xen.org/gitweb/?p=staging/qemu-upstream-4.5-testing.git;a=commit;h=5c3402816aaddb15156c69df73c54abe4e1c76aa
>>>>>>>
>>>>>>> spice: make sure we don't overflow ssd->buf
>>>>>>>
>>>>>>> I also added qemu-devel and spice-devel in CC.
>>>>>>>
>>>>>>> If you need more information or tests, tell me and I'll post them.
>>>>>> Maybe you could try to revert the offending commit
>>>>>> (5c3402816aaddb15156c69df73c54abe4e1c76aa)? Or even better bisect
>>>>>> the
>>>>>> crash?
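The revert Stefano suggests can be scripted; the block below is a minimal sketch on a throwaway repository (a real run would instead execute `git revert 5c3402816aaddb15156c69df73c54abe4e1c76aa` inside the qemu-upstream tree and rebuild before retesting the domU):

```shell
#!/bin/sh
# Sketch of the revert test on a throwaway repo; names here are
# placeholders, the real run happens in the qemu tree.
set -e
demo=$(mktemp -d)
cd "$demo"
git init -q
git config user.email demo@example.invalid
git config user.name demo
echo good > file
git add file && git commit -qm "baseline"
echo suspect > file
git commit -qam "suspect change"
# "git revert" adds a new commit undoing the suspect one, so history
# still records exactly what was tested (unlike resetting the branch).
git revert --no-edit HEAD > /dev/null
cat file
```

After the revert, `file` is back to its baseline content, and the build under test contains everything except the suspect change.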
>>>>> Thanks for your reply.
>>>>>
>>>>> For now I reverted that system's dom0 to 4.5.0, because I'm busy
>>>>> tracking down another problem that causes very bad performance with
>>>>> no errors at all in the logs :( I don't know yet whether it is Xen
>>>>> related, kernel related, or something else.
>>>>>
>>>>> About this spice regression, I'll do further tests in the next few
>>>>> days (probably starting by reverting the spice patch in qemu), but
>>>>> any help is appreciated.
>>>>> Based on the data I have so far, is it possible that the problem is
>>>>> qemu trying to allocate more RAM or video RAM after the domU is
>>>>> created, which is not possible with Xen? The spice-related patch
>>>>> mentions dynamic allocation, for example.
>>>> It is probably caused by a commit in the range:
>>>>
>>>> 1ebb75b1fee779621b63e84fefa7b07354c43a99..0b8fb1ec3d666d1eb8bbff56c76c5e6daa2789e4
>>>>
>>>>
>>>> there are only 10 commits in that range. By using git bisect you
>>>> should
>>>> be able to narrow it down in just 3 tests.
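The bisect over that range can be sketched as follows. The demo below builds a throwaway 10-commit repository so it is self-contained and runnable anywhere; a real run would use qemu-upstream-4.5-testing, pass the good and bad hashes from the range above to `git bisect start`, and boot the Windows domU at each step instead of grepping a file:

```shell
#!/bin/sh
# Self-contained demo of the git bisect workflow on a throwaway repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.invalid
git config user.name demo
for i in $(seq 1 10); do
    echo "$i" > tick                      # make every commit non-empty
    # commit 7 plays the role of the first broken revision
    if [ "$i" -ge 7 ]; then echo bad > state; else echo good > state; fi
    git add tick state
    git commit -qm "commit $i"
done
git bisect start HEAD HEAD~9              # bad = newest, good = oldest
# "git bisect run CMD" marks a revision good when CMD exits 0 and bad
# otherwise, then checks out the next candidate automatically.
git bisect run sh -c 'grep -q good state' > /dev/null
first_bad=$(git show -s --format=%s refs/bisect/bad)
echo "first bad commit: $first_bad"
git bisect reset > /dev/null
```

With 10 candidate commits, bisect converges after about log2(10) ≈ 3-4 checkouts, which matches the "just 3 tests" estimate above.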
>>>
>>> Sorry for the delay, I was busy with many things. Today I retried with
>>> an updated stable-4.5, and also (in a second test) with "spice: make
>>> sure we don't overflow ssd->buf" reverted, but in both cases the
>>> regression remains :(
>>> Tomorrow I'll probably do more tests.
>>
>> I did another test, reverting this instead:
>> http://xenbits.xen.org/gitweb/?p=qemu-upstream-4.5-testing.git;a=commit;h=c9ac5f816bf3a8b56f836b078711dcef6e5c90b8
>>
>> And now I seem unable to reproduce the regression: before, it happened
>> after a few seconds up to 1-2 minutes, while now I have used the same
>> domU for 15-20 minutes without problems.
>> This is probably the cause of the regression, even if it seems strange
>> that it didn't happen in tests on unstable with the same patch a few
>> days ago.
>>
>> Any ideas?
>>
>> Thanks for any reply and sorry for my bad English.
>
> Bad news: the qemu crash still happens, although this time the qemu log
> shows different output; see attachment.
> After taking a look at the other patches, I saw:
> http://xenbits.xen.org/gitweb/?p=qemu-upstream-4.5-testing.git;a=commitdiff;h=7154fba0e51ec985ef621965d1b7120ad424fcbf
>
> Since it has "Conflicts: hw/display/vga.c" in its description, I'll
> try to revert it instead.
>
> Or can someone suggest another test I should try?
I also tried reverting the patch above, with the same result, so I
retried with the qemu from 4.5.0, and the crash seems to happen in that
case too... I'm going crazy :(
The full gdb log is attached.
Any ideas on how to find the problem, please?
Thanks for any reply and sorry for my bad English.
-------------- next part --------------
Full backtrace:
#0 0x00007ffff36e8165 in *__GI_raise (sig=<optimized out>) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
pid = <optimized out>
selftid = <optimized out>
#1 0x00007ffff36eb3e0 in *__GI_abort () at abort.c:92
act = {__sigaction_handler = {sa_handler = 0x555558ddeba0, sa_sigaction = 0x555558ddeba0}, sa_mask = {__val = {140737278660816, 140737014337136, 4, 140737014337376, 140737277706678, 206158430256, 140737014337416, 140737014337168, 87, 226653584, 140737351936019, 140737488348083, 140737278647399, 140737278651152, 3096, 140737277299604}}, sa_flags = -474017696, sa_restorer = 0x7ffff36b9c60}
sigs = {__val = {32, 0 <repeats 15 times>}}
#2 0x00007ffff372bdea in __malloc_assert (assertion=<optimized out>, file=<optimized out>, line=<optimized out>, function=<optimized out>) at malloc.c:351
No locals.
#3 0x00007ffff372ed13 in sYSMALLOc (av=<optimized out>, nb=<optimized out>) at malloc.c:3093
snd_brk = <optimized out>
front_misalign = <optimized out>
remainder = <optimized out>
tried_mmap = false
old_size = <optimized out>
size = <optimized out>
old_end = 0x555558ddeba0 ""
correction = <optimized out>
end_misalign = <optimized out>
aligned_brk = <optimized out>
p = <optimized out>
pagemask = 4095
#4 _int_malloc (av=0x7ffff3a3ce40, bytes=<optimized out>) at malloc.c:4776
p = <optimized out>
iters = <optimized out>
nb = 4196368
idx = <optimized out>
bin = <optimized out>
victim = 0x555558ddeba0
size = 0
victim_index = <optimized out>
remainder = <optimized out>
remainder_size = <optimized out>
block = 4
bit = <optimized out>
map = 2138988542
fwd = <optimized out>
bck = <optimized out>
errstr = <optimized out>
__func__ = "_int_malloc"
#5 0x00007ffff3730a70 in *__GI___libc_malloc (bytes=4196352) at malloc.c:3660
ar_ptr = 0x7ffff3a3ce40
victim = 0x400800
__func__ = "__libc_malloc"
#6 0x00007ffff4b1d280 in spice_malloc (n_bytes=4196352) at mem.c:93
mem = <optimized out>
__FUNCTION__ = "spice_malloc"
#7 0x00007ffff4b1d6ee in spice_chunks_linearize (chunks=0x7fffdc088fa0) at mem.c:226
data = <optimized out>
p = <optimized out>
i = <optimized out>
#8 0x00007ffff4afb4e6 in canvas_bitmap_to_surface (canvas=canvas@entry=0x7fffdc0eef70, bitmap=bitmap@entry=0x7fffdc044e88, palette=0x0, want_original=1) at ../spice-common/common/canvas_base.c:738
src = <optimized out>
image = <optimized out>
format = <optimized out>
__FUNCTION__ = "canvas_bitmap_to_surface"
#9 0x00007ffff4afb698 in canvas_get_bits (want_original=<optimized out>, bitmap=0x7fffdc044e88, canvas=0x7fffdc0eef70) at ../spice-common/common/canvas_base.c:1067
palette = <optimized out>
#10 canvas_get_image_internal (canvas=canvas@entry=0x7fffdc0eef70, image=0x7fffdc044e70, want_original=<optimized out>, want_original@entry=0, real_get=real_get@entry=1) at ../spice-common/common/canvas_base.c:1253
descriptor = 0x7fffdc044e70
surface = <optimized out>
converted = <optimized out>
wanted_format = 3691966320
surface_format = <optimized out>
saved_want_original = <optimized out>
__FUNCTION__ = "canvas_get_image_internal"
#11 0x00007ffff4afc0da in canvas_get_image (canvas=canvas@entry=0x7fffdc0eef70, image=<optimized out>, want_original=want_original@entry=0) at ../spice-common/common/canvas_base.c:1397
No locals.
#12 0x00007ffff4afe42e in canvas_draw_copy (spice_canvas=0x7fffdc0eef70, bbox=0x7fffdc0e9c80, clip=<optimized out>, copy=0x7fffe3bf1320) at ../spice-common/common/canvas_base.c:2370
canvas = 0x7fffdc0eef70
dest_region = {extents = {x1 = 5, y1 = 5, x2 = 161, y2 = 33}, data = 0x0}
surface_canvas = <optimized out>
src_image = <optimized out>
rop = SPICE_ROP_COPY
__FUNCTION__ = "canvas_draw_copy"
#13 0x00007ffff4ad120c in red_draw_qxl_drawable (worker=worker@entry=0x7fffe3218010, drawable=drawable@entry=0x7fffe33d8a58) at red_worker.c:4421
copy = {src_bitmap = 0x7fffdc044e70, src_area = {left = 5, top = 5, right = 161, bottom = 33}, rop_descriptor = 8, scale_mode = 1 '\001', mask = {flags = 194 '\302', pos = {x = -1027423550, y = -1027423550}, bitmap = 0x0}}
img1 = {descriptor = {id = 140736884378864, type = 80 'P', flags = 21 '\025', width = 32767, height = 4294902015}, u = {bitmap = {format = 112 'p', flags = 10 '\n', x = 32767, y = 3865334188, stride = 32767, palette = 0x7fffe66451c4, palette_id = 138783, data = 0x0}, quic = {data_size = 4084402800, data = 0x7fffe66451ac}, surface = {surface_id = 4084402800}, lz_rgb = {data_size = 4084402800, data = 0x7fffe66451ac}, lz_plt = {flags = 112 'p', data_size = 32767, palette = 0x7fffe66451ac, palette_id = 140737058722244, data = 0x21e1f}, jpeg = {data_size = 4084402800, data = 0x7fffe66451ac}, lz4 = {data_size = 4084402800, data = 0x7fffe66451ac}, zlib_glz = {glz_data_size = 4084402800, data_size = 32767, data = 0x7fffe66451ac}, jpeg_alpha = {flags = 112 'p', jpeg_size = 32767, data_size = 3865334188, data = 0x7fffe66451c4}}}
img2 = {descriptor = {id = 140737058722236, type = 196 '\304', flags = 81 'Q', width = 32767, height = 3786207248}, u = {bitmap = {format = 53 '5', flags = 156 '\234', x = 32767, y = 22176, stride = 0, palette = 0x5555566fc920, palette_id = 140737058708356, data = 0x539}, quic = {data_size = 4104887349, data = 0x56a0}, surface = {surface_id = 4104887349}, lz_rgb = {data_size = 4104887349, data = 0x56a0}, lz_plt = {flags = 53 '5', data_size = 32767, palette = 0x56a0, palette_id = 93825010747680, data = 0x7fffe6641b84}, jpeg = {data_size = 4104887349, data = 0x56a0}, lz4 = {data_size = 4104887349, data = 0x56a0}, zlib_glz = {glz_data_size = 4104887349, data_size = 32767, data = 0x56a0}, jpeg_alpha = {flags = 53 '5', jpeg_size = 32767, data_size = 22176, data = 0x5555566fc920}}}
surface = 0x7fffe32182f0
canvas = 0x7fffdc0eef70
clip = {type = 0 '\000', rects = 0x0}
__FUNCTION__ = "red_draw_qxl_drawable"
#14 0x00007ffff4adfad5 in red_draw_drawable (drawable=0x7fffe33d8a58, worker=0x7fffe3218010) at red_worker.c:4534
No locals.
#15 red_update_area (worker=worker@entry=0x7fffe3218010, area=area@entry=0x7fffe3bf1b60, surface_id=surface_id@entry=0) at red_worker.c:4787
container = <optimized out>
surface = 0x7fffe32182f0
ring = 0x7fffe3218308
ring_item = <optimized out>
rgn = {extents = {x1 = 0, y1 = 0, x2 = 1366, y2 = 768}, data = 0x0}
last = 0x7fffe33c9cd8
now = 0x7fffe33d8a58
__FUNCTION__ = "red_update_area"
#16 0x00007ffff4ae9644 in handle_dev_update_async (opaque=0x7fffe3218010, payload=<optimized out>) at red_worker.c:10992
worker = 0x7fffe3218010
msg = <optimized out>
rect = {left = 0, top = 0, right = 1366, bottom = 768}
qxl_dirty_rects = <optimized out>
num_dirty_rects = <optimized out>
surface = <optimized out>
surface_id = 0
qxl_area = {top = 0, left = 0, bottom = 768, right = 1366}
clear_dirty_region = 1
__FUNCTION__ = "handle_dev_update_async"
__func__ = "handle_dev_update_async"
#17 0x00007ffff4ac85d4 in dispatcher_handle_single_read (dispatcher=0x555556415e28) at dispatcher.c:139
ret = <optimized out>
type = <optimized out>
msg = 0x5555563cd330
ack = 4294967295
payload = 0x5555563daf20 "P\177=VUU"
#18 dispatcher_handle_recv_read (dispatcher=0x555556415e28) at dispatcher.c:162
No locals.
#19 0x00007ffff4aebc5c in red_worker_main (arg=<optimized out>) at red_worker.c:12175
events = <optimized out>
i = <optimized out>
num_events = 1
timers_queue_timeout = <optimized out>
worker = 0x7fffe3218010
__FUNCTION__ = "red_worker_main"
#20 0x00007ffff3a47b50 in start_thread (arg=<optimized out>) at pthread_create.c:304
__res = <optimized out>
pd = 0x7fffe3bf2700
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140737014343424, -7079689987510599981, 140737281073696, 140737014344128, 140737354125376, 3, 7079698237168975571, 7079663285551136467}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = <optimized out>
freesize = <optimized out>
__PRETTY_FUNCTION__ = "start_thread"
#21 0x00007ffff379195d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:112
No locals.
#22 0x0000000000000000 in ?? ()
No symbol table info available.
Registers:
rax 0x0 0
rbx 0x7ffff3a3ce98 140737280986776
rcx 0xffffffffffffffff -1
rdx 0x6 6
rsi 0x1067 4199
rdi 0x1053 4179
rbp 0x7ffff3a3ce40 0x7ffff3a3ce40
rsp 0x7fffe3bf0e38 0x7fffe3bf0e38
r8 0x7fffe3bf2700 140737014343424
r9 0x656e6769736e7528 7308892947874739496
r10 0x8 8
r11 0x206 518
r12 0x555558ddeba0 93825051519904
r13 0x0 0
r14 0x7f7e5ffe 2138988542
r15 0x4 4
rip 0x7ffff36e8165 0x7ffff36e8165 <*__GI_raise+53>
eflags 0x206 [ PF IF ]
cs 0x33 51
ss 0x2b 43
ds 0x0 0
es 0x0 0
fs 0x0 0
gs 0x0 0
Current instructions:
=> 0x7ffff36e8165 <*__GI_raise+53>: cmp $0xfffffffffffff000,%rax
0x7ffff36e816b <*__GI_raise+59>: ja 0x7ffff36e8182 <*__GI_raise+82>
0x7ffff36e816d <*__GI_raise+61>: repz retq
0x7ffff36e816f <*__GI_raise+63>: nop
0x7ffff36e8170 <*__GI_raise+64>: test %eax,%eax
0x7ffff36e8172 <*__GI_raise+66>: jg 0x7ffff36e8155 <*__GI_raise+37>
0x7ffff36e8174 <*__GI_raise+68>: test $0x7fffffff,%eax
0x7ffff36e8179 <*__GI_raise+73>: jne 0x7ffff36e8192 <*__GI_raise+98>
0x7ffff36e817b <*__GI_raise+75>: mov %esi,%eax
0x7ffff36e817d <*__GI_raise+77>: nopl (%rax)
0x7ffff36e8180 <*__GI_raise+80>: jmp 0x7ffff36e8155 <*__GI_raise+37>
0x7ffff36e8182 <*__GI_raise+82>: mov 0x352c8f(%rip),%rdx # 0x7ffff3a3ae18
0x7ffff36e8189 <*__GI_raise+89>: neg %eax
0x7ffff36e818b <*__GI_raise+91>: mov %eax,%fs:(%rdx)
0x7ffff36e818e <*__GI_raise+94>: or $0xffffffff,%eax
0x7ffff36e8191 <*__GI_raise+97>: retq
Threads backtrace:
Thread 9 (Thread 0x7fffdbfff700 (LWP 4313)):
#0 sem_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_timedwait.S:106
#1 0x00005555559b38cc in qemu_sem_timedwait (sem=0x5555563dcea8, ms=10000) at util/qemu-thread-posix.c:257
#2 0x0000555555849892 in worker_thread (opaque=0x5555563dce10) at thread-pool.c:97
#3 0x00007ffff3a47b50 in start_thread (arg=<optimized out>) at pthread_create.c:304
#4 0x00007ffff379195d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#5 0x0000000000000000 in ?? ()
Thread 8 (Thread 0x7fffe0acb700 (LWP 4312)):
#0 sem_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_timedwait.S:106
#1 0x00005555559b38cc in qemu_sem_timedwait (sem=0x5555563dcea8, ms=10000) at util/qemu-thread-posix.c:257
#2 0x0000555555849892 in worker_thread (opaque=0x5555563dce10) at thread-pool.c:97
#3 0x00007ffff3a47b50 in start_thread (arg=<optimized out>) at pthread_create.c:304
#4 0x00007ffff379195d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#5 0x0000000000000000 in ?? ()
Thread 7 (Thread 0x7fffe3017700 (LWP 4200)):
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:162
#1 0x00005555559b360a in qemu_cond_wait (cond=0x5555564202a0, mutex=0x5555564202d0) at util/qemu-thread-posix.c:135
#2 0x00005555558731d6 in vnc_worker_thread_loop (queue=0x5555564202a0) at ui/vnc-jobs.c:222
#3 0x0000555555873739 in vnc_worker_thread (arg=0x5555564202a0) at ui/vnc-jobs.c:323
#4 0x00007ffff3a47b50 in start_thread (arg=<optimized out>) at pthread_create.c:304
#5 0x00007ffff379195d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#6 0x0000000000000000 in ?? ()
Thread 6 (Thread 0x7fffe3bf2700 (LWP 4199)):
#0 0x00007ffff36e8165 in *__GI_raise (sig=<optimized out>) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64
#1 0x00007ffff36eb3e0 in *__GI_abort () at abort.c:92
#2 0x00007ffff372bdea in __malloc_assert (assertion=<optimized out>, file=<optimized out>, line=<optimized out>, function=<optimized out>) at malloc.c:351
#3 0x00007ffff372ed13 in sYSMALLOc (av=<optimized out>, nb=<optimized out>) at malloc.c:3093
#4 _int_malloc (av=0x7ffff3a3ce40, bytes=<optimized out>) at malloc.c:4776
#5 0x00007ffff3730a70 in *__GI___libc_malloc (bytes=4196352) at malloc.c:3660
#6 0x00007ffff4b1d280 in spice_malloc (n_bytes=4196352) at mem.c:93
#7 0x00007ffff4b1d6ee in spice_chunks_linearize (chunks=0x7fffdc088fa0) at mem.c:226
#8 0x00007ffff4afb4e6 in canvas_bitmap_to_surface (canvas=canvas@entry=0x7fffdc0eef70, bitmap=bitmap@entry=0x7fffdc044e88, palette=0x0, want_original=1) at ../spice-common/common/canvas_base.c:738
#9 0x00007ffff4afb698 in canvas_get_bits (want_original=<optimized out>, bitmap=0x7fffdc044e88, canvas=0x7fffdc0eef70) at ../spice-common/common/canvas_base.c:1067
#10 canvas_get_image_internal (canvas=canvas@entry=0x7fffdc0eef70, image=0x7fffdc044e70, want_original=<optimized out>, want_original@entry=0, real_get=real_get@entry=1) at ../spice-common/common/canvas_base.c:1253
#11 0x00007ffff4afc0da in canvas_get_image (canvas=canvas@entry=0x7fffdc0eef70, image=<optimized out>, want_original=want_original@entry=0) at ../spice-common/common/canvas_base.c:1397
#12 0x00007ffff4afe42e in canvas_draw_copy (spice_canvas=0x7fffdc0eef70, bbox=0x7fffdc0e9c80, clip=<optimized out>, copy=0x7fffe3bf1320) at ../spice-common/common/canvas_base.c:2370
#13 0x00007ffff4ad120c in red_draw_qxl_drawable (worker=worker@entry=0x7fffe3218010, drawable=drawable@entry=0x7fffe33d8a58) at red_worker.c:4421
#14 0x00007ffff4adfad5 in red_draw_drawable (drawable=0x7fffe33d8a58, worker=0x7fffe3218010) at red_worker.c:4534
#15 red_update_area (worker=worker@entry=0x7fffe3218010, area=area@entry=0x7fffe3bf1b60, surface_id=surface_id@entry=0) at red_worker.c:4787
#16 0x00007ffff4ae9644 in handle_dev_update_async (opaque=0x7fffe3218010, payload=<optimized out>) at red_worker.c:10992
#17 0x00007ffff4ac85d4 in dispatcher_handle_single_read (dispatcher=0x555556415e28) at dispatcher.c:139
#18 dispatcher_handle_recv_read (dispatcher=0x555556415e28) at dispatcher.c:162
#19 0x00007ffff4aebc5c in red_worker_main (arg=<optimized out>) at red_worker.c:12175
#20 0x00007ffff3a47b50 in start_thread (arg=<optimized out>) at pthread_create.c:304
#21 0x00007ffff379195d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#22 0x0000000000000000 in ?? ()
Thread 5 (Thread 0x7fffe9621700 (LWP 4198)):
#0 sem_timedwait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_timedwait.S:106
#1 0x00005555559b38cc in qemu_sem_timedwait (sem=0x5555563dcea8, ms=10000) at util/qemu-thread-posix.c:257
#2 0x0000555555849892 in worker_thread (opaque=0x5555563dce10) at thread-pool.c:97
#3 0x00007ffff3a47b50 in start_thread (arg=<optimized out>) at pthread_create.c:304
#4 0x00007ffff379195d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#5 0x0000000000000000 in ?? ()
Thread 4 (Thread 0x7fffe9f23700 (LWP 4197)):
#0 do_sigwait (set=0x7fffe9f22c50, sig=0x7fffe9f22c40) at ../nptl/sysdeps/unix/sysv/linux/../../../../../sysdeps/unix/sysv/linux/sigwait.c:65
#1 0x00007ffff3a4fe67 in __sigwait (set=<optimized out>, sig=<optimized out>) at ../nptl/sysdeps/unix/sysv/linux/../../../../../sysdeps/unix/sysv/linux/sigwait.c:100
#2 0x0000555555893e0a in qemu_dummy_cpu_thread_fn (arg=0x55555636fa30) at /mnt/raid-vm/RW/source/xen/Xen-stable/tools/qemu-xen-dir/cpus.c:911
#3 0x00007ffff3a47b50 in start_thread (arg=<optimized out>) at pthread_create.c:304
#4 0x00007ffff379195d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#5 0x0000000000000000 in ?? ()
Thread 3 (Thread 0x7fffea724700 (LWP 4196)):
#0 do_sigwait (set=0x7fffea723c50, sig=0x7fffea723c40) at ../nptl/sysdeps/unix/sysv/linux/../../../../../sysdeps/unix/sysv/linux/sigwait.c:65
#1 0x00007ffff3a4fe67 in __sigwait (set=<optimized out>, sig=<optimized out>) at ../nptl/sysdeps/unix/sysv/linux/../../../../../sysdeps/unix/sysv/linux/sigwait.c:100
#2 0x0000555555893e0a in qemu_dummy_cpu_thread_fn (arg=0x55555635ecf0) at /mnt/raid-vm/RW/source/xen/Xen-stable/tools/qemu-xen-dir/cpus.c:911
#3 0x00007ffff3a47b50 in start_thread (arg=<optimized out>) at pthread_create.c:304
#4 0x00007ffff379195d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#5 0x0000000000000000 in ?? ()
Thread 2 (Thread 0x7ffff7ff1700 (LWP 4184)):
#0 0x00007ffff3a4f1fd in read () at ../sysdeps/unix/syscall-template.S:82
#1 0x00007ffff5229626 in read_all (fd=26, data=0x7fffdc096cc0, data@entry=0x20, len=len@entry=16, nonblocking=nonblocking@entry=0) at xs.c:378
#2 0x00007ffff5229743 in read_message (h=h@entry=0x55555632ee90, nonblocking=nonblocking@entry=0) at xs.c:1150
#3 0x00007ffff522a05e in read_thread (arg=0x55555632ee90) at xs.c:1222
#4 0x00007ffff3a47b50 in start_thread (arg=<optimized out>) at pthread_create.c:304
#5 0x00007ffff379195d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:112
#6 0x0000000000000000 in ?? ()
Thread 1 (Thread 0x7ffff7ef6900 (LWP 4179)):
#0 0x00007ffff3786de1 in ppoll (fds=<optimized out>, nfds=<optimized out>, timeout=<optimized out>, sigmask=<optimized out>) at ../sysdeps/unix/sysv/linux/ppoll.c:58
#1 0x0000555555822f96 in qemu_poll_ns (fds=0x5555566fbe00, nfds=19, timeout=63041) at qemu-timer.c:316
#2 0x00005555557d74b0 in os_host_main_loop_wait (timeout=63041) at main-loop.c:229
#3 0x00005555557d7599 in main_loop_wait (nonblocking=0) at main-loop.c:484
#4 0x00005555558830a4 in main_loop () at vl.c:2056
#5 0x000055555588ab5f in main (argc=68, argv=0x7fffffffdf98, envp=0x7fffffffe1c0) at vl.c:4535