[Spice-devel] lz4 and streaming compression
Frediano Ziglio
fziglio at redhat.com
Wed Jan 27 06:18:14 PST 2016
Hi,
after the analysis of image performance and looking at the way data are sent,
I was thinking about compression at the stream level (that is, at the socket
level instead of the image level). This would have the advantage of compressing
the entire traffic (even headers) and of making better use of dictionaries.
Currently lz4 does not take advantage of this, compressing image data just as
sequences of bytes: the dictionary is not reused but reset for every image,
decreasing the compression ratio.
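To give an idea, liblz4's streaming API can carry the dictionary from one
block to the next. A rough sketch (not tested, names and buffer sizes are just
illustrative, following the double-buffer pattern from the lz4 examples):

    #include <lz4.h>
    #include <string.h>

    #define BLOCK_SIZE (8 * 1024)

    typedef struct {
        LZ4_stream_t *stream;    /* carries the dictionary across blocks */
        char buf[2][BLOCK_SIZE]; /* previous block must stay addressable
                                    because it acts as the dictionary */
        int idx;
    } StreamCompressor;

    static void compressor_init(StreamCompressor *c)
    {
        c->stream = LZ4_createStream();
        c->idx = 0;
    }

    /* Compress one block (src_size <= BLOCK_SIZE) against the history
       of the previous blocks, instead of starting from an empty
       dictionary every time as we do now. */
    static int compress_block(StreamCompressor *c, const char *src,
                              int src_size, char *dst, int dst_capacity)
    {
        char *slot = c->buf[c->idx];
        memcpy(slot, src, src_size);
        c->idx ^= 1;             /* alternate buffers */
        return LZ4_compress_fast_continue(c->stream, slot, dst,
                                          src_size, dst_capacity, 1);
    }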
There is however one problem in the current implementation that makes this
difficult. Compression algorithms compress data block-wise: as you keep adding
data to a compressed stream, data are actually compressed only when a block is
full, so the client won't receive anything until the block is filled. To use
compression algorithms for streaming (like the -C option in ssh, which uses the
gzip algorithm), a flush operation is usually implemented. This operation
compresses the buffered data even if the block is not full. Obviously flushing
too often (which in our case could happen if we flush for every message we
send) reduces the compression ratio.
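For instance with zlib (the algorithm behind ssh -C) the difference is just
the flush flag passed to deflate(). A minimal sketch, with illustrative names
and error handling omitted:

    #include <zlib.h>

    /* Returns the number of compressed bytes ready to write to the
       socket; zs must have been set up with deflateInit(). With
       Z_NO_FLUSH zlib may buffer input while waiting for a full block;
       Z_SYNC_FLUSH forces all pending data out so the peer can decode
       it immediately, trading some compression ratio. */
    static int compress_msg(z_stream *zs, unsigned char *msg,
                            unsigned len, int flush_now,
                            unsigned char *out, unsigned out_cap)
    {
        zs->next_in = msg;
        zs->avail_in = len;
        zs->next_out = out;
        zs->avail_out = out_cap;
        deflate(zs, flush_now ? Z_SYNC_FLUSH : Z_NO_FLUSH);
        return (int)(out_cap - zs->avail_out);
    }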
Besides compression, I think I'm going to implement flush to use the cork
feature at the network layer. I already ran some tests a while ago, and this
reduces network bandwidth usage. Combined with my patches to remove the need
for push, it could lower bandwidth usage even more.
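On Linux the cork feature is the TCP_CORK socket option; the pattern would
look something like this (just a sketch):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    static void set_cork(int fd, int on)
    {
        /* While corked the kernel coalesces partial writes into full
           frames; uncorking flushes whatever is left immediately. */
        setsockopt(fd, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));
    }

    /* usage: set_cork(fd, 1); write all pending messages; set_cork(fd, 0); */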
What do people think?
My todo list seems to keep growing. Besides the refactoring branch, which has
about 140 patches (it was about 400 when it started), my miscellaneous patches
are approaching one hundred, distributed across about 20 branches!
Frediano