Trace compression improvements

José Fonseca jose.r.fonseca at gmail.com
Sat Jun 15 07:29:13 PDT 2013


On Sat, Jun 15, 2013 at 3:19 PM, José Fonseca <jose.r.fonseca at gmail.com> wrote:

> Thanks for the figures. They sound good.
>
> I finally started testing this.
>
> I noticed one problem though -- the handling of exceptions: it is
> imperative that when there is a crash inside a GL call, all pending calls
> up to the crash get written to the trace file. But this is not happening
> with the threaded-file branch.
>
> I made a test for this, at
> https://github.com/apitrace/apitrace-tests/commit/c8219f827e6aa4dcad2e94c6b05976b58c266ddc ,
> and tracing it should produce something like
>
>   $ apitrace dump -v exception.trace
>   [...]
>   11 glGetIntegerv(pname = GL_VIEWPORT, params = ?) // incomplete
>
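> A rough sketch of the flush-on-crash behaviour (illustrative names only,
> not the actual writer internals): the wrappers buffer each serialized
> call before invoking the real GL entry point, and a crash handler writes
> out whatever is still pending before the process dies:
>
>   // Minimal flush-on-crash sketch; names are illustrative.
>   #include <csignal>
>   #include <cstdio>
>   #include <vector>
>
>   static std::vector<char> pending;   // serialized calls not yet on disk
>   static FILE *traceFile;
>
>   static void flushPending(void) {
>       if (traceFile && !pending.empty()) {
>           std::fwrite(pending.data(), 1, pending.size(), traceFile);
>           std::fflush(traceFile);
>           pending.clear();
>       }
>   }
>
>   static void onCrash(int sig) {
>       // The crashing call is still only buffered in memory; it must be
>       // written out so "apitrace dump" can show it as "// incomplete".
>       flushPending();
>       std::signal(sig, SIG_DFL);
>       std::raise(sig);
>   }
>
>   int main(void) {
>       traceFile = std::fopen("exception.trace", "wb");
>       std::signal(SIGSEGV, onCrash);
>       // ... wrappers append each serialized call to `pending` before
>       // calling the real GL function ...
>       return 0;
>   }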
>
> Commit "Support for threaded compression" introduces abstractions for
> platform-specific thread functions which duplicate the existing ones.
> Please rewrite the code to use the existing abstractions, which are
> modeled on http://en.cppreference.com/w/cpp/thread/thread .
>
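> For reference, a compression worker built on that std::thread-shaped
> interface would look roughly like this (just a sketch of the interface
> usage, not the patch itself):
>
>   #include <condition_variable>
>   #include <mutex>
>   #include <thread>
>
>   static std::mutex mtx;
>   static std::condition_variable cv;
>   static bool haveWork = false;
>   static bool done = false;
>
>   // Runs on the worker thread; compresses buffers handed over by the
>   // tracing thread.
>   static void compressorLoop(void) {
>       std::unique_lock<std::mutex> lock(mtx);
>       while (!done) {
>           cv.wait(lock, [] { return haveWork || done; });
>           if (haveWork) {
>               // compress and write the filled buffer here
>               haveWork = false;
>           }
>       }
>   }
>
>   int main(void) {
>       std::thread worker(compressorLoop);
>       // ... producer fills buffers, sets haveWork and notifies cv ...
>       {
>           std::lock_guard<std::mutex> g(mtx);
>           done = true;
>       }
>       cv.notify_one();
>       worker.join();
>       return 0;
>   }
>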
> I believe we can proceed with the LZ4 format, however.
>

One request concerning LZ4: please refactor the code so that
trace_file.hpp doesn't include snappy.h, lz4.h, lz4hc.h, or zlib.h. This
file is included from several source files, so it should contain only the
interface, not any compression-specific implementation details.
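
For instance, something along these lines would do (a rough sketch of one
possible shape, not the exact code): the header declares only an abstract
file interface plus factory functions, and the snappy/LZ4/zlib headers are
included only from the per-format .cpp files that implement them.

  // trace_file.hpp -- interface only; no compression headers here.
  #include <cstddef>

  namespace trace {

  class File {
  public:
      virtual ~File() {}
      virtual bool write(const void *buffer, std::size_t length) = 0;
      virtual void flush(void) = 0;
  };

  // Factories implemented in per-format .cpp files, which are the only
  // places that include snappy.h, lz4.h/lz4hc.h or zlib.h.
  File *createSnappyFile(void);
  File *createLZ4File(void);
  File *createZLibFile(void);

  } // namespace trace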

The hope is that in the future the wrapper libraries won't need to
statically link against all of these libraries, but only against one (the
best). The other tools can still link against all of them for backwards
compatibility, but they can do so dynamically.

Jose


> Jose
>
> On Tue, Jun 4, 2013 at 9:14 PM, Eugene Velesevich
> <eugvelesevich at gmail.com> wrote:
>
>> The best improvements we've seen were on an ARM/Android board, where
>> the threaded approach eliminated very noticeable periodic stutters
>> caused by compression; however, it's hard to provide quantitative data
>> on that. On a modern x86 system, we're seeing 7-8% better FPS with
>> ipers, even with LZ4HC compression that consumes 30-40% CPU in its
>> thread.
>>
>> With regard to compression ratio, the LZ4-compressed trace of ipers is
>> 11-12% larger than the snappy one, but LZ4HC compresses 35-36% better
>> than snappy (you can check this using "apitrace repack").
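>>
>> For instance, repacking a trace and then comparing the file sizes shows
>> the difference; the invocation is along the lines of:
>>
>>   $ apitrace repack ipers.trace ipers.repacked.trace
>>   $ ls -l ipers.trace ipers.repacked.trace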
>>
>> On Sun, Jun 2, 2013 at 5:09 PM, José Fonseca <jose.r.fonseca at gmail.com>
>> wrote:
>> > I haven't looked at the changes in detail yet -- I'll do it as soon as
>> > I find the time -- but it sounds good in principle. Indeed, trying out
>> > LZ4 has been on the to-do list for some time, so thanks for doing this.
>> >
>> > Did you get any figures (speed & compression ratio) on how it compares
>> > with snappy? A good benchmark is the "ipers" demo, part of the Mesa
>> > demos. It is what Zack used when he was improving the compression
>> > speed with snappy.
>> >
>> > Jose
>> >
>> >
>> >
>> > On Sat, Jun 1, 2013 at 2:39 PM, evel <evel at ispras.ru> wrote:
>> >>
>> >> Hello,
>> >>
>> >> This patch series improves trace compression by offloading compression
>> >> to a separate thread and by providing alternative compression methods.
>> >> The main difference from the previously implemented threaded
>> >> compression is that locking overhead is significantly reduced, thanks
>> >> to using a double-buffered output buffer instead of a ring buffer that
>> >> was locked on each call dump operation.
>> >>
>> >> https://github.com/Testiki/apitrace/tree/threaded-file
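>> >>
>> >> The double buffering boils down to roughly the following (a simplified
>> >> sketch, not the actual patch): the tracing thread appends to the front
>> >> buffer without taking the lock, and the lock is only taken to hand a
>> >> full buffer over to the compression thread.
>> >>
>> >>   #include <condition_variable>
>> >>   #include <cstddef>
>> >>   #include <mutex>
>> >>   #include <vector>
>> >>
>> >>   static const std::size_t kBufferSize = 1 << 20;
>> >>   static std::vector<char> front;  // filled by the tracing thread
>> >>   static std::vector<char> back;   // drained by the compression thread
>> >>   static std::mutex mtx;
>> >>   static std::condition_variable cv;
>> >>   static bool backFull = false;
>> >>
>> >>   // Called for every traced call; locks only when the buffer is full.
>> >>   void writeCall(const char *data, std::size_t size) {
>> >>       front.insert(front.end(), data, data + size);
>> >>       if (front.size() >= kBufferSize) {
>> >>           std::unique_lock<std::mutex> lock(mtx);
>> >>           cv.wait(lock, [] { return !backFull; });
>> >>           front.swap(back);
>> >>           backFull = true;
>> >>           cv.notify_all();
>> >>       }
>> >>   }
>> >>
>> >>   // Runs on the compression thread.
>> >>   void compressorLoop(void) {
>> >>       for (;;) {
>> >>           std::vector<char> chunk;
>> >>           {
>> >>               std::unique_lock<std::mutex> lock(mtx);
>> >>               cv.wait(lock, [] { return backFull; });
>> >>               chunk.swap(back);
>> >>               backFull = false;
>> >>               cv.notify_all();
>> >>           }
>> >>           // compress `chunk` and append it to the trace file here
>> >>       }
>> >>   }
>> >>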
>> >> _______________________________________________
>> >> apitrace mailing list
>> >> apitrace at lists.freedesktop.org
>> >> http://lists.freedesktop.org/mailman/listinfo/apitrace
>> >
>> >
>> >
>> > _______________________________________________
>> > apitrace mailing list
>> > apitrace at lists.freedesktop.org
>> > http://lists.freedesktop.org/mailman/listinfo/apitrace
>> >
>>
>
>