[gst-devel] pulsesink optimizations
Lennart Poettering
lennart at poettering.net
Fri Oct 16 04:02:00 CEST 2009
On Thu, 15.10.09 03:18, René Stadler (mail at renestadler.de) wrote:
> >Besides, it seems to me that the total latency is really defined by
> >tlength, if you increase minreq the size of the server buffer will be
> >adjusted. See Lennart's page at
> >http://pulseaudio.org/wiki/LatencyControl, latency is defined with
> >tlength, minreq has no direct impact on latency.
> >And as I mentioned, the patch doesn't change the overhead, since we
> >keep writing the same size no matter what minreq was set to.
>
> Yes indeed, in fact the patch gives next to no CPU load improvement.
> However, it causes the writes from gst to pa to be grouped together,
> with larger intervals of inactivity in between (tunable via the
> latency-time property). This grouping improves power management: on
> the N900 I measured a 10% penalty in energy consumption without the
> patch applied (MP3 playback over a wired headset, display off, i.e.
> the typical long-term playback use-case).
Hm, I wonder if I should formalize that in the PA API, i.e. provide
something that would allow the app to officially declare when one of
those packet "bursts" starts and when it ends. Something like this:
pa_stream_begin_write_burst();
pa_stream_write(...);
pa_stream_write(...);
pa_stream_write(...);
pa_stream_write(...);
pa_stream_end_write_burst();
And then add a couple of optimizations internally that would already
flush the buffers before the burst is over, according to some wallclock
timeout or when a shm tile fills up.
Or maybe that is too complex. Dunno.
Lennart
--
Lennart Poettering Red Hat, Inc.
lennart [at] poettering [dot] net
http://0pointer.net/lennart/ GnuPG 0x1A015CC4