[pulseaudio-discuss] Client setting for pa_buffer_attr for low latency

Lennart Poettering lennart at poettering.net
Sat Jan 12 14:54:23 PST 2008


On Thu, 03.01.08 12:54, Peter Onion (Peter.Onion at btinternet.com) wrote:

Hi!

> I have some existing "game like" applications that currently use ALSA to
> produce some sound effects linked to button presses and other events.
> 
> I've just moved my PCs over from Gentoo to Fedora 8 so I've come across
> pulseaudio for the first time.
> 
> I've used some of the examples in the 0.9.8 tarball to make a first
> try at using the pulseaudio client APIs with a glib event loop, and it
> seems to be working OK.
> 
> I have had problems working out suitable values for the fields in
> pa_buffer_attr.  Because I need the sounds to be synchronised to events
> on the GUI, the default values are unsuitable, as they produce a large
> buffer and latency.  But if the values are set too low the audio is
> noisy, and sounds as if it is being played at the wrong (too low)
> sampling rate.
> 
> By trial and error I've settled on the following values for now....
> attr.maxlength = 4800;
> attr.tlength = 2880;
> attr.prebuf = 960;
> attr.minreq = 960;
> 
> These are multiples of 480 because another of the applications produces
> 480 samples of sound for each 100Hz timer tick, and changing this
> would require work on code that has been stable for a couple of years
> and that I would prefer not to have to touch (because it's quite
> complicated ;) ).
> 
> I've looked but I can't find any guidance in the documentation on how to
> work out appropriate values for the attr structure.

This is actually a science of its own. ;-)

Generally I recommend using large buffers (2s or so), and exploiting
the fact that you can rewrite the playback buffer (via seeking) at any
time, to allow low-latency reaction to external events. This isn't
suitable in many cases, however.
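
For illustration, a rough sketch of that seek-and-rewrite idea. Here
render_effect() is a made-up stand-in for whatever produces your sound
effect, and the 50ms figure is arbitrary; "s" is assumed to be a
connected playback stream and "ss" its sample spec:

    #include <stdlib.h>
    #include <pulse/pulseaudio.h>

    /* Hypothetical renderer for the sound effect. */
    extern void render_effect(void *buf, size_t nbytes);

    /* Overwrite the server-side playback buffer at the current read
     * index, so the effect becomes audible (almost) at once. */
    static void play_effect_now(pa_stream *s, const pa_sample_spec *ss) {
        size_t nbytes = pa_usec_to_bytes(50 * PA_USEC_PER_MSEC, ss);
        void *buf = malloc(nbytes);

        render_effect(buf, nbytes);

        /* Offset 0 with PA_SEEK_RELATIVE_ON_READ means "write at the
         * read index", replacing whatever was queued; PA frees buf for
         * us via the free() callback. */
        pa_stream_write(s, buf, nbytes, free, 0, PA_SEEK_RELATIVE_ON_READ);
    }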

Generally, for low-latency behaviour only tlength and minreq are
relevant.

tlength can be identified with the "worst-case" latency, i.e. how many
bytes PA will ask your app for to fill up the server-side playback
buffer for the app. You could use pa_usec_to_bytes(2*PA_USEC_PER_SEC,
&ss) to set this to 2s. Then, every time PA notices that the playback
buffer it maintains for your client is filled to less than this, it
will ask your app for more data. However, it won't ask for less than
minreq bytes. I.e. in an ideal world, where fulfilling a request from
PA would take zero time, the latency induced by PA's buffering would
oscillate between tlength and tlength minus minreq. In reality,
however, because it takes time until your app gets the data request
from PA (scheduling latency, ...), the buffer might run much emptier
than this. The smaller you choose minreq, the more requests need to be
sent to the application, the more CPU you consume, and the more
network traffic you generate.
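
As a minimal sketch, filling in pa_buffer_attr along these lines might
look as follows. The 2s/500ms figures are example values only, and
(uint32_t) -1 asks the server to pick its default:

    #include <pulse/pulseaudio.h>

    /* Sketch: derive tlength/minreq from time-based targets for a
     * given sample spec. */
    static void fill_buffer_attr(pa_buffer_attr *attr,
                                 const pa_sample_spec *ss) {
        attr->tlength   = pa_usec_to_bytes(2 * PA_USEC_PER_SEC, ss);
        attr->minreq    = pa_usec_to_bytes(500 * PA_USEC_PER_MSEC, ss);
        attr->maxlength = attr->tlength;  /* see the maxlength notes below */
        attr->prebuf    = (uint32_t) -1;  /* let the server choose */
        attr->fragsize  = (uint32_t) -1;  /* recording only, unused here */
    }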

Please note that just selecting an arbitrary fixed length for
maxlength might break your app on networked systems or systems that
use different scheduling in the kernel (different HZ, no
CONFIG_PREEMPT, other OS).

If you are interested in writing a client with the lowest possible
latency, then I'd suggest writing code like this:

Start with a small tlength, and a minreq that's half that size. Then
subscribe to underflow events
(pa_stream_set_underflow_callback()), and if you get one, modify the
buffer_attr attributes (pa_stream_set_buffer_attr()): double tlength
and minreq, and go on, again checking for underflows, and so on. (It
might be a good idea, though, to cap this algorithm at 2s or so, since
at that point something is clearly going very wrong.) However, this
scheme is only possible with the very latest PA API.
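
A sketch of that scheme might look like this (globals are used for
brevity; "attr" and "ss" are assumed to be the values the stream was
created with, and the 2s cap is the sanity limit mentioned above):

    #include <pulse/pulseaudio.h>

    static pa_buffer_attr attr;  /* as passed when connecting the stream */
    static pa_sample_spec ss;    /* the stream's sample spec */

    static void underflow_cb(pa_stream *s, void *userdata) {
        size_t cap = pa_usec_to_bytes(2 * PA_USEC_PER_SEC, &ss);
        (void) userdata;

        if (attr.tlength >= cap)
            return;              /* something else is going very wrong */

        /* Double tlength and minreq and push the new attributes to PA. */
        attr.tlength  *= 2;
        attr.minreq   *= 2;
        attr.maxlength = attr.tlength;

        pa_operation *o = pa_stream_set_buffer_attr(s, &attr, NULL, NULL);
        if (o)
            pa_operation_unref(o);
    }

    /* After creating the stream:
     *     pa_stream_set_underflow_callback(s, underflow_cb, NULL);  */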

So, again: "tlength" corresponds to the "worst-case" latency. "minreq"
is probably best set so that it corresponds to tlength minus the
time your application needs to fulfil a request from PA (minus maybe
some safety margin). So let's say you want 100ms of latency, and your
app needs 5ms to respond to a "more data" request from PA. Then set
tlength=100ms and minreq=90ms or so. PA will then ask the app to fill
up 100ms of data each time the fill level drops to just 10ms. Please
note that with the kernel parameter HZ set to 100 (which used to be
the default), the scheduling interval is 10ms, so if you want to play
safe and don't plan to use RT scheduling, you should assume that you
need at least 10ms or so to fulfil a request from PA.
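
In code, that 100ms/5ms example (with "ss" again being your sample
spec) comes down to roughly:

    /* 100ms worst-case latency; PA re-requests data once only ~10ms
     * is left, leaving the app ~5ms to respond plus a safety margin. */
    attr.tlength = pa_usec_to_bytes(100 * PA_USEC_PER_MSEC, &ss);
    attr.minreq  = pa_usec_to_bytes( 90 * PA_USEC_PER_MSEC, &ss);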

It gets, however, even more complicated: there's some latency in the
system that is inherent to the access mode of the audio device. On PA
<= 0.9.8 (i.e. any released version) this depends on the fragment
settings PA configured the audio device with. In my development
version this can be close to 0 without any drawbacks. This is the
latency that is shown in "pacmd ls" for each device.

The remaining attributes of pa_buffer_attr have the following meaning:

maxlength: you can write more data to the PA server than you were
asked for. This value specifies how much in total may be in the
playback buffer. Most people probably don't want to use this feature,
and should set it to the same value as tlength, meaning that the
server-side playback buffer will always be filled up completely.

fragsize: only relevant for recording: the server delivers recorded
data to the client in fragments of roughly this size, so for record
streams it plays a role similar to the one minreq plays for playback.

prebuf: the start threshold. Once at least this many bytes are
available in the playback buffer, PA will start playback
automatically. If you set this to 0, PA will enter "free-wheel" mode
for this stream. If you use this, you should use it together with the
corking feature and create the stream in corked mode.
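
A sketch of that combination (the default sink is assumed, and "attr"
is expected to have prebuf set to 0):

    #include <pulse/pulseaudio.h>

    /* Connect with prebuf = 0 ("free-wheel") and start corked, so PA
     * never starts or stops playback on its own. */
    static void connect_freewheel(pa_stream *s, const pa_buffer_attr *attr) {
        pa_stream_connect_playback(s, NULL /* default sink */, attr,
                                   PA_STREAM_START_CORKED, NULL, NULL);
    }

    /* Later, once enough data has been written, start playback: */
    static void start_playback(pa_stream *s) {
        pa_operation *o = pa_stream_cork(s, 0 /* uncork */, NULL, NULL);
        if (o)
            pa_operation_unref(o);
    }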

Also have a look at:

http://0pointer.de/lennart/projects/pulseaudio/doxygen/structpa__buffer__attr.html#cdbe30979a50075479ee46c56cc724ee

I hope these explanations shed some light on this. If not, ask for
clarification!

Lennart

-- 
Lennart Poettering                        Red Hat, Inc.
lennart [at] poettering [dot] net         ICQ# 11060553
http://0pointer.net/lennart/           GnuPG 0x1A015CC4


