For the archives, this was solved on IRC. My problem was using the default latency settings when calling pa_simple_new(). Setting pa_buffer_attr.fragsize to an appropriate value gave me the data quicker. If I understood them correctly, the pulse volume control applet sets the latency on all streams when it opens, which is why I was seeing that odd behavior. Thanks Lennart and phish3!
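In case it helps anyone else searching the archives, here is roughly what the fix looks like. This is a sketch rather than my exact code: the sample spec, the stream names, and the 10 ms fragment target are just example values.

/* Sketch: open a capture stream with an explicit fragment size instead of
 * the default latency settings, so pa_simple_read() returns data promptly.
 * Build roughly like:
 *   gcc record.c -o record $(pkg-config --cflags --libs libpulse-simple)
 */
#include <stdio.h>
#include <stdint.h>
#include <pulse/simple.h>
#include <pulse/error.h>

int main(void) {
    /* Example format: 16-bit little-endian stereo at 44100 Hz. */
    static const pa_sample_spec ss = {
        .format   = PA_SAMPLE_S16LE,
        .rate     = 44100,
        .channels = 2
    };

    /* For a record stream only fragsize (and maxlength) matter; the
     * playback-only fields stay at server defaults via (uint32_t) -1. */
    pa_buffer_attr attr;
    attr.maxlength = (uint32_t) -1;
    attr.tlength   = (uint32_t) -1;
    attr.prebuf    = (uint32_t) -1;
    attr.minreq    = (uint32_t) -1;
    attr.fragsize  = (uint32_t) pa_usec_to_bytes(10 * 1000 /* ~10 ms */, &ss);

    int error;
    pa_simple *s = pa_simple_new(NULL,            /* default server   */
                                 "my-recorder",   /* application name */
                                 PA_STREAM_RECORD,
                                 NULL,            /* default source   */
                                 "record",        /* stream name      */
                                 &ss,
                                 NULL,            /* default ch. map  */
                                 &attr,           /* <- the fix       */
                                 &error);
    if (!s) {
        fprintf(stderr, "pa_simple_new() failed: %s\n", pa_strerror(error));
        return 1;
    }

    for (;;) {
        uint8_t buf[640];
        if (pa_simple_read(s, buf, sizeof(buf), &error) < 0) {
            fprintf(stderr, "pa_simple_read() failed: %s\n", pa_strerror(error));
            break;
        }
        /* ... hand buf to the rest of the program ... */
    }

    pa_simple_free(s);
    return 0;
}

With the default buffer attributes the reads were coming back far too slowly for my 0.1 s requirement; requesting a small fragsize up front is what fixed it for me.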
-eric

On Fri, Oct 30, 2009 at 6:36 PM, eric <ekilfoil@gmail.com> wrote:
> On Fri, Oct 30, 2009 at 6:23 PM, Lennart Poettering <lennart@poettering.net> wrote:
>> Returning immediately *instead* of giving you 640 bytes of PCM?
>>
>> I mean, are you suggesting that pa_simple_read() is in fact *not*
>> returning 640 bytes?
>>
>> Or is your confusion simply because you expect that calling
>> pa_simple_read() for 640 bytes at 44 kHz will block for exactly
>> 3.6 ms? That's not how things work. If it did you'd get dropouts
>> in the time between two _read() calls.
>
> I definitely expected 640 bytes of PCM (~0.01 s, I thought) to be
> returned, and I assumed that pa_simple_read() would block until it
> had that much. That was the behavior in 0.9.14. Of course, the data
> is buffered, so right after opening the stream I wouldn't expect to
> have any yet. In other words:
>
> 1. Open the stream.
> 2. Sleep for x seconds.
> 3. 1000000 bytes of data are now in a buffer.
> 4. I read 640 bytes and get 640 bytes (this takes microseconds).
> 5. Eventually my reads catch up with the buffer.
>
> That is not the situation I am dealing with. I would be fine if I
> eventually caught up with the buffer, but I never do, and as you
> pointed out, that is most likely because of latency. I need to get
> data *at least* every 0.1 s.
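For anyone reproducing this, here is roughly how the behavior can be observed. It is a sketch, not my actual code: it times each 640-byte read on an already-opened record stream and shows whether the reads keep pace with real time or fall further behind.

/* Sketch: time each 640-byte pa_simple_read() on an already-opened
 * PA_STREAM_RECORD stream. On older glibc, link with -lrt for
 * clock_gettime(). */
#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <pulse/simple.h>
#include <pulse/error.h>

static void time_reads(pa_simple *s, unsigned count) {
    uint8_t buf[640];          /* same chunk size I read in my program */
    int error;

    for (unsigned i = 0; i < count; i++) {
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (pa_simple_read(s, buf, sizeof(buf), &error) < 0) {
            fprintf(stderr, "pa_simple_read() failed: %s\n",
                    pa_strerror(error));
            return;
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ms = (t1.tv_sec - t0.tv_sec) * 1000.0 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e6;
        fprintf(stderr, "read %u: 640 bytes in %.2f ms\n", i, ms);
    }
}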
>> If you want to do time synchronization, then query for the *time*,
>> don't count samples. The time will be independent of buffering and
>> so on. In the simple API there is pa_simple_get_latency(), which you
>> can query before you read something and which will then tell you how
>> much earlier what you are about to read was actually recorded.
>
> I don't really need time sync. If I drop packets, it's not *that*
> critical. But under normal operation I need to get audio every 0.1 s.
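For completeness, the query Lennart describes is a one-liner with the simple API. This is a rough sketch, not code from my program; it assumes an already-opened PA_STREAM_RECORD stream s, and that pa_simple_get_latency() reports failure by returning (pa_usec_t) -1.

/* Rough sketch: check, right before a read, how much earlier the data
 * you are about to read was actually recorded. */
#include <stdio.h>
#include <pulse/simple.h>
#include <pulse/error.h>

static void print_record_latency(pa_simple *s) {
    int error;
    pa_usec_t latency = pa_simple_get_latency(s, &error);

    if (latency == (pa_usec_t) -1)
        fprintf(stderr, "pa_simple_get_latency() failed: %s\n",
                pa_strerror(error));
    else
        fprintf(stderr, "next read was recorded %.1f ms ago\n",
                (double) latency / 1000.0);
}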
>> If you use the complex API then you can use pa_stream_get_time() to
>> get a stream time, relative to the beginning of the stream.
>
> I'm trying to avoid the complex API because, frankly, it's really
> complex. Is there a way to set the latency using the simple API?
-eric