<br><br><div class="gmail_quote">2009/7/8 Sebastian Dröge <span dir="ltr"><<a href="mailto:sebastian.droege@collabora.co.uk" target="_blank">sebastian.droege@collabora.co.uk</a>></span><br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
On Wednesday, 08.07.2009 at 12:44 +0200, Julien Isorce wrote:<br>
<div><div></div><div>> Hi,<br>
><br>
> I have an avi file which contains audio/x-raw-int (and video, but my<br>
> question is just about the audio).<br>
> Here are the caps:<br>
> caps = audio/x-raw-int, endianness=(int)1234, channels=(int)2,<br>
> width=(int)16, depth=(int)16, rate=(int)48000, signed=(boolean)true,<br>
> codec_data=(buffer)1000000000000100000000001000800000aa00389b71<br>
> and<br>
> (type: 118, taglist, audio-codec=(string)\"Uncompressed\\ 16-bit\\ PCM<br>
> \\ audio\";)<br>
><br>
> Using identity and -v, I can see that the buffer duration is around 10 sec<br>
> and the total duration is 20 sec.<br>
> So there are only 2 audio buffers.<br>
><br>
> Is there a GStreamer element that can change or split this buffer<br>
> duration? (Usually the audio buffer duration is about 20 or 50 ms.)<br>
><br>
> There is also "gst_query_set_latency", but what would be the impact<br>
> on the video (video buffer duration)?<br>
><br>
> Usually I configure the audio latency (= audio buffer duration) when<br>
> using alsasrc, but how can I do that with an AVI file?<br>
><br>
> Finally I can see :<br>
><br>
> Implementation:<br>
> Has getrangefunc(): gst_base_transform_getrange<br>
> Has custom eventfunc(): gst_base_transform_src_event<br>
> Has custom queryfunc(): 0xb7916800<br>
> Provides query types:<br>
> (3): latency (Latency)<br>
><br>
> in gst-inspect-0.10 audioresample<br>
><br>
> So audioresample is only able to change the latency? Any example?<br>
<br>
</div></div>There's no "audiosplit" element that does what you want, it should be<br>
quite easy to implement though (do you want to do it? :) ).</blockquote><div> </div><div>In some cases, when the source is encoded badly, the audio buffers end up very big. <br>I do not know what the buffer size of an audio device renderer is, but it would be better to split the buffers before handing them to the device. <br>
Moreover, I do not understand why a dedicated element is needed; it should be configurable through the GStreamer API.<br><br>For a video buffer, the smallest unit is 32 bits (for example), and all the 32-bit packets of an image only have meaning together (as a whole frame). But for audio, the smallest unit is 16 bits (for example), and we can group those units however we want; we just need to keep them in order.<br>
I am not sure I am being clear.<br><br></div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><br>
The latency query is something different though and the latency is not<br>
influenced by the buffer sizes. Simply put, it's the amount of time that<br>
is buffered inside the element.</blockquote><div> </div><div>You are right, I forgot to say "with a given and fixed rate"; then the latency determines the buffer sizes.<br></div><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<br>
<br>
audioresample adds some latency to the pipeline but has no effect on <br>
buffer sizes; instead it changes the sampling rate of the audio.</blockquote><div>ok <br></div><div> <br></div></div><br>
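For reference, the arithmetic such an "audiosplit" element would need is straightforward. Here is a minimal sketch in Python (illustrative only; it does not use GStreamer, and the function name is made up) that splits a raw interleaved PCM buffer into fixed-duration chunks using the caps quoted above (rate=48000, channels=2, width=16):

```python
# Splitting arithmetic for a hypothetical "audiosplit" element
# (illustrative sketch, no GStreamer involved).

RATE = 48000      # from the caps: rate=(int)48000
CHANNELS = 2      # channels=(int)2
WIDTH = 16        # width=(int)16 -> 2 bytes per sample

# One frame = one sample per channel; chunks must stay frame-aligned
# so channels are never split apart.
BYTES_PER_FRAME = CHANNELS * WIDTH // 8

def split_pcm(data, chunk_ms=20):
    """Split raw interleaved PCM into chunks of chunk_ms milliseconds,
    each aligned on a frame boundary (the last chunk may be shorter)."""
    frames_per_chunk = RATE * chunk_ms // 1000
    chunk_bytes = frames_per_chunk * BYTES_PER_FRAME
    return [data[i:i + chunk_bytes] for i in range(0, len(data), chunk_bytes)]

# One second of silence at these caps is RATE * BYTES_PER_FRAME bytes.
one_second = bytes(RATE * BYTES_PER_FRAME)
chunks = split_pcm(one_second, chunk_ms=20)
print(len(chunks), len(chunks[0]))  # 50 chunks of 3840 bytes each
```

In a real 0.10 element, gst_buffer_create_sub() could produce the sub-buffers without copying, with each sub-buffer's timestamp and duration derived from its byte offset at the fixed rate.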