[gst-devel] various questions about GStreamer elements

Thirupathiah Annapureddy writetothiru at gmail.com
Thu Jan 13 06:08:21 CET 2005


Hi All,
In continuation of my previous set of questions:

1. How can an application introduce data into the pipeline with
minimal latency? We need this because the application uses the RTP
protocol to receive multiplexed audio and video data; it does all the
demultiplexing itself and sends the data to the respective handlers.
For audio, since we have hardware-accelerated decode/encode elements
in GStreamer, the application needs a way to pass the data to the
decode element. One way I found is for the application to use fdsrc
with its fd set to a custom file that the application keeps writing
to, so that fdsrc reads it and sends the data downstream; FIFOs could
be used the same way (a rough sketch follows below). But this
approach adds a lot of latency. Are there any other means of passing
memory buffers from the application to the pipeline directly?
Our long-term plan is to develop rtpsrc and demultiplexing elements
in GStreamer itself. Any comments on this?
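
For reference, here is a rough, untested sketch of the pipe(2)/fdsrc
variant I described, against the 0.8 API as I understand it (the
"mad" decoder and "osssink" sink are just stand-ins for our
hardware-accelerated elements):

#include <unistd.h>
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GstElement *pipeline, *src, *dec, *sink;
  int fds[2];

  gst_init (&argc, &argv);

  /* the application writes demultiplexed audio payloads into fds[1];
   * fdsrc reads fds[0] and pushes the data downstream */
  pipe (fds);

  pipeline = gst_pipeline_new ("rtp-audio");
  src = gst_element_factory_make ("fdsrc", "src");
  dec = gst_element_factory_make ("mad", "dec");   /* stand-in decoder */
  sink = gst_element_factory_make ("osssink", "sink");
  g_object_set (G_OBJECT (src), "fd", fds[0], NULL);

  gst_bin_add_many (GST_BIN (pipeline), src, dec, sink, NULL);
  gst_element_link_many (src, dec, sink, NULL);
  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* in the real application a separate RTP-handler thread would do:
   *   write (fds[1], payload, payload_len);
   */

  while (gst_bin_iterate (GST_BIN (pipeline)))
    ;                                  /* 0.8-style iteration loop */

  gst_element_set_state (pipeline, GST_STATE_NULL);
  return 0;
}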

2. Suppose a pipeline contains the elements X1, X2, and X3. What is
the order of state-change notification?

3. Are there any ready-made queues in GStreamer that an element can
use to queue buffers and the like? (A sketch of what I mean follows
below.)
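
For example, is something like GLib's GQueue the intended way for an
element to keep a private backlog of buffers? A rough, untested
sketch of what I mean (the buffer size is arbitrary):

#include <glib.h>
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GQueue *backlog;
  GstBuffer *buf;

  gst_init (&argc, &argv);
  backlog = g_queue_new ();

  /* e.g. inside a _chain function: stash the incoming buffer */
  buf = gst_buffer_new_and_alloc (1024);
  g_queue_push_tail (backlog, buf);

  /* later: drain in FIFO order */
  while (!g_queue_is_empty (backlog)) {
    GstBuffer *b = g_queue_pop_head (backlog);
    g_print ("dequeued buffer of %u bytes\n", GST_BUFFER_SIZE (b));
    gst_buffer_unref (b);
  }

  g_queue_free (backlog);
  return 0;
}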

4. In DirectShow there are pull sources and push sources; live data
is normally introduced into the filter graph by a push source. In
GStreamer, a source element can be of the _get, _loop, or _chain
type. I think the _get function can be thought of as a pull source;
is that right? What about a push-style source? Does calling
gst_pad_push(xxxsrc->srcpad, data) make the element a push source?
(A rough sketch of what I imagine follows below.)
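
Here is a rough, untested sketch of what I imagine a loop-based push
source looks like in the 0.8 API (the my_src naming and buffer size
are hypothetical):

#include <gst/gst.h>

/* loop function of a hypothetical live source element */
static void
my_src_loop (GstElement * element)
{
  GstPad *srcpad = gst_element_get_pad (element, "src");
  GstBuffer *buf;

  /* produce one buffer of live data per iteration */
  buf = gst_buffer_new_and_alloc (4096);
  /* ... fill buf from the capture hardware ... */

  /* pushing downstream of its own accord is what makes the element
   * a push-style source */
  gst_pad_push (srcpad, GST_DATA (buf));
}

/* registered from the element's init function:
 *   gst_element_set_loop_function (GST_ELEMENT (src), my_src_loop);
 */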

Thanks in advance,
A. Thirupathi Reddy


On Wed, 12 Jan 2005 14:36:11 +0530, Thirupathiah Annapureddy
<writetothiru at gmail.com> wrote:
> Hi Ronald,
> First of all, thanks for your comments.
> 
> > I've never heard of this before, but a quick Google shows that this is
> > some efficient type of noise reduction for input, right? 
> Yes. While capturing a stream, it is possible that another stream is
> being played back, so the microphone can pick up some of the playback
> stream. The captured stream would then be the actual capture plus
> noise (the playback stream). AEC in the hardware subtracts the
> playback stream from the captured stream so that we get the actual
> capture stream.
> 
> Any other comments on how it could be supported?
> 
> Thanks in advance,
> A. Thirupathi Reddy
>



