<div class="moz-cite-prefix">On 09/19/2012 07:03 PM, Alexander
Botero wrote:<br>
</div>
> Stefan, I took your "encodebin" suggestion very literally and made some
> tests with it.
>
> I learned that it is possible to create a media (encoding) profile at
> runtime.
> I even tried to drop the container from an Ogg/Vorbis recording, but the
> file was not playable ;-)
> I also tested "encodebin" with your GstTee pipeline.
>
> I have not managed to adjust the internal clock of the AAC, Ogg Vorbis
> and SPX formats.
> They still "remember" the silent parts. But these tests have been very
> interesting to do.

For these formats you will need to send a new-segment event to inform the
elements downstream about the gap. You should find an example inside
camerabin2 in gst-plugins-bad.
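
Just to illustrate the mechanics of that event (a rough sketch only, not the
camerabin2 code; the function name, the "valve" handle and the bookkeeping of
the resume position are made up for the example):

#include <gst/gst.h>

/* Re-open the valve after a silent gap and announce a new segment so that
 * the encoder/muxer does not account for the skipped time.  resume_pos is
 * whatever position was remembered when the valve was closed. */
static void
reopen_valve_and_resegment (GstElement *valve, GstClockTime resume_pos)
{
  GstPad *srcpad;
  GstEvent *event;

  /* let audio flow into encodebin again */
  g_object_set (valve, "drop", FALSE, NULL);

  /* GStreamer 0.10 new-segment: update=TRUE, rate=1.0, format=TIME,
   * stop=-1 (open ended), start/position at the remembered position */
  event = gst_event_new_new_segment (TRUE, 1.0, GST_FORMAT_TIME,
      (gint64) resume_pos, -1, (gint64) resume_pos);

  /* push it downstream from the valve's source pad so that the encoder
   * and muxer see it before the next buffers */
  srcpad = gst_element_get_static_pad (valve, "src");
  gst_pad_push_event (srcpad, event);
  gst_object_unref (srcpad);
}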

Stefan

> Current solution / I settled on this solution:
> I have decided to use the VADer element in our (GPL'ed) audio-recorder
> because it has a very good algorithm for voice activity detection and
> noise filtering.
> I will now bake it into the "audio-recorder" project so it gets compiled
> and packaged.
>
> The "silence" detection in the recorder will become much simpler. The
> current, old version creates two (2) long pipelines: one for the silence
> detection and a second (similar) pipeline for recording. This is an awful
> waste of resources.
>
> But of course, the new recorder must live with the above problem with the
> AAC, OGG and SPX formats. That's life!
> ------------
>
> static GstElement *create_pipeline() {
>     GstElement *pipeline = gst_pipeline_new("a simple recorder");
>
>     GstElement *src = gst_element_factory_make("pulsesrc", "source");
>     g_object_set(G_OBJECT(src), "device", "alsa_input.usb-Creative_....", NULL);
>
>     GstElement *filesink = gst_element_factory_make("filesink", "filesink");
>     g_object_set(G_OBJECT(filesink), "location", "test.xxx", NULL);
>
>     GstElement *queue = gst_element_factory_make("queue", NULL);
>     GstElement *ebin = gst_element_factory_make("encodebin", NULL);
>
>     /* build the encoding profile at runtime and hand it to encodebin */
>     GstEncodingProfile *prof = create_ogg_vorbis_profile(1, NULL);
>     g_object_set(ebin, "profile", prof, NULL);
>     gst_encoding_profile_unref(prof);
>
>     gst_bin_add_many(GST_BIN(pipeline), src, queue, ebin, filesink, NULL);
>
>     if (!gst_element_link_many(src, queue, ebin, filesink, NULL)) {
>         g_printerr("Cannot link many.\n");
>     }
>
>     /* watch the bus for element messages (e.g. from a "level" element) */
>     GstBus *bus = gst_pipeline_get_bus(GST_PIPELINE(pipeline));
>     gst_bus_add_signal_watch(bus);
>     g_signal_connect(bus, "message::element", G_CALLBACK(level_message_cb), NULL);
>     gst_object_unref(bus);
>
>     return pipeline;
> }
>
> static GstEncodingProfile *create_ogg_vorbis_profile(guint presence, gchar *preset) {
>     /* I copied this from GStreamer's test module.  It seems to be very
>      * easy to create new profiles. */
>     GstEncodingContainerProfile *cprof;
>     GstCaps *ogg, *vorbis;
>
>     ogg = gst_caps_new_simple("application/ogg", NULL);
>     cprof = gst_encoding_container_profile_new((gchar *) "oggprofile", NULL, ogg, NULL);
>     gst_caps_unref(ogg);
>
>     vorbis = gst_caps_new_simple("audio/x-vorbis", NULL);
>     gst_encoding_container_profile_add_profile(cprof,
>         (GstEncodingProfile *) gst_encoding_audio_profile_new(vorbis, preset, NULL, presence));
>     gst_caps_unref(vorbis);
>
>     /* vorbisenc accepts:
>      *   audio/x-raw-float, rate=(int)[ 1, 200000 ], channels=(int)[ 1, 255 ],
>      *   endianness=(int)1234, width=(int)32
>      *
>      * caps = gst_caps_new_simple("audio/x-raw-float",
>      *            "rate",       G_TYPE_INT, 8000,
>      *            "channels",   G_TYPE_INT, (gint)1,
>      *            "endianness", G_TYPE_INT, (gint)1234,
>      *            "width",      G_TYPE_INT, (gint)8, NULL);
>      */
>
>     return (GstEncodingProfile *) cprof;
> }
>
> Kindly
>   Osmo Antero
>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
You could do something like this:<br>
autoaudiosrc ! level ! tee name=t ! queue ! autoaudiosink t.
! queue ! valve ! encodebin ! filesink<br>
<br>
when the level drops below a threshold, you close the valve
and remember the position. When the level gets above the
threshold again, you open he valve (and eventually push a
newsegment event).<span><font color="#888888"><br>
<br>
Stefan<br>
<br>
</font></span></div>
</blockquote>
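
The level/valve handling quoted above could be done in the
"message::element" callback along these lines (again just a sketch, not code
from this thread: the element name "valve0", the -40 dB threshold and passing
the pipeline as user_data are assumptions):

#include <gst/gst.h>

static void
level_message_cb (GstBus *bus, GstMessage *msg, gpointer user_data)
{
  GstElement *pipeline = GST_ELEMENT (user_data);
  const GstStructure *s = gst_message_get_structure (msg);
  const GValue *value;
  GValueArray *rms_arr;
  GstElement *valve;
  gdouble rms_db;

  if (s == NULL || !gst_structure_has_name (s, "level"))
    return;

  /* in 0.10 the "rms" field is a GValueArray with one double per channel */
  value = gst_structure_get_value (s, "rms");
  rms_arr = (GValueArray *) g_value_get_boxed (value);
  rms_db = g_value_get_double (g_value_array_get_nth (rms_arr, 0));

  valve = gst_bin_get_by_name (GST_BIN (pipeline), "valve0");
  if (valve != NULL) {
    /* drop buffers while the signal stays below the threshold; let them
     * through again (and possibly send a new-segment event) once the
     * level recovers */
    g_object_set (valve, "drop", (gboolean) (rms_db < -40.0), NULL);
    gst_object_unref (valve);
  }
}

This would be connected as in create_pipeline() above, except that the
pipeline is passed as user_data:

  g_signal_connect (bus, "message::element",
      G_CALLBACK (level_message_cb), pipeline);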