Really bad code to critique
Tim-Philipp Müller
t.i.m at zen.co.uk
Thu May 3 00:10:52 PDT 2012
On Tue, 2012-05-01 at 16:55 -0400, Dave Mateer wrote:
Hi Dave,
> Among other things, the fact that I'm testing for audio vs. video pad
> by comparing a string prefix seems fragile and smelly. That can't be
> right.
Well, in this context ([uri]decodebin[2]) it's quite okay. You know the
output is going to be either "raw" audio or "raw" video.
If you want to do it "more" correctly, you could check for "audio/x-raw"
or "video/x-raw" prefixes, or you can check for the full media types
explicitly, but then you need at least four checks: audio/x-raw-int or
audio/x-raw-float for audio (in 0.10; in 0.11/1.0 this is unified as
audio/x-raw) and video/x-raw-rgb or video/x-raw-yuv or possibly even
video/x-raw-gray for video (in 0.10; in 0.11/1.0 this is unified as
video/x-raw).
Checking for media type strings is quite kosher. It's how it's supposed
to be done in this case.
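For example, the prefix variant could look roughly like this (a
sketch, assuming you have the pad's caps in a 'caps' variable, as in
your code below):

  GstStructure *s = gst_caps_get_structure (caps, 0);
  const gchar *media_type = gst_structure_get_name (s);

  if (g_str_has_prefix (media_type, "audio/x-raw")) {
    /* raw audio pad */
  } else if (g_str_has_prefix (media_type, "video/x-raw")) {
    /* raw video pad */
  }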
> Also, it seems like I'm leaking memory in the call to
> gst_bin_get_by_name because the docs say that transfers ownership (not
> just a reference count?). Not sure about that one. I'm sure there are
> many other things as well.
My docs say "Caller owns returned reference", which isn't quite the same
as "transfers ownership" IMHO, but it could be worded better I guess.
The refcount of the element/bin/object returned will be increased, and
you are responsible for calling gst_object_unref() on it when you are
done with it. This is how most element/bin/object related API works in
GStreamer, for thread-safety reasons.
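In other words, the usual get/use/unref pattern applies. A minimal
sketch:

  GstElement *sink = gst_bin_get_by_name (GST_BIN (pipeline), "audio-sink");

  if (sink != NULL) {
    /* ... use sink ... */
    gst_object_unref (sink); /* drop the reference we were given */
  }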
> static void on_pad_added(GstElement *element,
> GstPad *new_pad,
> gpointer user_data) {
> GstCaps *caps = gst_pad_get_caps_reffed(new_pad);
> gchar *name = gst_caps_to_string(caps);
This is fine, and easy to do. Something like this would be using the API
more explicitly:
GstStructure *s = gst_caps_get_structure (caps, 0);
const gchar *media_type = gst_structure_get_name (s);
or:
if (gst_structure_has_name (s, "audio/x-raw-int")
    || gst_structure_has_name (s, "audio/x-raw-float")) {
  g_print ("Audio!\n");
} else if (gst_structure_has_name (s, "video/x-raw-rgb")
    || gst_structure_has_name (s, "video/x-raw-yuv")
    || gst_structure_has_name (s, "video/x-raw-gray")) {
  g_print ("Video!\n");
} else {
  /* something else, ignore */
}
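(gst_structure_has_name() just compares the given string against the
structure's name, so it's equivalent to doing the strcmp() yourself,
only more readable.)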
> GstBin *pipeline = (GstBin*)user_data;
>
> if (g_str_has_prefix(name, "audio")) {
> g_print("audio\n");
>
> // Leaking memory?? gst_bin_get_by_name transfers ownership?
> GstElement *audio_sink = gst_bin_get_by_name(pipeline, "audio-sink");
> if (!audio_sink) {
> throw std::runtime_error("Could not extract audio sink from pipeline");
> }
Yes, you'll need to gst_object_unref (audio_sink) later.
> GstPad *sink_pad = gst_element_get_static_pad(audio_sink, "sink");
> if (!sink_pad) {
> throw std::runtime_error("Could not get sink pad from audio element.");
> }
> if (GST_PAD_LINK_FAILED(gst_pad_link(new_pad, sink_pad))) {
> throw std::runtime_error("Failed to link audio src and sink");
> }
>
> gst_object_unref(sink_pad);
> } else {
> g_print("video\n");
>
> // Leaking memory?? gst_bin_get_by_name transfers ownership?
> GstElement *video_sink = gst_bin_get_by_name(pipeline, "video-sink");
> if (!video_sink) {
> throw std::runtime_error("Could not extract video sink from pipeline");
> }
You'll need to unref video_sink later too.
> GstPad *sink_pad = gst_element_get_static_pad(video_sink, "sink");
> if (!sink_pad) {
> throw std::runtime_error("Could not get sink pad from video element.");
> }
>
> if (GST_PAD_LINK_FAILED(gst_pad_link(new_pad, sink_pad))) {
> throw std::runtime_error("Failed to link video src and sink");
> }
>
> gst_object_unref(sink_pad);
> }
>
> g_free(name);
> gst_caps_unref(caps);
> }
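For completeness, here's roughly what the audio branch would look
like with the missing unref added (a sketch in your exception style,
not compiled; the video branch is analogous):

  GstElement *audio_sink = gst_bin_get_by_name (pipeline, "audio-sink");
  if (!audio_sink) {
    throw std::runtime_error ("Could not extract audio sink from pipeline");
  }

  GstPad *sink_pad = gst_element_get_static_pad (audio_sink, "sink");
  if (!sink_pad) {
    gst_object_unref (audio_sink);
    throw std::runtime_error ("Could not get sink pad from audio element.");
  }

  if (GST_PAD_LINK_FAILED (gst_pad_link (new_pad, sink_pad))) {
    gst_object_unref (sink_pad);
    gst_object_unref (audio_sink);
    throw std::runtime_error ("Failed to link audio src and sink");
  }

  gst_object_unref (sink_pad);
  gst_object_unref (audio_sink); /* drop the ref from gst_bin_get_by_name */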
>
> MainWindow::MainWindow(QWidget *parent)
> : QMainWindow(parent),
> ui(new Ui::MainWindow) {
> ui->setupUi(this);
>
> // Create the top-level pipeline.
> pipeline_ = gst_pipeline_new(NULL);
> if (!pipeline_) {
> throw std::runtime_error("Could not create pipeline");
> }
>
> // Create the decode bin for our video source.
> GstElement *src = gst_element_factory_make("uridecodebin", NULL);
> if (!src) {
> throw std::runtime_error("Could not create uridecodebin");
> }
> g_object_set(src, "uri", "file:///path/to/media.ogg", NULL);
>
> // Add the decode bin to the pipeline.
> if (!gst_bin_add(GST_BIN(pipeline_), src)) {
> throw std::runtime_error("Could not add uridecodebin to pipeline");
> }
>
> // Create the video sink. This will be linked in the pad-added signal handler.
> GstElement *video_sink = gst_element_factory_make("xvimagesink",
> "video-sink");
> if (!video_sink) {
> throw std::runtime_error("Could not create xvimagesink");
> }
> if (!gst_bin_add(GST_BIN(pipeline_), video_sink)) {
> throw std::runtime_error("Could not add video sink to pipeline");
> }
> WId id = ui->video_widget->winId();
> gst_x_overlay_set_window_handle(GST_X_OVERLAY(video_sink), id);
> qApp->syncX();
Out of curiosity, why is the syncX() call required? Just for
comparison: in Gtk you'd need to realize the video widget /
application window before you can get the xid from it, and before
that you'd also need to tell Gtk to create a "native" window for the
video widget (instead of letting it draw onto the parent widget's
window). I guess that's not required here. If you do run into
problems, there are some Qt overlay examples in
gst-plugins-base/tests/examples/.
> // Create the audio sink. This will be linked in the pad-added signal handler.
> GstElement *audio_sink = gst_element_factory_make("autoaudiosink",
> "audio-sink");
> if (!audio_sink) {
> throw std::runtime_error("Could not create autoaudiosink");
> }
> if (!gst_bin_add(GST_BIN(pipeline_), audio_sink)) {
> throw std::runtime_error("Could not add audio sink to pipeline");
> }
>
> // Register our interest in the pad-added signal so we can connect our sinks.
> g_signal_connect(src, "pad-added", G_CALLBACK(on_pad_added), pipeline_);
>
> // Start the playback.
> gst_element_set_state(pipeline_, GST_STATE_PLAYING);
> }
>
> MainWindow::~MainWindow() {
> gst_element_set_state (pipeline_, GST_STATE_NULL);
> gst_object_unref(pipeline_);
> delete ui;
> }
Cheers
-Tim