[gst-devel] Trouble using x264enc with a tee

JonathanHenson jonathan.henson at innovisit.com
Wed Dec 1 17:10:46 CET 2010


I am writing a multiplexed video/audio streaming thread for a video server I
am working on. Since I was having problems, I am testing the pipeline by
dumping the data into a file to make sure it works. The end goal is to use a
multifdsink to send the stream to a socket, so you will notice that the
element named multifdSink is actually a filesink for the time being.

This pipeline works with other encoders, but when I use x264enc as the video
encoder, the pipeline freezes and no data is written to the file. There is a
tee in both the audio and video portions of the pipeline so that other
threads can grab the raw buffers if they need the data; that way only one
thread ever accesses the camera. If I remove the tee from the video branch,
the pipeline works. I have also tested putting an xvimagesink on both
branches of the tee, and both windows get the stream, so I am fairly sure the
tee itself is not the problem. Thanks; a rough gst-launch sketch of the
topology is below, followed by the class implementation.
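
For reference, this is roughly the topology I am trying to build (only a
sketch: fakesink stands in for the two appsinks that other threads pull from,
and the caps match the ones hard-coded in the constructor below):

gst-launch-0.10 avimux name=mux ! filesink location=/home/jonathan/test.avi \
    v4l2src ! video/x-raw-yuv,format=\(fourcc\)I420,width=352,height=288,framerate=25/1 \
    ! textoverlay ! textoverlay ! textoverlay ! tee name=vtee \
    vtee. ! queue ! fakesink \
    vtee. ! queue ! x264enc ! mux. \
    alsasrc ! audio/x-raw-int,channels=2,rate=8000,width=16,depth=16,endianness=1234 \
    ! volume ! tee name=atee \
    atee. ! queue ! fakesink \
    atee. ! queue ! mux.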

/*
 * H264Stream.cpp
 *
 *  Created on: Nov 12, 2010
 *      Author: jonathan
 */

#include "H264Stream.h"

H264Stream::H264Stream() : PThread (1000, NoAutoDeleteThread,
HighestPriority, "H264Stream"),
	encoding(false)
{
	//temporary setting of variables
	width = 352;
	height = 288;
	fps = 25;

	audioChannels = 2;
	audioSampleRate = 8000;
	bitWidth = 16;

	//create pipeline
	h264Pipeline = gst_pipeline_new("h264Pipeline");

	//----------------------------------create videoPipe Elements----------------------------------

	//raw camera source
	v4l2Src = gst_element_factory_make("v4l2src", "v4l2Src");

	//Text Filters
	chanNameFilter = gst_element_factory_make("textoverlay",
"chanNameOverlay");
	osdMessageFilter = gst_element_factory_make("textoverlay", "osdOverlay");
	sessionTimerFilter = gst_element_factory_make("textoverlay",
"sessionTimerOverlay");

	//raw video caps (0x30323449 is the FOURCC code for 'I420')
	GstCaps* rawVideoCaps = gst_caps_new_simple ("video/x-raw-yuv", "format",
GST_TYPE_FOURCC, 0x30323449, "width", G_TYPE_INT, width, "height",
G_TYPE_INT, height,
			  "framerate", GST_TYPE_FRACTION, fps, 1, NULL);

	GstCaps* h264VideoCaps = gst_caps_new_simple ("video/x-h264","framerate",
GST_TYPE_FRACTION, fps, 1, "width", G_TYPE_INT, width,
			"height", G_TYPE_INT, height, NULL);

	//video tee
	videoTee = gst_element_factory_make("tee", "videoTee");

	//create tee src 1 receiver (videoSink)
	videoSink = gst_element_factory_make("appsink", "videoSink");

	//create tee src 2 receiver (videoQueue)
	videoQueue = gst_element_factory_make("queue", "videoQueue");
	videoAppSinkQueue = gst_element_factory_make("queue", "videoAppSinkQueue");

	//create h264 Encoder
	videoEncoder = gst_element_factory_make("x264enc", "h264Enc");

	//----------------------------------create audioPipe Elements----------------------------------

	//create Alsa Source
	alsaSrc = gst_element_factory_make("alsasrc", "alsaSrc");

	//create raw Audio Caps
	GstCaps* rawAudioCaps = gst_caps_new_simple("audio/x-raw-int", "channels",
G_TYPE_INT, audioChannels, "rate", G_TYPE_INT, audioSampleRate,
				"width", G_TYPE_INT, bitWidth, "depth", G_TYPE_INT, bitWidth,
"endianness", G_TYPE_INT, 1234, NULL);

	volume = gst_element_factory_make("volume", "volume");
	//create audio tee
	soundTee = gst_element_factory_make("tee", "audioTee");

	//create element to receive tee source #1 (audioSink)
	soundSink = gst_element_factory_make("appsink", "audioSink");

	//create element to receive tee source #2 (audioQueue)
	soundQueue = gst_element_factory_make("queue", "audioQueue");
	soundAppSinkQueue = gst_element_factory_make("queue", "soundAppSinkQueue");

	//create an audio encoder to use when ready.
	soundEncoder = gst_element_factory_make("ffenc_mp2", "audioEncoder");

	//----------------------------------Create Multiplexing Elements----------------------------------

	//create multiplexer (currently avi)
	multiplexer =  gst_element_factory_make("avimux", "multiplexer");

	//create multifdsink
	multifdSink = gst_element_factory_make("filesink", "multiFDSink");
	g_object_set (G_OBJECT (multifdSink), "location", "/home/jonathan/test.avi"
, NULL);


//-----------------------------------------------------LINKERUP!----------------------------------------------------------------------------------------------

	//add all elements (except for the audio encoder, as it isn't used yet) to the pipeline
	gst_bin_add_many (GST_BIN (h264Pipeline), v4l2Src, chanNameFilter,
osdMessageFilter, sessionTimerFilter, videoQueue, videoAppSinkQueue,
videoTee, videoSink, videoEncoder,
			alsaSrc, volume, soundTee, soundSink, soundQueue, soundAppSinkQueue,
multiplexer, multifdSink, NULL);

	//link video source with text overlay surfaces
	bool link = gst_element_link_filtered(v4l2Src, chanNameFilter,
rawVideoCaps);
	link = gst_element_link_filtered(chanNameFilter, osdMessageFilter,
rawVideoCaps);
	link = gst_element_link_filtered(osdMessageFilter, sessionTimerFilter,
rawVideoCaps);

	//link raw video with text to tee
	link = gst_element_link_filtered(sessionTimerFilter, videoTee,
rawVideoCaps);

	//link video Tee to both videoSink and videoEncoder. To do this, we must request pads.

	//this pad is for the tee -> videoSink connection
	GstPad* videoSrcAppSinkPad = gst_element_get_request_pad(videoTee,
"src%d");

	//this pad is for the tee -> queue connection
	GstPad* videoSrcH264Pad = gst_element_get_request_pad(videoTee, "src%d");

	//get static pads for the sinks receiving the tee
	GstPad* videoSinkAppSinkPad = gst_element_get_static_pad(videoAppSinkQueue,
"sink");
	GstPad* videoSinkH264Pad = gst_element_get_static_pad(videoQueue, "sink");

	//link the pads
	GstPadLinkReturn padLink;
	padLink = gst_pad_link(videoSrcAppSinkPad, videoSinkAppSinkPad);
	padLink = gst_pad_link(videoSrcH264Pad, videoSinkH264Pad);

	gst_object_unref (GST_OBJECT (videoSrcAppSinkPad));
	gst_object_unref (GST_OBJECT (videoSrcH264Pad));
	gst_object_unref (GST_OBJECT (videoSinkAppSinkPad));
	gst_object_unref (GST_OBJECT (videoSinkH264Pad));

	link = gst_element_link_filtered(videoAppSinkQueue, videoSink,
rawVideoCaps);
	link = gst_element_link_filtered(videoQueue, videoEncoder, rawVideoCaps);

	//We are done with the video part of the pipe for now. Now we link the sound elements together.

	//link the alsa source to the volume element
	link = gst_element_link_filtered(alsaSrc, volume, rawAudioCaps);

	//link output from volume to soundTee
	link = gst_element_link_filtered(volume, soundTee, rawAudioCaps);

	//link audio Tee to both audioSink and multiplexer (when we do audio encoding we can link to audioEncoder instead). To do this, we must request pads.

	//this pad is for the tee -> audioSink connection
	GstPad* audioSrcAppSinkPad = gst_element_get_request_pad(soundTee,
"src%d");

	//this pad is for the tee -> queue connection
	GstPad* audioSrcQueuePad = gst_element_get_request_pad(soundTee, "src%d");

	//get pads for the sinks receiving the tee
	GstPad* audioSinkAppSinkPad = gst_element_get_static_pad(soundAppSinkQueue,
"sink");
	GstPad* audioSinkQueuePad = gst_element_get_static_pad(soundQueue, "sink");

	//link the pads
	padLink = gst_pad_link(audioSrcAppSinkPad, audioSinkAppSinkPad);
	padLink = gst_pad_link(audioSrcQueuePad, audioSinkQueuePad);

	gst_object_unref (GST_OBJECT (audioSrcAppSinkPad));
	gst_object_unref (GST_OBJECT (audioSrcQueuePad));
	gst_object_unref (GST_OBJECT (audioSinkAppSinkPad));
	gst_object_unref (GST_OBJECT (audioSinkQueuePad));

	link = gst_element_link_filtered(soundAppSinkQueue, soundSink,
rawAudioCaps);

	//Now we multiplex the two parallel streams. To do this, we must request pads from the multiplexer.
	//this pad is for the audioQueue -> multiplex connection
	GstPad* audioSinkPad = gst_element_get_request_pad(multiplexer,
"audio_%d");

	//this pad is for the tee -> queue connection
	GstPad* videoSinkPad = gst_element_get_request_pad(multiplexer,
"video_%d");

	//get pads for the sources sending to the multiplexer
	GstPad* audioSrcPad = gst_element_get_static_pad(soundQueue, "src");
	GstPad* videoSrcPad = gst_element_get_static_pad(videoEncoder, "src");

	//do h264 caps negotiation
	//gst_pad_set_caps(videoSrcPad, h264VideoCaps);
	//gst_pad_set_caps(videoSinkPad, h264VideoCaps);

	//link the pads
	padLink = gst_pad_link(audioSrcPad, audioSinkPad);
	padLink = gst_pad_link(videoSrcPad, videoSinkPad);

	gst_object_unref (GST_OBJECT (audioSrcPad));
	gst_object_unref (GST_OBJECT (audioSinkPad));
	gst_object_unref (GST_OBJECT (videoSrcPad));
	gst_object_unref (GST_OBJECT (videoSinkPad));

	//finally we link the multiplexed stream to the multifdsink
	link = gst_element_link(multiplexer, multifdSink);

	gst_caps_unref(rawVideoCaps);
	gst_caps_unref(rawAudioCaps);
	gst_caps_unref(h264VideoCaps);

}

H264Stream::~H264Stream()
{
	for(std::map<int, ClientSocket*>::iterator pair = streamHandles.begin(); pair != streamHandles.end(); pair++)
	{
		g_signal_emit_by_name(multifdSink, "remove", pair->first, NULL);
		delete pair->second;
	}

	streamHandles.clear();

	gst_element_set_state (h264Pipeline, GST_STATE_NULL);
	gst_object_unref (GST_OBJECT (h264Pipeline));
}

void H264Stream::Main()
{
	while(true)
	{
		PWaitAndSignal m(mutex);
		if(encoding)
		{
		  OSDSettings osd;

		  if(osd.getShowChanName())
		  {
			  g_object_set (G_OBJECT (chanNameFilter), "silent", false , NULL);
			  g_object_set (G_OBJECT (chanNameFilter), "text",
osd.getChanName().c_str() , NULL);
			  g_object_set (G_OBJECT (chanNameFilter), "halignment",
osd.getChanNameHAlign() , NULL);
			  g_object_set (G_OBJECT (chanNameFilter), "valignment",
osd.getChanNameVAlign() , NULL);
			  g_object_set (G_OBJECT (chanNameFilter), "wrap-mode",
osd.getChanNameWordWrapMode() , NULL);
			  g_object_set (G_OBJECT (chanNameFilter), "font-desc",
osd.getChanNameFont().c_str() , NULL);
			  g_object_set (G_OBJECT (chanNameFilter), "shaded-background",
osd.getChanNameShadow() , NULL);
		  }
		  else
		  {
			  g_object_set (G_OBJECT (chanNameFilter), "text", "" , NULL);
			  g_object_set (G_OBJECT (chanNameFilter), "silent", true , NULL);
		  }

		  if(osd.getShowOSDMessage())
		  {
			  g_object_set (G_OBJECT (osdMessageFilter), "silent", false , NULL);
			  g_object_set (G_OBJECT (osdMessageFilter), "text",
osd.getOSDMessage().c_str() , NULL);
			  g_object_set (G_OBJECT (osdMessageFilter), "halignment",
osd.getOSDMessageHAlign() , NULL);
			  g_object_set (G_OBJECT (osdMessageFilter), "valignment",
osd.getOSDMessageVAlign() , NULL);
			  g_object_set (G_OBJECT (osdMessageFilter), "wrap-mode",
osd.getOSDMessageWordWrapMode() , NULL);
			  g_object_set (G_OBJECT (osdMessageFilter), "font-desc",
osd.getOSDMessageFont().c_str() , NULL);
			  g_object_set (G_OBJECT (osdMessageFilter), "shaded-background",
osd.getOSDMessageShadow() , NULL);
		  }
		  else
		  {
			  g_object_set (G_OBJECT (osdMessageFilter), "text", "" , NULL);
			  g_object_set (G_OBJECT (osdMessageFilter), "silent", true , NULL);
		  }

		  if(osd.getShowSessionTimer())
		  {
			  g_object_set (G_OBJECT (sessionTimerFilter), "silent", false , NULL);
			  g_object_set (G_OBJECT (sessionTimerFilter), "text",
osd.getSessionTimer().c_str() , NULL);
			  g_object_set (G_OBJECT (sessionTimerFilter), "halignment",
osd.getSessionTimerHAlign() , NULL);
			  g_object_set (G_OBJECT (sessionTimerFilter), "valignment",
osd.getSessionTimerVAlign() , NULL);
			  g_object_set (G_OBJECT (sessionTimerFilter), "wrap-mode",
osd.getSessionTimerWordWrapMode() , NULL);
			  g_object_set (G_OBJECT (sessionTimerFilter), "font-desc",
osd.getSessionTimerFont().c_str() , NULL);
			  g_object_set (G_OBJECT (sessionTimerFilter), "shaded-background",
osd.getSessionTimerShadow() , NULL);

		  }
		  else
		  {
			  g_object_set (G_OBJECT (sessionTimerFilter), "text", "" , NULL);
			  g_object_set (G_OBJECT (sessionTimerFilter), "silent", true , NULL);
		  }

			this->Sleep(1000);
		}
	}
}

void H264Stream::RemoveStream(int handle)
{
	if(handle != -1)
	{
		g_signal_emit_by_name(multifdSink, "remove", handle, G_TYPE_NONE);
		delete streamHandles[handle];
		streamHandles.erase(handle);
	}

	if(!streamHandles.size())
		StopEncoding();
}

bool H264Stream::CheckAndBeginEncoding()
{
	if(!encoding)
	{
		GstStateChangeReturn stateRet;
		stateRet = gst_element_set_state (h264Pipeline, GST_STATE_PLAYING);

		GstState state;

		stateRet = gst_element_get_state(h264Pipeline, &state, NULL, GST_SECOND);
		encoding = true;
		this->Restart();
		return true;
	}
	else
		return true;
}

bool H264Stream::StopEncoding()
{
	gst_element_set_state (h264Pipeline, GST_STATE_READY);

	encoding = false;
	return true;
}

int H264Stream::AddStreamOutput(string ip, string port)
{
	PWaitAndSignal m(mutex);
	if(CheckAndBeginEncoding())
	{
		ClientSocket* socket = new ClientSocket(ip, atoi(port.c_str()));

		int fd = socket->getDescriptor();

		if(fd != -1)
		{
			//g_signal_emit_by_name(gst_app.multiFDSink, "add", fd, G_TYPE_NONE);
			streamHandles.insert(std::pair<int, ClientSocket*>(fd, socket));
			return fd;
		}
	}
	return -1;
}

GstBuffer* H264Stream::GetAudioBuffer()
{
	PWaitAndSignal m(mutex);

	 if (soundSink != NULL) {
		 return gst_app_sink_pull_buffer (GST_APP_SINK (soundSink));
	 }
	 return NULL;
}

GstBuffer* H264Stream::GetVideoBuffer()
{
	PWaitAndSignal m(mutex);

	 if (videoSink != NULL) {
		 return gst_app_sink_pull_buffer (GST_APP_SINK (videoSink));
	 }
	 return NULL;
}

GstCaps* H264Stream::GetCurrentAudioCaps()
{
	PWaitAndSignal m(mutex);

	 if (soundSink != NULL) {
		 return gst_app_sink_get_caps (GST_APP_SINK (soundSink));
	 }
	 return NULL;
}

GstCaps* H264Stream::GetCurrentVideoCaps()
{
	PWaitAndSignal m(mutex);

	 if (videoSink != NULL) {
		 return gst_app_sink_get_caps (GST_APP_SINK (videoSink));
	 }
	 return NULL;
}

bool H264Stream::SetSessionAudioCaps(GstCaps* caps)
{
	 PWaitAndSignal m(mutex);

	 if (soundSink != NULL) {
		 gst_app_sink_set_caps (GST_APP_SINK (soundSink), caps);
		 gst_caps_unref(caps);
		 return true;
	 }
	 return false;
}

bool H264Stream::SetSessionVideoCaps(GstCaps* caps)
{
	 PWaitAndSignal m(mutex);

	 if (videoSink != NULL) {
		 gst_app_sink_set_caps (GST_APP_SINK (videoSink), caps);
		 gst_caps_unref(caps);
		 return true;
	 }
	 return false;
}

void H264Stream::SetVolume(gfloat value)
{
	g_object_set(G_OBJECT (volume), "volume", value, NULL);
}


Here is the class definition:

#ifndef H264STREAM_H_
#define H264STREAM_H_

#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include 
#include "OSDSettings.h"
#include "AudioSettings.h"
#include "Communications.h"
#include "common.h"
#include "services.h"

class H264Stream : public PThread {
public:
	H264Stream();
	virtual ~H264Stream();
	/*
	 * The user is responsible for renegotiating caps if they differ from the
	 * configuration file, e.g. after receiving H323 caps.
	 * The user is also responsible for unrefing this buffer.
	 */
	GstBuffer* GetAudioBuffer();

	/*
	 * Current caps, in case renegotiation is necessary (for H323 and SIP caps
	 * negotiations).
	 */
	GstCaps* GetCurrentAudioCaps();

	/*
	 * Sets the caps for the Audio Buffer (for use by H323 and SIP server)
	 */
	bool SetSessionAudioCaps(GstCaps* caps);

	/*
	 * The user is responsible for renegotiating caps if they differ from the
	 * configuration file, e.g. after receiving H323 caps.
	 * The user is also responsible for unrefing this buffer.
	 */
	GstBuffer* GetVideoBuffer();

	/*
	 * Current caps, in case renegotiation is necessary (for H323 and SIP caps
	 * negotiations).
	 */
	GstCaps* GetCurrentVideoCaps();

	/*
	 * Sets the caps for the Video Buffer (for use by H323 and SIP server)
	 */
	bool SetSessionVideoCaps(GstCaps* caps);

	/*
	 * Sends output stream to host at port
	 */
	int AddStreamOutput(string host, string port);

	/*
	 * Remove file descriptor from output stream.
	 */
	void RemoveStream(int fd);

	void SetVolume(gfloat volume);

	bool CheckAndBeginEncoding();

protected:
	 virtual void Main();

private:
	Ekiga::ServiceCore core;
	bool StopEncoding();
	std::map<int, ClientSocket*> streamHandles;
    unsigned size;
    unsigned height;
    unsigned width;
    unsigned fps;
    unsigned audioChannels;
    unsigned audioSampleRate;
    unsigned bitWidth;
    bool encoding;
    PMutex mutex;

    //pipeline
    GstElement *h264Pipeline;

    //Sound elements
    GstElement *alsaSrc, *volume, *soundTee, *soundSink, *soundAppSinkQueue,
*soundQueue, *soundEncoder;

    //video elements
    GstElement *v4l2Src, *chanNameFilter, *osdMessageFilter,
*sessionTimerFilter, *videoTee, *videoSink, *videoAppSinkQueue, *videoQueue,
*videoEncoder;

    //multiplexed elements
    GstElement *multiplexer, *multifdSink;
};

#endif /* H264STREAM_H_ */
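
For completeness, this is roughly how the other threads are meant to consume
the appsink branches (the consumer function below is only a hypothetical
sketch, but GetVideoBuffer() and the unref requirement are the ones
documented in the header above):

//Hypothetical consumer thread body -- not part of the class itself.
//It pulls raw video buffers from the appsink branch and releases them.
void ConsumeVideoBuffers(H264Stream& stream)
{
	for (;;)
	{
		GstBuffer* buf = stream.GetVideoBuffer();	//blocks in gst_app_sink_pull_buffer()
		if (buf == NULL)
			break;					//appsink not available or stream shut down

		//...process GST_BUFFER_DATA(buf) / GST_BUFFER_SIZE(buf) here...

		gst_buffer_unref(buf);				//the caller is responsible for unrefing
	}
}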
