combine demux outputs into a single video sink

omargr90 ogonz117 at fiu.edu
Fri Jun 7 19:48:14 UTC 2019


Hello All,

I am new to the forum, so I hope my question is not too obvious. I tried looking
everywhere before posting, but I guess the question is too specific and I was
not able to find an answer.

I am trying to create a pipeline like the one in the picture below:
<http://gstreamer-devel.966125.n4.nabble.com/file/t378934/gst_pipeline.png> 

The gst-launch command I used to build this pipeline is:
gst-launch-1.0 -v filesrc location=/dos/sample.mpg ! tsparse ! tsdemux
name=dmux    \
dmux. ! queue ! h264parse ! imxvpudec ! textoverlay name=overlay !
imxipuvideosink  \
dmux. ! queue ! meta/x-klv ! klvtotext ! overlay.text_sink                       
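For reference, the overlay element in that command is the stock textoverlay, and
as far as I can tell from gst-inspect it really does expose the text_sink pad I
am trying to link to:

gst-inspect-1.0 textoverlay | grep -B1 -A3 "text_sink"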

The purpose of this pipeline is to:
1- receive MPEG-TS packets with H.264 video and KLV data muxed together, and
2- output the H.264 video with the translated KLV overlaid on top of the video feed.

GStreamer log when running this pipeline:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
/GstPipeline:pipeline0/MpegTSParse2:mpegtsparse2-0.GstPad:src: caps =
video/mpegts, systemstream=(boolean)true, packetsize=(int)188
/GstPipeline:pipeline0/GstTSDemux:dmux.GstPad:sink: caps = video/mpegts,
systemstream=(boolean)true, packetsize=(int)188
/GstPipeline:pipeline0/GstQueue:queue1.GstPad:sink: caps = meta/x-klv,
parsed=(boolean)true
/GstPipeline:pipeline0/GstQueue:queue1.GstPad:src: caps = meta/x-klv,
parsed=(boolean)true
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:src: caps =
meta/x-klv, parsed=(boolean)true
/GstPipeline:pipeline0/GstKlvToText:klvtotext0.GstPad:klv_sink: caps =
meta/x-klv, parsed=(boolean)true
/GstPipeline:pipeline0/GstCapsFilter:capsfilter0.GstPad:sink: caps =
meta/x-klv, parsed=(boolean)true
/GstPipeline:pipeline0/GstQueue:queue0.GstPad:sink: caps = video/x-h264,
stream-format=(string)byte-stream, alignment=(string)nal
/GstPipeline:pipeline0/GstQueue:queue0.GstPad:src: caps = video/x-h264,
stream-format=(string)byte-stream, alignment=(string)nal
/GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:sink: caps =
video/x-h264, stream-format=(string)byte-stream, alignment=(string)nal
/GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:src: caps =
video/x-h264, stream-format=(string)byte-stream, alignment=(string)au,
width=(int)1280, height=(int)720, framerate=(fraction)0/1,
interlace-mode=(string)progressive, parsed=(boolean)true,
profile=(string)main, level=(string)4.1
[INFO]  bitstreamMode 1, chromaInterleave 0, mapType 0, tiled2LinearEnable 0
/GstPipeline:pipeline0/GstImxVpuDecoder:imxvpudecoder0.GstPad:sink: caps =
video/x-h264, stream-format=(string)byte-stream, alignment=(string)au,
width=(int)1280, height=(int)720, framerate=(fraction)0/1,
interlace-mode=(string)progressive, parsed=(boolean)true,
profile=(string)main, level=(string)4.1
/GstPipeline:pipeline0/GstImxVpuDecoder:imxvpudecoder0.GstPad:src: caps =
video/x-raw, format=(string)I420, width=(int)1280, height=(int)720,
interlace-mode=(string)progressive, multiview-mode=(string)mono,
multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono,
pixel-aspect-ratio=(fraction)1/1, chroma-site=(string)mpeg2,
colorimetry=(string)bt709, framerate=(fraction)0/1
/GstPipeline:pipeline0/GstTextOverlay:overlay.GstPad:src: caps =
video/x-raw, format=(string)I420, width=(int)1280, height=(int)720,
interlace-mode=(string)progressive, multiview-mode=(string)mono,
multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono,
pixel-aspect-ratio=(fraction)1/1, chroma-site=(string)mpeg2,
colorimetry=(string)bt709, framerate=(fraction)0/1
/GstPipeline:pipeline0/GstImxIpuVideoSink:imxipuvideosink0.GstPad:sink: caps
= video/x-raw, format=(string)I420, width=(int)1280, height=(int)720,
interlace-mode=(string)progressive, multiview-mode=(string)mono,
multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono,
pixel-aspect-ratio=(fraction)1/1, chroma-site=(string)mpeg2,
colorimetry=(string)bt709, framerate=(fraction)0/1
/GstPipeline:pipeline0/GstTextOverlay:overlay.GstPad:video_sink: caps =
video/x-raw, format=(string)I420, width=(int)1280, height=(int)720,
interlace-mode=(string)progressive, multiview-mode=(string)mono,
multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono,
pixel-aspect-ratio=(fraction)1/1, chroma-site=(string)mpeg2,
colorimetry=(string)bt709, framerate=(fraction)0/1

When I run the pipeline everything seems fine until the klvtotext element has to
be linked to the overlay's text_sink pad; that link never happens. I think the
issue is that each demuxed queue needs an independent sink.
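In case it helps, my plan is to rerun the same command with pad-linking debug
output enabled (GST_PADS is GStreamer's pad/linking debug category), which should
show exactly where the link to overlay.text_sink gets refused:

GST_DEBUG=GST_PADS:6 gst-launch-1.0 -v filesrc location=/dos/sample.mpg ! tsparse ! tsdemux name=dmux \
dmux. ! queue ! h264parse ! imxvpudec ! textoverlay name=overlay ! imxipuvideosink \
dmux. ! queue ! meta/x-klv ! klvtotext ! overlay.text_sink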

How can I make this work? Any ideas?

 


