How does GStreamer interface with H264 hardware encoders and create videos?

Rand Graham rand.graham at zenith.com
Thu Apr 26 14:34:40 UTC 2018


Hello,

I just wanted to comment on this pipeline:

sudo gst-launch-1.0 -e v4l2src device=/dev/video3 ! videoconvert ! video/x-raw,width=1920,height=1080 ! tee name=t ! queue ! v4l2h264enc extra-controls="controls,h264_profile=4,video_bitrate=2000000;" ! h264parse ! mp4mux ! filesink location=video.mp4 t. ! queue ! multifilesink location=file%1d.raw

This is a rather complicated pipeline with a couple of potential bottlenecks. The main one I would worry about is the file system.

Beyond potential bottleneck issues, I would say the following.


1)      I don’t know exactly what you are expecting as far as what you call raw video goes. The pipeline has a videoconvert element in it. To me this means you would not be capturing the raw camera output but rather the output of videoconvert.

2)      If I understand the pipeline, the video saved by the multifilesink is not using the h264 encoder. This is because the tee comes before the h264 encoder. The video in the filesink element is getting the output of the h264 encoder. Your original email was asking about whether or not gstreamer was using the hardware encoder. Based on the pipeline above, it appears to me that in one case gstreamer would use the hardware encoder and in the other case it would not.
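To make the tee placement concrete, here is a sketch of the two variants (hypothetical pipelines derived from the one above, with the encoder controls trimmed for brevity; untested on this board):

```shell
# Tee BEFORE the encoder (as in the pipeline above): multifilesink gets
# unencoded post-videoconvert frames; only the mp4 branch uses v4l2h264enc.
gst-launch-1.0 -e v4l2src device=/dev/video3 ! videoconvert \
  ! video/x-raw,width=1920,height=1080 ! tee name=t \
  t. ! queue ! v4l2h264enc ! h264parse ! mp4mux ! filesink location=video.mp4 \
  t. ! queue ! multifilesink location=frame%05d.raw

# Tee AFTER the parser: both branches receive hardware-encoded H264, so
# the multifilesink chunks are compressed rather than raw frames.
gst-launch-1.0 -e v4l2src device=/dev/video3 ! videoconvert \
  ! video/x-raw,width=1920,height=1080 \
  ! v4l2h264enc ! h264parse ! tee name=t \
  t. ! queue ! mp4mux ! filesink location=video.mp4 \
  t. ! queue ! multifilesink location=chunk%05d.h264
```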


Regards,
Rand

From: gstreamer-devel [mailto:gstreamer-devel-bounces at lists.freedesktop.org] On Behalf Of simon.zz at yahoo.com
Sent: Wednesday, April 25, 2018 5:11 PM
To: Discussion of the Development of and With GStreamer <gstreamer-devel at lists.freedesktop.org>
Subject: Re: RE: RE: How does GStreamer interface with H264 hardware encoders and create videos?

Hello Rand,

Yes I recompiled the libs as 96boards suggests.
The pipeline I use is:

gst-launch-1.0 -v -e v4l2src device=/dev/video3 ! videoconvert ! video/x-raw,width=1920,height=1080 ! v4l2h264enc extra-controls="controls,h264_profile=4,video_bitrate=2000000;" ! h264parse ! mp4mux ! filesink location=video.mp4

and this pipeline creates quite a good video.
In this case the output is:

However, if I use the following pipeline to extract raw frames while recording the video

sudo gst-launch-1.0 -e v4l2src device=/dev/video3 ! videoconvert ! video/x-raw,width=1920,height=1080 ! tee name=t ! queue ! v4l2h264enc extra-controls="controls,h264_profile=4,video_bitrate=2000000;" ! h264parse ! mp4mux ! filesink location=video.mp4 t. ! queue ! multifilesink location=file%1d.raw

the resulting video is quite poor: not all frames are recorded, and the playback stays locked on a single frame for a few seconds. Clearly the frame-extraction task interferes with recording the video. If GStreamer is really using the h264 hardware encoder, why does it cause these problems in this case?
I doubt it is, because my C/C++ code doesn't generate a bad video like this.

>> It is then using a parser and a muxer and a filesink to create an mp4 file.
This is an interesting point. So in this case GStreamer would use an MP4 library to wrap the h264-encoded data?
This makes sense to me.
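For reference, the role of mp4mux can be seen by comparing a bare-stream pipeline with the mp4 one (a sketch based on the pipelines in this thread; the .h264 filename is an assumption):

```shell
# Bare H264 elementary stream: no container, so no timing metadata;
# players have to guess the framerate.
gst-launch-1.0 -e v4l2src device=/dev/video3 ! videoconvert \
  ! video/x-raw,width=1920,height=1080 \
  ! v4l2h264enc ! h264parse ! filesink location=video.h264

# Same encoded data wrapped in MP4: mp4mux writes the container timing
# that makes the file play back at the correct speed.
gst-launch-1.0 -e v4l2src device=/dev/video3 ! videoconvert \
  ! video/x-raw,width=1920,height=1080 \
  ! v4l2h264enc ! h264parse ! mp4mux ! filesink location=video.mp4
```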

Simon
On Wednesday, 25 April 2018 at 23:49:53 CEST, Rand Graham <rand.graham at zenith.com> wrote:



Hello,



Did you recompile according to the release notes? Are you using the pipeline shown in the release notes?



To know what gstreamer is doing, you should copy and paste the exact pipeline you are using.



The release notes show this pipeline



gst-launch-1.0 -e v4l2src device=/dev/video3 ! video/x-raw,format=NV12,width=1280,height=960 ! v4l2h264enc extra-controls="controls,h264_profile=4,video_bitrate=2000000;" ! h264parse ! mp4mux ! filesink location=video.mp4



It looks like this pipeline is using v4l2h264enc to do the h.264 encoding.



It is then using a parser and a muxer and a filesink to create an mp4 file. What this does is use an mp4 container that contains an h264 video track.



It looks like the h264 encoder takes some parameters. You may be able to get better video quality by adjusting the parameters of the h264 encoder. For example, there is typically a “high” profile setting that can be used for h264 quality. You might also try increasing the bitrate to see if that improves quality. (The height and width dimensions seem odd to me. I would expect something like 1280x720 or 1920x1080.)
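As an illustration, here is a variant of the release-notes pipeline with a higher bitrate (the control names follow the extra-controls string already shown; the chosen value is an assumption to experiment with, not a recommendation from the release notes):

```shell
# Raise the bitrate from 2 Mbit/s to 8 Mbit/s. h264_profile=4 is the
# profile menu index exposed by the board's V4L2 encoder driver.
gst-launch-1.0 -e v4l2src device=/dev/video3 \
  ! video/x-raw,format=NV12,width=1280,height=960 \
  ! v4l2h264enc extra-controls="controls,h264_profile=4,video_bitrate=8000000;" \
  ! h264parse ! mp4mux ! filesink location=video.mp4
```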



Regards,

Rand







From: gstreamer-devel [mailto:gstreamer-devel-bounces at lists.freedesktop.org] On Behalf Of simon.zz at yahoo.com
Sent: Wednesday, April 25, 2018 4:33 PM
To: Discussion of the Development of and With GStreamer <gstreamer-devel at lists.freedesktop.org>
Subject: Re: RE: How does GStreamer interface with H264 hardware encoders and create videos?



Hello Rand,



You are right. The board is a Dragonboard 410c by 96boards.



https://developer.qualcomm.com/hardware/snapdragon-410/tools



96boards in their release notes



http://releases.linaro.org/96boards/dragonboard410c/linaro/debian/latest/



write that the gstreamer pipeline uses the video encoder.

But as I said before, I noticed notable differences in the video results, which makes me doubt that gstreamer really uses the encoder.



The C/C++ code I am using is based on this one:



stanimir.varbanov/v4l2-decode.git <https://git.linaro.org/people/stanimir.varbanov/v4l2-decode.git/tree>










I basically changed Varbanov's code to capture the camera frames and feed the encoder. My code works, in the sense that I can record an h264 video (not mp4 as gstreamer produces), but I noticed the results I described in my previous mail.



So what additional processing does gstreamer apply on top of the hardware video encoding?



Regards,

Simon



On Wednesday, 25 April 2018 at 22:49:38 CEST, Rand Graham <rand.graham at zenith.com> wrote:





Hello,



It might help if you mention which embedded board you are using.



In order to use custom hardware from a vendor such as NVIDIA, you would compile the gstreamer plugins provided by the vendor and then specify them in your pipeline.



Regards,

Rand



From: gstreamer-devel [mailto:gstreamer-devel-bounces at lists.freedesktop.org] On Behalf Of simon.zz at yahoo.com
Sent: Wednesday, April 25, 2018 1:01 PM
To: gstreamer-devel at lists.freedesktop.org
Subject: How does GStreamer interface with H264 hardware encoders and create videos?



Hello,



I am using an embedded board which has a hardware H264 encoder, and I am testing video generation both with gst-launch and with C++ code written by myself.



Comparing my code's results to the gst-launch results, it is clear that gstreamer applies additional processing compared to what I get from the hardware encoder buffer.

The first obvious difference is that it generates an mp4 video, while I can only generate a bare h264 stream, since I am not using an additional MP4 muxer in my code.



For example, the image quality of the gst-launch video is quite a bit better, and it plays at the correct framerate, whereas the video I obtain is slightly "accelerated". In addition, the timestamp (minutes - seconds) is present, while in the video I obtain from my C++ code it is not.
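For what it's worth, "accelerated" playback is consistent with a bare h264 stream carrying no container timing, so players fall back to a default rate. A hedged sketch of wrapping an existing raw stream in MP4 after the fact (filenames are assumptions; whether the timing is fully recovered depends on the stream carrying VUI timing info for h264parse to pick up):

```shell
# Parse the raw H264 elementary stream and mux it into MP4; mp4mux
# writes the timing metadata players need for correct-speed playback.
gst-launch-1.0 filesrc location=capture.h264 \
  ! h264parse ! mp4mux ! filesink location=capture.mp4
```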



So I suspect that gstreamer doesn't use the hardware encoder.

How can I be sure that gstreamer uses the hardware encoder instead of an h264 software library, and how can I find out in real time which V4L2 settings gstreamer applies to the encoder?
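A few standard ways to check this (these are generic GStreamer/V4L2 tools; the device paths are assumptions, and on many boards the encoder's device node differs from the capture node):

```shell
# Confirm the element exists and comes from the V4L2 (hardware) plugin.
gst-inspect-1.0 v4l2h264enc

# Run the pipeline with verbose v4l2 debug output to see the ioctls and
# controls GStreamer applies to the encoder device.
GST_DEBUG="v4l2*:5" gst-launch-1.0 -e v4l2src device=/dev/video3 ! videoconvert \
  ! video/x-raw,width=1920,height=1080 \
  ! v4l2h264enc ! h264parse ! mp4mux ! filesink location=video.mp4

# Inspect the V4L2 controls on a device node directly (run while the
# pipeline is active to see the values gstreamer has set).
v4l2-ctl -d /dev/video3 --list-ctrls
```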



Thanks.

Regards,

Simon



