need help to understand the pipeline

Peter Maersk-Moller pmaersk at gmail.com
Fri Jul 16 10:52:38 UTC 2021


Hi niXman

You are in fact not retrieving JPEG encoded frames from your camera.
Apparently your camera also supports a raw video format.

Most likely this raw format is some sort of YUV format. You can find out by
adding '-v' to gst-launch-1.0 and looking for the format accepted by the
videoconvert sink pad.
There are also tools to list your camera's capabilities (e.g. v4l2-ctl from
the v4l-utils package), and Google is your friend. Getting the right pixel
format can be a path to better quality.
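
A quick way to check both, assuming v4l2-ctl is installed and the camera is
/dev/video0 (adjust the device path to yours):

```shell
# List every pixel format, resolution and framerate the camera offers
v4l2-ctl --device=/dev/video0 --list-formats-ext

# Run a minimal pipeline verbosely and watch the caps negotiated on each
# pad; look for the "format=..." field on the videoconvert sink pad
gst-launch-1.0 -v v4l2src device=/dev/video0 ! videoconvert ! fakesink
```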

In many cases, if a camera supports raw pixel formats, you will select a raw
format for quality. JPEG introduces compression artifacts that are mostly
unwanted. You would use JPEG in cases where the desired picture geometry and
framerate for a given raw format exceed the bandwidth of the channel over
which the computer and camera communicate (often USB, with a maximum of
480 Mbps - in reality lower).
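
If you did want the camera's JPEG output instead (for example to reach a
geometry/framerate the link cannot carry raw), a sketch could look like the
following, assuming your camera actually exposes image/jpeg caps at this
geometry:

```shell
# Request JPEG from the camera and decode it back to raw video on the host
# (sketch; the exact caps depend on what your camera offers)
gst-launch-1.0 -e \
    v4l2src device=/dev/video0 \
    ! image/jpeg,width=1280,height=800,framerate=10/1 \
    ! jpegdec \
    ! videoconvert \
    ! autovideosink
```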

So to answer your question: your pipeline (most likely) converts from one
YUV format (or not at all) to (most likely) I420 (a common consumer YUV raw
format) accepted by the encoder. Your encoder produces an H.264 video
stream, most likely using the Baseline profile, but check with the '-v'
mentioned earlier. To get your encoder to select a higher profile, such as
"High" or just "Main", you have to set format/parameters between your
encoder and your parser. If you want to encode using higher quality input,
assuming your camera supports better raw formats than I420 or NV12, and
assuming your v4l2h264enc supports better input formats than I420, you can
force a format like Y444 or Y42B (4:2:2) before the encoder. Your choice.
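
As an illustration only: forcing a 4:2:2 input format before the encoder
and requesting a profile on its output could look like the sketch below.
Whether v4l2h264enc on your board accepts Y42B input or can produce that
profile depends entirely on the hardware; many embedded encoders only take
NV12/I420, in which case this pipeline will fail to negotiate.

```shell
gst-launch-1.0 -e \
    v4l2src device=/dev/video0 \
    ! video/x-raw,width=1280,height=800,framerate=10/1 \
    ! videoconvert \
    ! video/x-raw,format=Y42B \
    ! v4l2h264enc \
    ! 'video/x-h264,profile=high-4:2:2' \
    ! h264parse \
    ! mp4mux \
    ! filesink location=/home/linaro/video.mp4
```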

So no JPEG is involved, and you're encoding (probably Baseline profile) H.264.

One tip though. A queue between the input source and videoconvert is often
a good thing. So is a queue between videoconvert and the encoder, and a
queue between the encoder and additional processing such as parsing and
file or network operations. The same goes for a queue on each input to a
muxer, if you are muxing audio and video together (here you are not).
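
Applied to your pipeline, that advice could look like this (same elements,
just with queues inserted for decoupling):

```shell
gst-launch-1.0 -e \
    v4l2src device=/dev/video0 \
    ! video/x-raw,width=1280,height=800,framerate=10/1 \
    ! queue \
    ! videoconvert \
    ! queue \
    ! v4l2h264enc \
    ! queue \
    ! h264parse \
    ! mp4mux \
    ! filesink location=/home/linaro/video.mp4
```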

Best regards
Peter Maersk-Moller




On Fri, Jul 16, 2021 at 10:43 AM niXman via gstreamer-devel <
gstreamer-devel at lists.freedesktop.org> wrote:

>
> hello!
>
> I am using a camera that is supported by the v4l2 subsystem. This camera
> uses the Omnivision OV5640 chip, which according to the documentation
> encodes frames in JPEG format.
>
> To be able to save the stream from this camera, the following pipeline is
> suggested:
>
> gst-launch-1.0 -e \
>      v4l2src device=/dev/video0 \
>      ! video/x-raw,width=1280,height=800,framerate=10/1 \
>      ! videoconvert \
>      ! v4l2h264enc \
>      ! h264parse \
>      ! mp4mux \
>      ! filesink location=/home/linaro/video.mp4
>
> my question is: since the camera gives encoded frames, what kind of
> encoding does this pipeline perform?
> I am especially interested in v4l2h264enc.
>
>
> best!
> _______________________________________________
> gstreamer-devel mailing list
> gstreamer-devel at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/gstreamer-devel
>
