need help to understand the pipeline

Peter Maersk-Moller pmaersk at gmail.com
Sat Jul 17 11:12:43 UTC 2021


See answers inline.

On Sat, Jul 17, 2021 at 12:19 PM niXman via gstreamer-devel <
gstreamer-devel at lists.freedesktop.org> wrote:

>
> Hello Maersk-Moller!
>
> On 2021-07-16 15:28, Peter Maersk-Moller via gstreamer-devel wrote:
> > Well I can't see how you full system is put together, but I can see
> > than
> > one of the link you send is using CSI used to take in a camera feed
> > somewhat raw. And the other link lists it supports the following
> > formats:
> >
> >    -
> >
> >    support for output formats: *RAW RGB, RGB565/555/444, CCIR656,
> >    YUV422/420, YCbCr422*, and compression
>
> please tell me which page?
>

Page 3: support for output formats: RAW RGB, RGB565/555/444, CCIR656,
YUV422/420, YCbCr422, and compression
Page 4: output formats: 8-/10-bit RGB RAW output



>
> and can you answer my previous question please?:
> if the camera actually sends uncompressed YUV, then each frame must be
> 5MP * 12 = 60Mbit per frame? really?
>

Where do you get this idea from? The geometry is 1280x800 and bytes per pixel
for YUYV/YUY2/Y42B is 2.

1280 x 800 x 2 = 2048000 B/frame = 16384000 b/frame, where B = bytes and b = bits.
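
The same arithmetic per plane for Y42B (planar 4:2:2: a full-resolution Y
plane plus half-width U and V planes), as a quick shell check:

echo $(( 1280*800 + 640*800 + 640*800 ))   # Y + U + V = 2048000 bytes
echo $(( 2048000 * 8 ))                    # = 16384000 bits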


>
> is there a way how i can save only one YUV frame without any processing?
> I just want to make sure.
>

One way is listed below. Other ways include using GStreamer's multifilesink,
etc. (a sketch of that follows further down).

gst-launch-1.0 -e -v \
     v4l2src device=/dev/video0 ! \
     video/x-raw,format=Y42B,width=1280,height=800,framerate=10/1 ! \
     fdsink fd=3 3>&1 1>&2 | dd of=MyFile.yuv bs=2048000 count=1 iflag=fullblock

Note that it is a raw format with no container and no metadata, just raw YUV
in the Y42B format. You can pick other formats as well. If you want a
container format like AVI or MP4, you need to use a muxer element in
GStreamer, but then you need another method to chop up the data.
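
For reference, a sketch of the multifilesink variant mentioned above,
assuming your camera accepts the same caps; num-buffers=1 grabs a single
frame and multifilesink writes each buffer to its own file:

gst-launch-1.0 -e -v \
     v4l2src device=/dev/video0 num-buffers=1 ! \
     video/x-raw,format=Y42B,width=1280,height=800,framerate=10/1 ! \
     multifilesink location=frame-%05d.yuv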


>
> > Plenty of support for raw formats. Furthermore, your output from GStreamer
> > suggests your source (v4l2src) outputs YUY2. So the v4l2src module gets
> > data and outputs raw video in the YUY2 format. This is similar in quality
> > to Y422 or Y42B. It is better than I420 as it has twice as many bytes for
> > colour information. While I420 is often used for consumer products, your
> > format is what is usually used for prosumer: a compromise between consumer
> > and professional level. This is good, so what is the problem? Why do you
> > think you have a problem?
> >
> > That said, apparently the video encoder wants NV12, comparable to I420
> > (consumer level). Videoconvert converts, and the encoder encodes the
> > result. If you want a better result, check if the encoder can support
> > higher quality inputs. But that's outside the scope of your question.
> > Fact is, you are not using JPEG/MJPEG, and that is often a good thing
> > depending on what you want. So what is the problem?
>
> long story short:
> for my drone I bought a SOM based on Snapdragon 660 with Debian.
> according to the manufacturer this SOM has an h264 hardware encoder.
> I connected and configured this camera: [1]
> everything works, the load on the CPU is ~30%.
> after that I wanted to add a thermal camera.
> I chose this camera: [2]
>
> work with the camera is done through the SDK. I wrote an encoder program
> that reads 320x240 frames in ARGB from the camera, converts them to
> YUV422 and uses libx264 for encoding.
>
> I tested this program on a rather weak Intel N3710 processor and the
> program consumed ~30%.
> but when I ran this program on the SOM I realized that the program could
> not cope even with encoding five frames per second!
>

Most ARM processors have much weaker software encoding performance than
Intel platforms.


> after that I started a long and tedious communication with the support.
> the support convinces me that the hardware encoder is available only for
> cameras supported by v4l2 subsystem.
>
> in addition, in the SOM specification I do not see any separate
> chip/module that would be a hardware encoder.
>

Are you trying to encode two separate streams, or do you want to mix two
streams into one stream and encode that?
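
If it is the latter, this is roughly what I mean by mixing - a sketch with
test sources standing in for your two cameras (geometry and positions are
made up):

gst-launch-1.0 -e compositor name=mix sink_1::xpos=1280 ! videoconvert ! \
     x264enc ! h264parse ! mp4mux ! filesink location=mixed.mp4 \
     videotestsrc num-buffers=300 ! video/x-raw,width=1280,height=800 ! mix.sink_0 \
     videotestsrc num-buffers=300 pattern=ball ! video/x-raw,width=320,height=240 ! mix.sink_1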


>
> the SOM has GPU and DSP. in theory, the most resource-demanding part of
> the encoder (working with matrices) can be implemented both on the GPU
> and on the DSP.
> but all my requests for information about the encoder and the API that I
> can use are simply ignored.
> they give the only answer: your camera is not supported by v4l2
> subsystem.
>

I don't know your system. Assuming it is Linux, how much of the kernel can
you compile/modify?
I don't fully understand your challenges as I don't know your limitations. If
you can read images from a camera into a Linux computer, you can always loop
them back through the V4L2 loopback interface, making them appear as a V4L2
device. But usually a normal USB camera is available through the V4L2
interfaces natively anyway. A rough sketch of the loopback approach follows.
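
This assumes the v4l2loopback kernel module is available and that
/dev/video10 is free (device number and caps are placeholders):

sudo modprobe v4l2loopback video_nr=10
# feed raw frames from any source (a test source here) into the loopback device
gst-launch-1.0 videotestsrc ! \
     video/x-raw,format=YUY2,width=320,height=240,framerate=25/1 ! \
     v4l2sink device=/dev/video10
# any V4L2 consumer, including a hardware-encoder pipeline, can now read /dev/video10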

>
> this is what brought me here :)
> I don't understand how the hardware encoder can only be accessed by
> devices supported by the v4l2 subsystem ?!
>

Well, if you have documentation and a working compiler, you can do anything
your brain can think of. That said, and assuming your initial pipeline works
- I don't think you answered whether it does - I am guessing you get access
to the system's hardware encoder using the v4l2h264enc module of GStreamer.
If that is true, then what they are telling you is that to use the hardware
encoder, you must use the V4L2 subsystem. And you are. You are using the
GStreamer V4L2 encoder module called v4l2h264enc. It does not mean that you
have to use a camera that uses V4L2, although in many cases you will. Do you
understand the difference? You apparently have to use the V4L2 subsystem to
access the hardware encoder, send raw video data to it and retrieve encoded
video from it.
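
As a sanity check, here is a sketch of feeding the hardware encoder from a
non-camera source, assuming v4l2h264enc really is the encoder element exposed
on your SOM (element name, caps and the need for h264parse may differ on your
platform):

gst-launch-1.0 -e videotestsrc num-buffers=300 ! \
     video/x-raw,format=NV12,width=320,height=240,framerate=25/1 ! \
     v4l2h264enc ! h264parse ! mp4mux ! filesink location=hwenc-test.mp4

If that produces a playable file, the encoder is reachable regardless of
where the raw frames come from.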


> and I don't understand why SOM can cope with encoding 5 and 8 megapixel
> frames but can't cope with 320x240 encoding?
>

It can .... well, there may be some limitations on the choice of geometries,
but aside from that, it can.

What are you trying that works (pipeline examples) and what doesn't work
(pipeline examples)?



>
> and I thought that other cameras just send compressed frames (JPEG/MJPG).
> This is what I want to understand and make sure of.
>

Many cameras can, including one of the cameras you listed. But in the case
of your original pipeline, you are not using JPEG, which is good.
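
If you ever do want the MJPEG path instead, a sketch of what that pipeline
could look like (device and caps are assumptions; the camera must advertise
image/jpeg):

gst-launch-1.0 -e -v \
     v4l2src device=/dev/video0 ! \
     image/jpeg,width=1280,height=720,framerate=30/1 ! \
     jpegdec ! videoconvert ! autovideosink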

P

>
>
> thank you!
>
> best!
>
>
> [1]
>
> https://www.arducam.com/product/b0196arducam-8mp-1080p-usb-camera-module-1-4-cmos-imx219-mini-uvc-usb2-0-webcam-board-with-1-64ft-0-5m-usb-cable-for-windows-linux-android-and-mac-os/
>
> [2]
> https://www.digikey.com/en/products/detail/seek-thermal/S314SPX/13573823
>

