need help to understand the pipeline

i.nixman at autistici.org
Sat Jul 17 10:19:31 UTC 2021


Hello Maersk-Moller!

On 2021-07-16 15:28, Peter Maersk-Moller via gstreamer-devel wrote:
> Well, I can't see how your full system is put together, but I can see
> that one of the links you sent uses CSI to take in a more or less raw
> camera feed. And the other link lists that it supports the following
> formats:
> 
>    - support for output formats: RAW RGB, RGB565/555/444, CCIR656,
>      YUV422/420, YCbCr422, and compression

Please tell me which page you are referring to?

And can you please answer my previous question: if the camera actually 
sends uncompressed YUV, then each frame must be 5 MP * 12 bits/pixel = 
60 Mbit (7.5 MB) per frame (that is for 4:2:0; for the 4:2:2 YUY2 that 
v4l2src reports it would even be 5 MP * 16 = 80 Mbit). Really?

Is there a way I can save just one YUV frame, without any processing? 
I just want to make sure.
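
Something like this, perhaps? (A sketch on my side: I'm assuming the 
camera is /dev/video0 and delivers YUY2 at 2592x1944.)

   gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=1 \
       ! video/x-raw,format=YUY2,width=2592,height=1944 \
       ! filesink location=frame.yuy2

If the file comes out at width * height * 2 bytes (~10 MB for 5 MP, 
since YUY2 is 16 bits per pixel), the camera really delivers 
uncompressed video.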

> Plenty of support for raw formats. Furthermore, your output from
> GStreamer suggests your source (v4l2src) outputs YUY2. So the v4l2src
> module gets data and outputs raw video in the format YUY2. This is
> similar in quality to Y422 or Y42B. This is better than I420 as it
> has twice as many bytes for colour information. While I420 is often
> used for consumer products, your format is what is usually used for
> prosumer, a compromise between consumer and professional level. This
> is good, so what is the problem? Why do you think you have a problem?
> 
> That said, apparently the video encoder wants NV12, comparable to
> I420 (consumer level). Videoconvert converts, and the encoder encodes
> the result. If you want better results, check if the encoder can
> support higher-quality inputs. But that's outside the scope of your
> question. Fact is, you are not using JPEG/MJPEG, and that is often a
> good thing depending on what you want. So what is the problem?

Long story short:
For my drone I bought a SOM based on the Snapdragon 660, running 
Debian. According to the manufacturer, this SOM has an H.264 hardware 
encoder. I connected and configured this camera: [1]
Everything works, and the CPU load is ~30%.
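
For reference, the pipeline is roughly of the shape Peter described. 
The encoder element name here is my guess at how the hardware encoder 
shows up, so treat this as a sketch, not my exact command:

   gst-launch-1.0 v4l2src device=/dev/video0 \
       ! video/x-raw,format=YUY2,width=1920,height=1080 \
       ! videoconvert ! video/x-raw,format=NV12 \
       ! v4l2h264enc ! h264parse \
       ! filesink location=cam.h264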
After that I wanted to add a thermal camera, and I chose this one: [2]

The thermal camera is accessed through its SDK. I wrote an encoder 
program that reads 320x240 ARGB frames from the camera, converts them 
to YUV422, and encodes them with libx264.

I tested this program on a rather weak Intel N3710 processor, where it 
consumed ~30% CPU. But when I ran it on the SOM, I realized it could 
not even cope with encoding five frames per second!
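
To compare software-encoding speed independently of my program and of 
the thermal SDK, something like this can be timed on both machines 
(videotestsrc stands in for the camera frames; a sketch, not my actual 
program):

   gst-launch-1.0 videotestsrc num-buffers=300 \
       ! video/x-raw,format=ARGB,width=320,height=240,framerate=30/1 \
       ! videoconvert ! video/x-raw,format=Y42B \
       ! x264enc tune=zerolatency ! fakesink sync=false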

After that, a long and tedious exchange with the manufacturer's 
support began. Support insists that the hardware encoder is available 
only for cameras supported by the v4l2 subsystem.

In addition, the SOM specification does not list any separate 
chip/module that would be the hardware encoder.

The SOM has a GPU and a DSP. In theory, the most resource-demanding 
part of an encoder (the matrix arithmetic) could be implemented on 
either the GPU or the DSP. But all my requests for information about 
the encoder and the API I could use are simply ignored. The only 
answer they give is: your camera is not supported by the v4l2 
subsystem.

This is what brought me here :)
I don't understand how a hardware encoder can be accessible only to 
devices supported by the v4l2 subsystem?!
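
My (possibly wrong) understanding is that on Linux such encoders are 
usually exposed as V4L2 memory-to-memory devices, i.e. a /dev/videoN 
node that accepts raw frames from anywhere, not only from a camera. If 
that is the case here, something like this should encode frames that 
never came from a v4l2 camera at all (the element name is again my 
guess):

   gst-launch-1.0 videotestsrc num-buffers=100 \
       ! video/x-raw,format=NV12,width=320,height=240,framerate=30/1 \
       ! v4l2h264enc ! h264parse \
       ! filesink location=test.h264

And from my own program I could then push the thermal frames into the 
same kind of pipeline through an appsrc element.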

And I don't understand why the SOM can cope with encoding 5- and 
8-megapixel frames but cannot cope with 320x240?

And I started to suspect that the other cameras simply send compressed 
frames (JPEG/MJPG). This is what I want to understand and verify.
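
One way to check, I think, is to list what the UVC camera actually 
offers (assuming v4l-utils is installed and the camera is /dev/video0):

   v4l2-ctl --device=/dev/video0 --list-formats-ext

If the list shows MJPG alongside YUYV, the camera can send compressed 
frames, and the low CPU load may come from that rather than from raw 
video.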


Thank you!

Best!


[1] 
https://www.arducam.com/product/b0196arducam-8mp-1080p-usb-camera-module-1-4-cmos-imx219-mini-uvc-usb2-0-webcam-board-with-1-64ft-0-5m-usb-cable-for-windows-linux-android-and-mac-os/

[2] 
https://www.digikey.com/en/products/detail/seek-thermal/S314SPX/13573823

