Compositor sink stream synchronisation

Terry Barnaby terry1 at
Thu Nov 17 15:03:34 UTC 2022

Hi, anyone who knows the details of how GStreamer works,

I am digging down into the GStreamer code to try to understand what is 
going on with our hardware compositor issue where, after a pause/unpause, 
the imxcompositor_g2d ignores frames from the main camera stream and just 
outputs an appsrc's non-live stream.

From what I can see, the NXP imxcompositor_g2d uses 
gst_aggregator_aggregate_func() in gstaggregator.c to get the sink input 
buffers synchronised for processing.

When I see our issue (the compositor's output has no frames from the main 
live camera feed, although it starts stuttering and eventually resumes 
after a minute or so), the gst_aggregator_pad_skip_buffers() function 
looks like it is dropping the camera frame buffers.

In gst_aggregator_check_pads_ready() a comment states: "In live mode, 
having a single pad with buffers is enough to generate a start time from 
it. In non-live mode all pads need to have a buffer". So it appears 
that, in my case, the output start time can be set from the non-live 
source's frame buffers.

In gst_aggregator_wait_and_check() I see it gets the upstream latency 
and presumably uses this somehow in setting the output buffer's time 
window. In my system this latency value is 33.33 ms, i.e. one camera 
frame at 30 fps, even though there is a software VideoFilter 
(beamdeinterlace) following the v4l2src.
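For reference, the latency value the aggregator ends up with can be checked from application code with a standard latency query. This is a small sketch; `pipeline` is assumed to be a pointer to your running pipeline element:

```c
#include <gst/gst.h>

/* Query the overall pipeline latency, roughly as the aggregator would
 * see it after latency has been distributed. "pipeline" is assumed to
 * be a GstElement * for a pipeline in the PLAYING state. */
static void
print_pipeline_latency (GstElement *pipeline)
{
  GstQuery *query = gst_query_new_latency ();

  if (gst_element_query (pipeline, query)) {
    gboolean live;
    GstClockTime min_latency, max_latency;

    gst_query_parse_latency (query, &live, &min_latency, &max_latency);
    g_print ("live=%d min=%" GST_TIME_FORMAT " max=%" GST_TIME_FORMAT "\n",
        live, GST_TIME_ARGS (min_latency), GST_TIME_ARGS (max_latency));
  }
  gst_query_unref (query);
}
```

This should report the 33.33 ms minimum latency described above if only the v4l2src is contributing latency.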

My feeling is that, with my beamdeinterlace element in line with the 
camera feed (which adds a small processing latency), 
gst_aggregator_aggregate_func() can get into a timing state, between the 
live camera frame buffers and the non-live appsrc buffers, where the 
compositor's output buffers end up in sync with the appsrc buffers; with 
a very small latency window of one live frame, it then drops the main 
camera stream's frame buffers. Certainly, if I double the latency value 
in gst_aggregator_wait_and_check(), I don't see the issue.

1. Does the above sound feasible?

2. How should this issue be handled? For example, should my 
beamdeinterlace element report a minimum latency of perhaps 1/2 or 1 
frame to extend the latency, so that the output frame window is larger, 
and if so, how do I do that?

3. In fact, should all VideoFilter software processing elements report 
some latency based on their expected processing time, perhaps one frame 
buffer's duration, by default?

4. Should the gst_aggregator_check_pads_ready() function really allow 
any sink stream's buffer to set the start time of the output buffer when 
one stream is live and the other is non-live?

5. When gst_aggregator_pad_skip_buffers() drops a frame buffer, it does 
not post a QoS dropped-buffer message. Shouldn't it do this?
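For question 5, posting such a message from an element would look roughly like the sketch below. To be clear, this is not what gstaggregator.c currently does; it is only an illustration of the standard QoS message API, and the processed/dropped counts are placeholder values:

```c
#include <gst/gst.h>

/* Sketch: post a QoS message when a buffer is dropped. The element
 * would normally track running processed/dropped counts itself;
 * the 0/1 stats below are illustrative placeholders. */
static void
post_dropped_buffer_qos (GstElement *element, GstBuffer *buffer,
    const GstSegment *segment)
{
  GstClockTime pts = GST_BUFFER_PTS (buffer);
  GstMessage *msg;

  msg = gst_message_new_qos (GST_OBJECT (element),
      TRUE /* live */,
      gst_segment_to_running_time (segment, GST_FORMAT_TIME, pts),
      gst_segment_to_stream_time (segment, GST_FORMAT_TIME, pts),
      pts, GST_BUFFER_DURATION (buffer));
  gst_message_set_qos_stats (msg, GST_FORMAT_BUFFERS, 0, 1);
  gst_element_post_message (element, msg);
}
```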

6. Also, it looks like v4l2src only uses two frame buffers by default, 
which seems much too small to me to prevent frame loss. Is there any 
easy way to increase this a touch (ideally not by changing the source 
code defines)?
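On question 6, as far as I understand it, v4l2src sizes its buffer pool from the downstream ALLOCATION query, so a downstream element such as the deinterlacer may be able to ask for more buffers by overriding propose_allocation and raising the minimum on the proposed pool. A hedged sketch (the count of 4 is illustrative, and I have not verified that v4l2src honours it in all io-modes):

```c
#include <gst/gst.h>
#include <gst/base/gstbasetransform.h>

/* Hooked up in class_init with:
 *   base_transform_class->propose_allocation =
 *       my_deinterlace_propose_allocation;
 * "parent_class" is the usual G_DEFINE_TYPE parent-class pointer. */
static gboolean
my_deinterlace_propose_allocation (GstBaseTransform *trans,
    GstQuery *decide_query, GstQuery *query)
{
  /* Let the base class (and anything downstream) fill the query first. */
  if (!GST_BASE_TRANSFORM_CLASS (parent_class)->propose_allocation (trans,
          decide_query, query))
    return FALSE;

  /* Raise the minimum buffer count on the first proposed pool so that
   * upstream (v4l2src) allocates more than its default. */
  if (gst_query_get_n_allocation_pools (query) > 0) {
    GstBufferPool *pool = NULL;
    guint size, min, max;

    gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min, &max);
    if (min < 4)
      min = 4;
    gst_query_set_nth_allocation_pool (query, 0, pool, size, min, max);
    if (pool)
      gst_object_unref (pool);
  }
  return TRUE;
}
```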

On 16/11/2022 15:45, Terry Barnaby via gstreamer-devel wrote:
> We are having a problem with NXP hardware-based compositors where 
> occasionally (after a pause/unpause) the pipeline gets into a state 
> where the output video buffers only contain frames from one of the 
> compositor's input sinks. Digging into the code, it looks like the 
> compositor (which uses GstVideoAggregator) is dropping buffers from 
> the main live camera stream because their timestamps are before the 
> compositor's current output buffer timestamps. So the compositor 
> just reuses the last camera buffer it had, often from tens of seconds ago.
> We have two video frame sources: a v4l2src, through one of our own 
> simple video deinterlacer elements, to a compositor, and an overlay 
> stream coming from an appsrc. The CPU is only lightly used.
> We have seen a harder version of this issue with a different NXP 
> hardware-based compositor (no frames at all from the camera source) and 
> some interesting frame drops with the standard software compositor.
> Our problem may be due to our deinterlacer element somehow (latency?), 
> but it is pretty simple: just a VideoFilter (GstVideoFilterClass) 
> implementing a transform_frame function.
> My question is: how, in general, is GStreamer compositor frame 
> synchronisation supposed to work? I can't find any info on this.
> Our v4l2src is a live video source at 30 fps and our appsrc is a 
> non-live source at 5 fps (although I have tried 30 fps). I would have 
> assumed that the compositor's output frames should be synchronised 
> predominantly to the camera's live stream, but this does not seem to be 
> the case. It does output 30 fps, but it is obviously not synchronised 
> in time to the camera's buffer stream.
> Terry
