Pipeline Optimization

Nicolas Dufresne nicolas at ndufresne.ca
Tue Apr 19 14:45:14 UTC 2022

On Tuesday, 19 April 2022 at 01:20 -0500, Matt Clark wrote:
> Thanks for the tip! I was wondering about those, but like I said, bit of a
> newbie. Here is my new graph:
> debug_session(sans extra converts).png
> Seems to be working still which is always a good thing! 
> I'm all for more CPU optimizations (it's still spinning up almost 40 threads,

I count 9 streaming threads (threads induced by the pipeline design):

- 3 threads for the 3 appsrc
- 3 leaky queues
- 1 compositor
- 1 queue before hlssink (misplaced, by the way; it should sit right after the tee)
- 1 queue inside hlssink
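
For the misplaced queue, the fix is to decouple the HLS branch right where it
splits off. A minimal sketch (the tee and element names are assumed from the
graph, not copied from your actual launch line):

    # Hypothetical fragment: the queue isolating the HLS branch should sit
    # directly after the tee, so the branch is decoupled at the split point.
    ... ! tee name=split \
        split. ! queue ! hlssink ...

The queue right after the tee is what gives the HLS branch its own streaming
thread; placing it further down only delays the decoupling point.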

GIO will of course add a couple more threads, and some stalled threads will
appear since all of this uses thread pools. But threads that are never woken up
are not a problem; each one only costs a bit of RAM (~2M, depending on the OS).
Overall, the thread situation does not seem dramatic. If the compositor could
be leaky, that would save you 3 threads.

Memory-wise you can certainly do better. All the queues can be configured with
a smaller maximum size; most of them are left at the defaults from what I see.
You can also work on your encoder configuration. At the moment, it will gather
around 32 frames for observation and compression optimization. This likely
gives great quality, but might be overkill. Be aware that appsrc also has an
internal queue, whose capacity can be configured. Configuring the queue
capacities greatly improves memory usage.
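
As a rough illustration (the exact elements and numbers depend on your
pipeline; x264enc is assumed as the encoder here, and the values are starting
points rather than tuned ones), the relevant properties look like this:

    # Hypothetical settings -- adjust to your latency/quality needs.
    # Cap each queue at a few buffers instead of the defaults:
    ... ! queue max-size-buffers=3 max-size-bytes=0 max-size-time=0 ! ...

    # Bound the appsrc internal queue:
    appsrc max-bytes=2000000 ...

    # Reduce the encoder's frame look-ahead (assuming x264enc):
    ... ! x264enc rc-lookahead=10 ...

Setting two of the three queue limits to 0 disables those limits, so only the
buffer count bounds the queue.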

> not sure if that's a lot or normal for this, honestly), but I would also love
> some memory optimizations as well! After those changes each stream is taking
> up about 850M of RAM while running. Again this may be normal for the task, but
> a) seems like a lot to me and b) I have no frame of reference.
> Thanks again, Nicolas!
> On Mon, Apr 18, 2022 at 8:14 AM Nicolas Dufresne <nicolas at ndufresne.ca> wrote:
> > On Sunday, 17 April 2022 at 03:29 -0500, Matt Clark via gstreamer-devel
> > wrote:
> > > I've gotten my project mostly to the stable point of working how I expect
> > > it, however I can't help but feel that it's nowhere near optimal. I have
> > > made it work and now I wish to make it right. Any insight, be it pointers
> > > or instructions, would be appreciated, as this is my first
> > > service/application using gstreamer and I'm still very green with it. 
> > 
> > Just noticed one low-hanging fruit from the graph. You have 3 color
> > conversion points: one before imagefreeze, one after, and one inside the
> > compositor. The output of the compositor is I420, so you can greatly
> > optimize your pipeline by adding a caps filter to force the conversion
> > before the image freeze. This way, you convert the input to I420 only
> > once. Other similar optimizations related to the usage of imagefreeze may
> > apply.
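> > 
> > A sketch of what I mean (caps and surrounding elements assumed; adapt to
> > your graph):
> > 
> >     ... ! videoconvert ! video/x-raw,format=I420 ! imagefreeze ! ...
> > 
> > The caps filter pins the format to I420 upstream of imagefreeze, so the
> > later conversion elements become pass-through.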
> > 
> > > The basic explanation of the system is that it queries a variable number
> > > of web endpoints for dynamically created pngs and then composes those
> > > together into an HLS stream that's then used by a single client. 
> > > Here is a PNG of the pipeline graph (I'll also attach the raw SVG as
> > > well in case you want to dig into it):
> > > debug_session.png
> > > 
> > > TL;DR: Above is my pipeline, please help me make it the best it can be!
> > > Thanks to any and all in advance!
> > > -Matt
> > 

More information about the gstreamer-devel mailing list