Using more threads to increase bandwidth
Chris Tapp
opensource at keylevel.com
Tue Nov 19 00:08:42 PST 2013
On 19 Nov 2013, at 06:28, Edward Hervey wrote:
> On Mon, 2013-11-18 at 21:47 +0000, Chris Tapp wrote:
>> I've got an app using GStreamer 0.10 which takes arbitrary video streams and makes images available for rendering in GLES. I'm using two pipelines, both created using parse_launch.
>>
>> 1) playbin2 with video going to a fakesink;
>> 2) appsrc ! ffmpegcolorspace ! videoscale ! fakesink.
>>
>> The first pipeline plays the video in real time so that audio playback works.
>>
>> The second is then used to convert the latest frame in the fakesink to the video format/size requested by GLES.
>>
>> This is working as I want, but it gives lower frame rates than I would expect at times. I'm running on a dual-core Atom with hyper-threading, so I should be able to have four threads running at a time.
>>
>> The pipeline created by playbin2 will have queues in it and so will use multiple threads. I tried adding some to the second pipeline:
>>
>> appsrc ! queue2 ! ffmpegcolorspace ! queue2 ! videoscale ! fakesink
>
> Use queue (and not queue2, which is meant for buffering rather than
> just thread decoupling).
Thanks, I spotted that one shortly after posting ;-)
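For reference, a minimal sketch of how that second pipeline might be built with plain queue elements, assuming the 0.10 C API (the element names are illustrative, not from my actual code):

#include <gst/gst.h>

/* Build the conversion pipeline with plain queue elements so that the
 * colorspace conversion and the scaling run in separate streaming
 * threads. Error handling is abbreviated. */
static GstElement *
build_convert_pipeline (void)
{
  GError *err = NULL;
  GstElement *pipe = gst_parse_launch (
      "appsrc name=src ! queue ! ffmpegcolorspace ! queue ! "
      "videoscale ! fakesink name=sink",
      &err);

  if (pipe == NULL) {
    g_printerr ("Failed to build pipeline: %s\n", err->message);
    g_clear_error (&err);
  }
  return pipe;
}

Each queue starts a new streaming thread for everything downstream of it, which is what gives the three-way split described below.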
>> but this hasn't pushed the peak CPU use up from its 'normal' level of
>> about 265%. I was hoping that this would help as I thought keeping 6
>> buffers in the pipeline would mean that the color space conversion and
>> scaling would then run in different threads.
>
> Unless the elements have multi-buffer threading (like some
> libav/ffmpeg decoders), the only (and normal) thing you have done above
> is split your pipeline into 3 threads:
> appsrc ! queue
> queue ! ffmpegcolorspace ! queue
> queue ! videoscale ! fakesink
>
> The 265% (i.e. close to three full cores) you are seeing is therefore
> normal.
OK. I was expecting a bit more, as there's non-GStreamer stuff going on as well (see below)...
>> What should I be doing to maximize CPU usage?
>
> With the current elements ... not much from the GStreamer side. You
> could create 2 such pipelines in your application and load-balance your
> input data across them (i.e. 50% to one pipeline, 50% to another).
>
> One thing which one *could* investigate (for 1.x) is to add support in
> basetransform for parallel processing. For subclasses which are not
> time-dependent (i.e. they don't depend on previous/future data for the
> processing, nor depend on controlled properties) we could have parallel
> calls to ::transform()/::transform_ip().
> It might be tricky to automatically figure out the number of threads to
> start, and it would introduce latency (so you wouldn't want to use it
> with live pipelines), but it would be an interesting feature.
>
> Finally, note that in 1.x (and I think 0.10 too, but I could be wrong), you
> could do the scaling, and maybe even the colorspace conversion, in GLES
> itself :)
Thanks. That's on the list for the longer-term. The platform I'm working on is only just moving to 1.x and I'll bring this in when I switch.
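To make that concrete, the conversion step could be a GLES2 fragment shader along these lines - a sketch only, using the usual BT.601 coefficients, with made-up uniform/varying names and assuming the three I420 planes are uploaded as separate single-channel textures:

static const char *yuv_to_rgb_frag =
    "precision mediump float;\n"
    "varying vec2 v_texcoord;\n"
    "uniform sampler2D y_tex, u_tex, v_tex;\n"
    "void main (void) {\n"
    "  /* Subtract the video-range offsets, then apply BT.601 */\n"
    "  float y = texture2D (y_tex, v_texcoord).r - 0.0625;\n"
    "  float u = texture2D (u_tex, v_texcoord).r - 0.5;\n"
    "  float v = texture2D (v_tex, v_texcoord).r - 0.5;\n"
    "  gl_FragColor = vec4 (1.164 * y + 1.596 * v,\n"
    "                       1.164 * y - 0.813 * v - 0.391 * u,\n"
    "                       1.164 * y + 2.018 * u,\n"
    "                       1.0);\n"
    "}\n";

The scaling would then come essentially for free from linear texture filtering, so both videoscale and ffmpegcolorspace drop out of the CPU path.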
I've actually now got it working as I expected. I'm not sure exactly what was going on, but removing a 'queue' element (which I didn't mention before) got everything running as expected with the same level of CPU usage (so a double win!). Basically, the full video-sink for playbin2 was:
video-sink="queue leaky=0 max-size-buffers=1 ! fakesink sync=true qos=true"
Removing the 'queue' in the above (leaving just video-sink="fakesink sync=true qos=true") did the job. As I said, I'm not sure why it was causing the problem - my only guess is that too many buffers were getting dropped along the way, which would make my GLES rendering code 'hold' the last image it got.
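For completeness, here's a rough sketch of Edward's load-balancing suggestion from above, in case I need it later (assuming two identical appsrc-based conversion pipelines; all names are illustrative):

#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

/* Alternate incoming frames between two identical conversion pipelines
 * so the colorspace/scale work can spread across more cores. Frames can
 * then complete out of order, so they'd need re-sequencing on the way
 * out. */
static void
push_frame (GstAppSrc *srcs[2], GstBuffer *frame)
{
  static guint next = 0;

  /* gst_app_src_push_buffer() takes ownership of the buffer. */
  gst_app_src_push_buffer (srcs[next], frame);
  next = (next + 1) % 2;
}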
Chris Tapp
opensource at keylevel.com
www.keylevel.com