[gst-devel] 0.9 proposals

Wim Taymans wim at fluendo.com
Thu Dec 2 08:46:02 CET 2004


On Thu, 2004-12-02 at 15:39 +0100, Martin Soto wrote:
Hi Martin,

As I said in a previous mail, ideal scheduling would use both user-space and
kernel-space threads. I still have a bad taste in my mouth from the
previous experience with cothreads, but assuming that your statement is
correct, I would map your intermediate proposal onto my latest proposal:

Cothreads are only useful when you need to schedule push-to-pull
connections; all other connections do not sit on a thread or cothread
boundary. In my proposal, the pull-based pad would create a kernel
thread (GstTask) to pull data from the other thread. Thinking about this
some more, it could just as easily create a new cothread to handle the
data; cothread switching would then happen on the push/pull boundaries
of the pad.

It would also remove my problem d) from a previous mail, and address your
comment on why that should be :)

I like it a lot... going to experiment with this.

> > f) Data passing is happening freely without interaction from a central
> > entity. Lock contention goes through the kernel.
>
> This is not really accurate. There is a central entity, namely, the
> kernel. That the kernel offers such a good abstraction that you have the
> impression that there's no kernel, shouldn't deceive you. Every time two
> threads synchronize you have to switch back to the kernel, which in turn
> gives control to the next thread. Those are operations your processor
> will be performing anyway. And the fact that preemption points are all
> around is not necessarily an advantage. It implies you have to do
> locking, and locking isn't always cheap.
>
What I meant is that data passing does not require any code other than
calling a chain function with the buffer, as opposed to a user-space
entity mediating control. It is also important to note that locking is
relatively cheap when there is no contention, and, as I stated before,
contention only happens when stopping/seeking the pipeline; normal
streaming has no contention at all. The real question is whether grabbing
an uncontended lock is more expensive than running the code of a
user-space scheduler... I did a small micro-benchmark on fakesrc !
identity ! fakesink where a lock is taken in each element; it ran 10
times faster than opt (which has no locks, but needs to dereference a few
objects to be able to start the actual scheduling of one iteration).


Wim

-- 
Wim Taymans <wim at fluendo.com>
