[gst-devel] 0.9 proposals
Martin Soto
soto at informatik.uni-kl.de
Thu Dec 2 14:50:03 CET 2004
Hi Wim,
On Thu, 2004-12-02 at 17:39 +0100, Wim Taymans wrote:
> As I said in a previous mail, ideal scheduling would use both user- and
> kernel-space threads. I still have a bad taste in my mouth from the
> previous experience with cothreads, but assuming that your statement is
> correct, I would map your intermediate proposal to my latest proposal:
>
> Cothreads are only useful when you need to schedule push-to-pull
> connections; all other connections are not on a thread or cothread
> boundary. In my proposal, the pull-based pad would create a kernel
> thread (GstTask) to pull data from the other thread. Thinking about this
> some more, it could just as easily create a new cothread to handle the
> data, where cothread switching would then happen on the push/pull
> boundaries of the pad.
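To make the quoted proposal concrete, here is a minimal sketch of a pull-based pad served by a dedicated kernel thread that drains buffers from the push side through a small bounded queue. All names here (pad_queue, pad_push, pad_pull) are illustrative, not actual GStreamer 0.9 API, and the "buffers" are plain ints rather than GstBuffer pointers:

```c
#include <pthread.h>

#define QUEUE_CAP 8

/* Bounded queue sitting on the push/pull boundary of a pad. */
typedef struct {
    int buf[QUEUE_CAP];          /* toy "buffers"; real code would hold GstBuffer* */
    int head, len;
    pthread_mutex_t lock;
    pthread_cond_t not_empty, not_full;
} pad_queue;

void pad_queue_init(pad_queue *q)
{
    q->head = q->len = 0;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->not_empty, NULL);
    pthread_cond_init(&q->not_full, NULL);
}

/* Called from the upstream element's chain function (push side). */
void pad_push(pad_queue *q, int buffer)
{
    pthread_mutex_lock(&q->lock);
    while (q->len == QUEUE_CAP)
        pthread_cond_wait(&q->not_full, &q->lock);
    q->buf[(q->head + q->len) % QUEUE_CAP] = buffer;
    q->len++;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

/* Called from the downstream element's loop, running in its own
 * kernel thread (the "GstTask" of the proposal). */
int pad_pull(pad_queue *q)
{
    int buffer;
    pthread_mutex_lock(&q->lock);
    while (q->len == 0)
        pthread_cond_wait(&q->not_empty, &q->lock);
    buffer = q->buf[q->head];
    q->head = (q->head + 1) % QUEUE_CAP;
    q->len--;
    pthread_cond_signal(&q->not_full);
    pthread_mutex_unlock(&q->lock);
    return buffer;
}
```

As the mail notes, the same boundary could equally be served by a cothread switch instead of a kernel thread; only the queue's blocking primitives would change.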
This is exactly how the fair scheduler does it: it switches cothreads
when passing buffers from one element to the next. The actual switching
is implemented in gst_fair_scheduler_chain_handler and
gst_fair_scheduler_get_handler, if you're curious.
By the way, the high-level cothread API implemented in
faircothreads.[ch] may be a starting point for GstTask. Of course, we
would need to abstract the operations so that they can be implemented
as either kernel threads or user-space cothreads.
> It would also remove my problem d) from a previous mail and your
> comment on why that should be :)
>
> I like it a lot... going to experiment with this.
I'm attaching my Pth-based implementation of GStreamer's cothreads API.
I'd be glad if someone could integrate it properly. As I said, I haven't
done it myself because I don't know how to change the Automake setup
correctly.
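For readers without the attachment: the kind of primitive such a cothreads API builds on is a user-space context switch. The following is not the attached Pth-based file, just a minimal illustration using ucontext(3), with hypothetical names (co_body, run_cothread):

```c
#include <ucontext.h>

static ucontext_t main_ctx, co_ctx;
static int co_result;

/* Body of the cothread: do some work, then yield back to the caller.
 * No kernel involvement -- the switch happens entirely in user space. */
static void co_body(void)
{
    co_result = 42;
    swapcontext(&co_ctx, &main_ctx);
}

int run_cothread(void)
{
    static char stack[64 * 1024];      /* the cothread's private stack */

    getcontext(&co_ctx);
    co_ctx.uc_stack.ss_sp = stack;
    co_ctx.uc_stack.ss_size = sizeof stack;
    co_ctx.uc_link = &main_ctx;
    makecontext(&co_ctx, co_body, 0);

    swapcontext(&main_ctx, &co_ctx);   /* run the cothread until it yields */
    return co_result;
}
```

Pth wraps this kind of mechanism (plus portability fallbacks) behind a scheduler, which is what makes it a convenient backend for a cothreads API.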
> > This is not really accurate. There is a central entity, namely, the
> > kernel.
[snip]
> What I meant is that data passing does not require any other code than
> calling a chain function with the buffer as opposed to a user space
> entity that mediates control.
I see. Still, a well-written cothread-based scheduler can also avoid
context switches in this case (not that fair is currently well written
in this respect :-)
> It is also important to note that locking
> is relatively cheap when there is no contention and, as I stated before,
> the contention happens when stopping/seeking the pipeline; the normal
> streaming behaviour has no contention at all. The real question is whether
> grabbing an uncontended lock is more expensive than running code for a
> user-space scheduler... I did a small micro-benchmark on fakesrc !
> identity ! fakesink where a lock is taken in each element; it ran 10
> times faster than opt (which has no locks but needs to deref a few
> objects to be able to start the actual scheduling of one iteration).
I sent a Pth vs. gthread comparison in another mail. Locking and
switching threads does indeed have a cost, it seems. Whether that cost
matters relative to the whole cost of executing a pipeline, though, I
don't know.
Cheers,
M. S.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: pth-cothreads.c
Type: text/x-csrc
Size: 2513 bytes
Desc: not available
URL: <http://lists.freedesktop.org/archives/gstreamer-devel/attachments/20041202/50048ad2/attachment.c>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: pth-cothreads.h
Type: text/x-chdr
Size: 3175 bytes
Desc: not available
URL: <http://lists.freedesktop.org/archives/gstreamer-devel/attachments/20041202/50048ad2/attachment.h>