[gst-devel] 0.9 proposals
Thomas Vander Stichele
thomas at apestaart.org
Wed Dec 1 09:36:08 CET 2004
Hi,
> > How do you make a non-blocking API out of a blocking API (say, a lib
> > you're wrapping in an element) without using threads ?
> >
> I'd of course allow elements to spawn their own threads and care for them
> if they need to.
Ok, thanks for the clarification.
> In reality I'd just consider libs that need their own
> threads broken and work on a replacement.
What do you mean by "need their own threads" ? Is your plan also to
rewrite every element that wraps a blocking library by using two threads
and having the element manage its own internal thread ?
> > How would you write elements that are more performant or easier to write
> > using threads, like the v4lsrc elements or multifdsink ?
> >
> v4lsrc is dead easy to write in a non-blocking way.
Yes, under the premise that it spawns its own thread like it's doing
now.
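For concreteness, here's a minimal sketch (plain C/GLib, not taken from
any real element) of the pattern I mean: the element owns a private
thread that sits in the blocking call and hands data over a GAsyncQueue,
so only that thread ever blocks on the library. blocking_lib_read() is
a hypothetical stand-in for the wrapped library call.

  #include <glib.h>

  typedef struct {
    GAsyncQueue *queue;      /* filled by the reader thread, drained below */
    volatile gboolean running;
  } WrapCtx;

  extern gpointer blocking_lib_read (void);  /* hypothetical blocking call */

  static gpointer
  reader_thread (gpointer data)
  {
    WrapCtx *ctx = data;

    while (ctx->running)
      /* may block for a long time, but only this thread cares */
      g_async_queue_push (ctx->queue, blocking_lib_read ());
    return NULL;
  }

  static void
  wrap_start (WrapCtx *ctx)
  {
    ctx->queue = g_async_queue_new ();
    ctx->running = TRUE;
    g_thread_create (reader_thread, ctx, TRUE, NULL);
  }

  /* the element's streaming code never touches the library directly;
   * it just pops the next chunk off the queue */
  static gpointer
  wrap_get_next (WrapCtx *ctx)
  {
    return g_async_queue_pop (ctx->queue);
  }

Shutdown still needs care (a sentinel pushed into the queue, or a timed
pop), which is exactly the extra element-side complexity I'm asking
about.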
> > How do you make up for the performance loss of choosing a non-optimal
> > read/write/select model ?
> >
> Pardon me? I don't see a performance loss and a non-optimal select model.
> In fact the biggest performance loss in my 0.9 code is the usage of
> GMainLoop. I think this might be due to its heavy use of locking, but I
> haven't investigated it further.
I'm merely wondering about the traditional trade-off between blocking
and non-blocking. Since I don't really know what you mean by "non-
blocking" (and I'm assuming you mean something else than the traditional
meaning), feel free to explain that a little. I'm curious how you get
the good properties of non-blocking models without the bad ones.
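By "the traditional meaning" I mean the usual O_NONBLOCK-plus-poll()
style; roughly something like this (just a sketch, fd is assumed to be
an already-open device or socket):

  #include <fcntl.h>
  #include <poll.h>
  #include <unistd.h>

  static void
  pump (int fd)
  {
    char buf[4096];
    struct pollfd pfd = { fd, POLLIN, 0 };

    /* never let read() park us in the kernel */
    fcntl (fd, F_SETFL, fcntl (fd, F_GETFL) | O_NONBLOCK);

    for (;;) {
      /* the 100 ms timeout is the latency-vs-wakeups knob:
       * smaller means lower latency but more useless wakeups */
      int ret = poll (&pfd, 1, 100);
      if (ret < 0)
        break;
      if (ret == 0)
        continue;                          /* timed out, nothing to read */
      ssize_t n = read (fd, buf, sizeof (buf));
      if (n <= 0)
        break;
      /* ... process buf[0..n) ... */
    }
  }

That's the model whose trade-offs I'm asking about; if you mean
something different, please spell it out.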
> > How do you still achieve low-latency if you have to actively loop to
> > check for data processing ? How do you decide on the latency-vs-
> > performance tradeoff ?
> >
> I'm not sure why you think this is so much better with threads. Threads
> are just an uncontrollable and suboptimal way to have the decision what to
> do next done by someone else and freeing you of the burden to decide for
> yourself by making the kernel decide what to schedule next instead of
> doing it yourself.
> I've always wondered why that would be preferable.
Because the kernel is probably better at this scheduling than we are.
I think it's better with threads because threads can decide for
themselves when they lock/sleep/..., they can use condition variables
to signal each other, and so on. These are all immediate. Any other
scheduling mechanism we could implement that doesn't use threads can
never hope to achieve this low latency, and is always bounded by the
time it takes for the currently running thread of execution to pass
control back to your scheduler.
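To make the "immediate" part concrete, this is the kind of thing I mean
(plain pthreads, just a sketch): the consumer sleeps in the kernel and
is woken the moment the producer signals, with no polling interval in
between.

  #include <pthread.h>

  static pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
  static int have_data = 0;

  static void
  producer_hand_over (void)
  {
    pthread_mutex_lock (&lock);
    have_data = 1;
    pthread_cond_signal (&ready);          /* waiter wakes up right away */
    pthread_mutex_unlock (&lock);
  }

  static void
  consumer_wait (void)
  {
    pthread_mutex_lock (&lock);
    while (!have_data)
      pthread_cond_wait (&ready, &lock);   /* sleeps; no active looping */
    have_data = 0;
    pthread_mutex_unlock (&lock);
  }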
But maybe that's a trade-off you're willing to make anyway, I don't know
what your view is on that.
> > I think nobody disagrees that the 0.7 cycle was a disastrous mess.
> >
> Actually, I do disagree. While it certainly could be improved, it was a
> lot better than before.
Well, I don't know. Lots of big things were broken indefinitely ...
> But you're right that it would have been nice if people had actually
> completed the stuff they wanted to do on time.
... causing lots of people to be completely blocked from delivering
their work because there was no way to test what they were working on.
> > But it should be possible to have *some* measure of
> > stability and workingness throughout the development.
> >
> I think you'll have a hard time having this. To measure stability and
> workingness, you need to do tests. GStreamer's testing framework is as bad
> as it was during 0.7
Yes, so this symptom needs to be addressed. The choice is between "do
we throw ourselves blindly into a messy development cycle with months of
brokenness again" and "do we figure out a good way to make sure that
we're actually progressing in some sort of direction" ?
> and there's still the same crux with getting app developers
> to switch their app and test it on an unstable branch that is quickly
> evolving.
We could do better at providing small applications that test specific
functionality (for example, "seek") and at guaranteeing that they work
before inflicting this kind of pain on external app developers. The
fact that we don't do this is one of the reasons app developers can't be
enticed to switch and test against a devel branch of ours - we just
don't treat them well.
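To give an idea of the size I have in mind, here's a sketch against the
current 0.8-style API (gst_parse_launch(), gst_bin_iterate()); a real
"seek" test would exercise the seek interface instead of just running a
pipeline to completion:

  #include <gst/gst.h>

  int
  main (int argc, char *argv[])
  {
    GError *error = NULL;
    GstElement *pipeline;

    gst_init (&argc, &argv);

    pipeline = gst_parse_launch ("fakesrc num-buffers=100 ! fakesink",
                                 &error);
    if (pipeline == NULL)
      return 1;

    gst_element_set_state (pipeline, GST_STATE_PLAYING);
    while (gst_bin_iterate (GST_BIN (pipeline)))
      ;                                    /* iterate until done */
    gst_element_set_state (pipeline, GST_STATE_NULL);

    return 0;
  }

Something this small can run under "make check" after every commit, so
external developers only ever see a branch where at least these cases
are known to work.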
> So I'm not sure which measurements you think of here.
Simple tests we all agree should work; benchmarks; small sample
applications; whether or not the API and design of the internals match
the use cases we want to solve; ... There's lots of stuff we could do
to make sure we're not just blindly following our instincts in the hope
that we end up with something that works by magic, hard work, or
statistical coincidence.
Thomas
Dave/Dina : future TV today ! - http://www.davedina.org/
<-*- thomas (dot) apestaart (dot) org -*->
Can we, like, have a dude conversation ?
I'm begging here !
<-*- thomas (at) apestaart (dot) org -*->
URGent, best radio on the net - 24/7 ! - http://urgent.fm/