[gst-devel] Incremental scheduler
Erik Walthinsen
omega at temple-baptist.com
Mon Feb 19 22:30:21 CET 2001
OK, now I'm going to try to explain what's going on with the incremental
scheduling bit.
First of all, the reason this is being done: we are going to be dropping
the requirement that the pipeline be static in the READY state. This is
an invalid requirement these days, since there are things like dynamic
autoplugging, pipeline reconfiguration, and even hardware devices that
need to be queried.
This means that scheduling decisions can't be made during the NULL->READY
transition. We can't even do it during the READY->PLAYING transition. In
fact, with pipeline reconfiguration, there is no single time we can do it.
The pipeline may be in an arbitrary mixed state at any given time, and the
scheduler is just going to have to keep up. That means that every change
of interest is going to have to be dealt with on the spot, hence the term
'incremental'.
OK, now, how did scheduling work before? First of all, there was the
manager hierarchy. Any Bin that had its own process would become the
manager of any child, grandchild, etc. This was a recursive process that
happened at plan generation time (NULL->READY). From there, "chains" of
elements were constructed, bounded by DECOUPLED elements. Each chain
contains the two bounding elements as well as everything in the middle.
The chain is found by simply recursing through the pad connections. Note
that a DECOUPLED element is owned by the chain attached to each of its
pads, i.e. it belongs to one chain per pad. This means that DECOUPLED
elements can't be cothreaded, which in turn means that they must be
chain/get-based.
Once the chains are built, they are walked over and scheduling decisions
are made, and the actual function pointers are filled in. From this point
the schedule is ready to go.
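To make the chain idea concrete, here's a rough sketch of how such a
chain can be found by recursing through the pad connections. The types
here (Element, Pad, Sched, and later Chain) are simplified stand-ins I'm
making up for illustration, not the real GStreamer structures; the Sched
struct gets filled in a bit further down.

  #include <glib.h>

  typedef struct _Sched   Sched;      /* defined further down */
  typedef struct _Element Element;
  typedef struct _Pad     Pad;

  struct _Pad {
    Element *parent;        /* element this pad belongs to */
    Pad     *peer;          /* connected pad, or NULL */
  };

  struct _Element {
    const char *name;
    gboolean    decoupled;  /* stand-in for the DECOUPLED flag */
    gboolean    is_bin;
    gboolean    is_manager; /* bins that have their own schedule */
    GList      *pads;       /* list of Pad* */
    GList      *children;   /* list of Element*, only used by bins */
    Sched      *sched;      /* schedule managing this element */
  };

  /* Collect every element reachable through pad connections into one
   * chain.  DECOUPLED elements are added (they bound the chain) but are
   * not recursed through, so the walk stops at them. */
  static void
  chain_recurse (Element *element, GList **chain)
  {
    GList *walk;

    if (g_list_find (*chain, element))
      return;                         /* already part of this chain */

    *chain = g_list_append (*chain, element);

    if (element->decoupled)
      return;                         /* chain boundary, don't cross */

    for (walk = element->pads; walk; walk = walk->next) {
      Pad *pad = walk->data;

      if (pad->peer != NULL)
        chain_recurse (pad->peer->parent, chain);
    }
  }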
Now, the new system gets rid of the manager concept, but replaces it with
the very similar 'scheduler' field. A GstSchedule object is created by
any Bin that is a manager. Then any time an element is added to a Bin,
the sched pointer is copied to the new child. If the new child is itself
a Bin, the new sched pointer is inserted only if that bin isn't a
manager itself (i.e. has its own scheduler). (FIXME: it currently
recurses into a MANAGER Bin even though that's probably a waste of time)
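As a sketch, reusing the made-up types from above (so not the real
gstbin.c code), the propagation of the sched pointer could look roughly
like this. Note that this shows the intended behaviour of leaving a
manager bin alone, which per the FIXME above isn't quite what the current
code does:

  static void
  element_set_sched (Element *element, Sched *sched)
  {
    GList *walk;

    /* a managing bin created its own schedule, so leave it alone */
    if (element->is_bin && element->is_manager)
      return;

    element->sched = sched;

    /* push the pointer down into non-manager child bins */
    if (element->is_bin) {
      for (walk = element->children; walk; walk = walk->next)
        element_set_sched ((Element *) walk->data, sched);
    }
  }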
In addition to the setting of the sched pointer, each element is then
added to the schedule itself. This is accomplished via a macro that calls
the add_element function pointer in the GstSchedule. This callback is
used by the scheduler to maintain its list of elements that are managed by
the scheduler itself. A similar callback is available for removing
elements from the scheduler. The logic for this in gstbin.c is a little
messy, and could use some cleanup, but it does work...
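Roughly, the schedule object and its callbacks would look something like
the sketch below, again with made-up names rather than the actual fields
and macros in gstscheduler.h. A Chain here is nothing more than a list of
member elements:

  typedef struct {
    GList *elements;            /* Element* members of this chain */
  } Chain;

  struct _Sched {
    GList *elements;            /* all elements managed by this schedule */
    GList *chains;              /* list of Chain* */
    void (*add_element)    (Sched *sched, Element *element);
    void (*remove_element) (Sched *sched, Element *element);
  };

  /* macros the bin code calls, so the scheduler implementation can be
   * swapped out behind them later */
  #define SCHED_ADD_ELEMENT(sched,elem) \
    ((sched)->add_element ((sched), (elem)))
  #define SCHED_REMOVE_ELEMENT(sched,elem) \
    ((sched)->remove_element ((sched), (elem)))

  static void
  sched_add_element (Sched *sched, Element *element)
  {
    sched->elements = g_list_append (sched->elements, element);
  }

  static void
  sched_remove_element (Sched *sched, Element *element)
  {
    sched->elements = g_list_remove (sched->elements, element);
  }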
Now, the current code manages chains upon add/remove of an element, but
I'm probably going to change it so that chains are only updated on
enable/disable of elements. enable/disable will happen on state changes,
signaling whether the element is actually able to run.
On the other hand, this might be the wrong time to do it. It might be
better to manage the chains at add/remove still, but find some other way
to enable/disable them. But for now, I'm going to set it up as above,
since it's easier, and we can change it transparently later anyway (the
joys of data hiding).
Now, the chains themselves are built incrementally within gstscheduler.c.
When you add an element to a bin, it loops through the pads and checks to
see if there's a peer on them that's eligible to be put into the same
chain. This check first sees whether the peer element is DECOUPLED (in
which case it always qualifies), then whether it's in the same schedule
(in which case it also qualifies).
Now, it could be argued that the pad's peer should *always* meet the
requirements, but consider the case where a pad connection is made before
the peer element has been added to the proper Bin. It's currently legal,
and remains legal. The element isn't in the same schedule yet, so we
can't do much to it.
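In sketch form, still with the made-up types (and with
sched_chain_elements shown in the next sketch), the eligibility check
when an element is added comes down to something like this:

  static void sched_chain_elements (Sched *sched, Element *a, Element *b);

  static void
  sched_add_element_chains (Sched *sched, Element *element)
  {
    GList *walk;

    for (walk = element->pads; walk; walk = walk->next) {
      Pad *pad = walk->data;
      Element *peer;

      if (pad->peer == NULL)
        continue;
      peer = pad->peer->parent;

      /* DECOUPLED peers always qualify; otherwise the peer must already
       * be managed by the same schedule */
      if (peer->decoupled || peer->sched == sched)
        sched_chain_elements (sched, element, peer);
    }
  }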
When an element is added that needs to be joined to a chain,
gst_schedule_chain_elements is called. This function first searches for
the chain that holds each of the two elements given as arguments. What it
does depends on whether these chains exist or not. If neither element is
associated with a chain, a chain is created and the two elements are
added. If only one has a chain, the other is added to that chain. If
both have chains, the second element's chain is cannibalized and added to
the first chain.
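Here's a sketch of that joining logic. chain_find is a helper I'm
assuming for illustration; it just looks up the chain that already
contains a given element, or returns NULL:

  static Chain *
  chain_find (Sched *sched, Element *element)
  {
    GList *walk;

    for (walk = sched->chains; walk; walk = walk->next) {
      Chain *chain = walk->data;

      if (g_list_find (chain->elements, element))
        return chain;
    }
    return NULL;
  }

  static void
  sched_chain_elements (Sched *sched, Element *a, Element *b)
  {
    Chain *chain_a = chain_find (sched, a);
    Chain *chain_b = chain_find (sched, b);

    if (chain_a == NULL && chain_b == NULL) {
      /* neither element has a chain yet: create one holding both */
      Chain *chain = g_new0 (Chain, 1);
      chain->elements = g_list_append (chain->elements, a);
      chain->elements = g_list_append (chain->elements, b);
      sched->chains = g_list_append (sched->chains, chain);
    } else if (chain_b == NULL) {
      chain_a->elements = g_list_append (chain_a->elements, b);
    } else if (chain_a == NULL) {
      chain_b->elements = g_list_append (chain_b->elements, a);
    } else if (chain_a != chain_b) {
      /* both have chains: cannibalize the second into the first */
      chain_a->elements = g_list_concat (chain_a->elements,
                                         chain_b->elements);
      sched->chains = g_list_remove (sched->chains, chain_b);
      g_free (chain_b);
    }
  }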
If an element is removed, it is currently simply removed from the chain.
FIXME: This is potentially a problem, since it doesn't take into account
the fact that removing an element is likely to split a chain.
The other events that significantly impact the chains are pad connection
and disconnection. On connection, we can pretty simply check to see if
the elements can be in the same chain, and join them.
Disconnect is a bit harder. Currently we destroy the chain that the
elements are in, then go walking through the connection graph from the
first element, constructing a new chain. If, by the end of this recursion,
the second element isn't already in the new chain (reached by another route
through the graph), we recurse from that element and create a second chain.
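As a sketch, assuming the pads have already been unpeered so that
chain_recurse from the first sketch won't cross the broken connection
anymore:

  static void
  sched_pad_disconnect (Sched *sched, Element *a, Element *b)
  {
    Chain *old = chain_find (sched, a);
    Chain *chain;

    /* throw away the chain the two elements used to share */
    if (old != NULL) {
      sched->chains = g_list_remove (sched->chains, old);
      g_list_free (old->elements);
      g_free (old);
    }

    /* rebuild a chain by walking the graph from the first element */
    chain = g_new0 (Chain, 1);
    chain_recurse (a, &chain->elements);
    sched->chains = g_list_append (sched->chains, chain);

    /* if the second element wasn't reached by another route through the
     * graph, it gets a chain of its own */
    if (!g_list_find (chain->elements, b)) {
      Chain *second = g_new0 (Chain, 1);
      chain_recurse (b, &second->elements);
      sched->chains = g_list_append (sched->chains, second);
    }
  }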
Next, I'll try to describe how eos/empty/flush fits into this, and how one
would actually go about modifying a live pipeline.
Erik Walthinsen <omega at temple-baptist.com> - System Administrator
        __
       /  \                GStreamer - The only way to stream!
      |    | M E G A        ***** http://gstreamer.net/ *****
      _\  /_