[gst-devel] INCSCHED1 explanation

Erik Walthinsen omega at temple-baptist.com
Tue May 1 23:45:23 CEST 2001


I'm gonna try to explain INCSCHED1 now, since it's approaching final merge
time, and people need to understand what it's doing before they try to
move their apps over to the new pipeline manipulation style.

SCHEDULER CALLBACKS
===================

First of all, the scheduler is entirely callback-driven now.  Various
events which require the schedule to be updated or otherwise dealt with
now go through macros that call into the vtable in GstSchedule.  These
events are:

add_element
remove_element
enable_element
disable_element
lock_element
unlock_element
pad_connect
pad_disconnect
iterate

These callbacks are accessed via the sched pointer of an element or pad.
These pointers are what associate a given element or pad with a
scheduler.
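
To make this concrete, here's roughly what the struct ends up looking
like.  Take the field names and exact signatures as my shorthand rather
than gospel; gst/gstscheduler.h is the authority:

  struct _GstSchedule {
    GstObject object;

    GList *elements;    /* all elements managed by this scheduler */
    GList *chains;      /* the scheduling chains (see below) */

    /* the callback vtable the event macros dispatch through */
    void     (*add_element)     (GstSchedule *sched, GstElement *element);
    void     (*remove_element)  (GstSchedule *sched, GstElement *element);
    void     (*enable_element)  (GstSchedule *sched, GstElement *element);
    void     (*disable_element) (GstSchedule *sched, GstElement *element);
    void     (*lock_element)    (GstSchedule *sched, GstElement *element);
    void     (*unlock_element)  (GstSchedule *sched, GstElement *element);
    void     (*pad_connect)     (GstSchedule *sched, GstPad *srcpad,
                                 GstPad *sinkpad);
    void     (*pad_disconnect)  (GstSchedule *sched, GstPad *srcpad,
                                 GstPad *sinkpad);
    gboolean (*iterate)         (GstSchedule *sched);
  };

  /* the event macros then boil down to something like: */
  #define GST_SCHEDULE_ADD_ELEMENT(sched,element) \
    ((sched)->add_element((sched),(element)))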


SETTING THE SCHED POINTER
=========================

When an element is first created, its sched pointer is NULL.  The Pipeline
and Thread fix this right away by creating a schedule themselves, since
they are MANAGING elements that are presumed to have some control over
their contexts.

When an element is added to a container with gst_bin_add(), a call is made
to gst_bin_set_element_sched().  This function handles the possibility of
adding another Bin to the bin.  If the element is a Bin, and it isn't
already a MANAGER itself, it recursively calls itself on each of the
children.  If the element added is just a simple Element, it calls
GST_SCHEDULE_ADD_ELEMENT to finally add the element to the schedule.

This is specifically designed so that one can construct a complete
sub-pipeline in a Bin, then add it in one call to a MANAGER element and
have it do the right thing.
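
For example, something like the following should now just work (fakesrc
and fakesink are only for illustration, and I'm quoting the API from
memory, so double-check the calls):

  #include <gst/gst.h>

  int
  main (int argc, char *argv[])
  {
    GstElement *pipeline, *bin, *src, *sink;

    gst_init (&argc, &argv);

    pipeline = gst_pipeline_new ("pipeline");             /* a MANAGER */
    bin      = gst_bin_new ("sub");                       /* a plain Bin */
    src      = gst_elementfactory_make ("fakesrc", "src");
    sink     = gst_elementfactory_make ("fakesink", "sink");

    /* build the complete sub-pipeline inside the Bin first... */
    gst_bin_add (GST_BIN (bin), src);
    gst_bin_add (GST_BIN (bin), sink);
    gst_pad_connect (gst_element_get_pad (src, "src"),
                     gst_element_get_pad (sink, "sink"));

    /* ...then one gst_bin_add() hands the whole thing over to the
       pipeline's scheduler in a single call */
    gst_bin_add (GST_BIN (pipeline), bin);

    return 0;
  }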

When an element is removed from a Bin, a similar process occurs with
gst_bin_unset_element_sched(), which recursively removes the elements from
their scheduler, stopping at MANAGER boundaries.
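
In rough pseudo-C, the descent looks something like this (the MANAGER
flag test and the children field are approximations of the real code):

  static void
  set_element_sched (GstElement *element, GstSchedule *sched)
  {
    GList *children;

    if (GST_IS_BIN (element)) {
      /* a MANAGER Bin has its own scheduler, so stop recursing here */
      if (GST_FLAG_IS_SET (element, GST_BIN_FLAG_MANAGER))
        return;
      /* a plain Bin has no context of its own: recurse into the children */
      for (children = GST_BIN (element)->children; children;
           children = g_list_next (children))
        set_element_sched (GST_ELEMENT (children->data), sched);
    } else {
      /* a simple Element finally lands in the schedule */
      GST_SCHEDULE_ADD_ELEMENT (sched, element);
    }
  }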


SCHEDULER INTERNALS
===================

Internally, the scheduler uses the concept of 'chains' of elements.  The
name is a poor choice, since it's easily confused with 'chain-based
elements'.  Suggestions are welcome; 'block' is the best alternative I
can think of right now, and it kind of sucks.

gst_schedule_add_element() is the default implementation of
GST_SCHEDULE_ADD_ELEMENT.  It begins by explicitly removing the element
from whatever schedule it's currently in, though in practice this should
never be necessary.  It sets the sched pointer, then checks whether the
element is a Bin, and bails if it is, because Bins are irrelevant from a
scheduling point of view.  The element is then added to the scheduler's
list of elements so it can be kept track of.

Now, the element must be added to a chain.  For various reasons, a new
element is *always* given its own chain from the very beginning.  This is
accomplished by creating a new chain with gst_schedule_chain_new(sched),
and adding the element to the chain with gst_schedule_chain_add_element().
The new chain will have the element in its disabled list when it's first
added.
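
Sketched out, that first half of the default implementation looks
roughly like this (the sched and list fields are approximations of the
real structs):

  static void
  gst_schedule_add_element (GstSchedule *sched, GstElement *element)
  {
    GstScheduleChain *chain;

    /* shouldn't ever be needed, but pull it out of any old schedule first */
    if (element->sched != NULL)
      GST_SCHEDULE_REMOVE_ELEMENT (element->sched, element);

    element->sched = sched;       /* associate the element with us */

    if (GST_IS_BIN (element))     /* Bins are irrelevant to scheduling */
      return;

    sched->elements = g_list_prepend (sched->elements, element);

    /* every new element starts out alone, in its own chain, disabled */
    chain = gst_schedule_chain_new (sched);
    gst_schedule_chain_add_element (chain, element);

    /* next: walk the pads looking for chains to merge with (see below) */
  }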

Then it has to figure out if the element is actually already connected to
something relevant.  For each of the pads it sets the pad's scheduler
(since it's a convenient time to do it) and determines if the pad's peer
is a viable candidate for merging scheduling chains.

The decision of whether an element is a viable candidate used to be
complex in earlier iterations of INCSCHED1 and has gotten much simpler
over time, but the rules and the reasons behind them must be carefully
understood for correctness:

First of all, we make the decision that an element can only exist in a
scheduling chain if it also belongs to that chain's parent scheduler.
This means that a queue in one thread and its peer in another will not be
in the same chain.  This is desirable, because a) the queue is DECOUPLED,
and therefore does not carry the usual single-context semantics, and b) it
keeps the single-parent concept even in scheduling, in case the scheduler
decides it's going to do something smart.  And if a scheduler does
decide to give the element a context anyway, only one scheduler can be
the one to give it.

This boils down to the fact that the recently added element and the
parent element of each of its pads' peers must have the same scheduler
in order to be joined into one chain.  This seems simple, but consider
the case where an element is connected to another that hasn't been added
to the same Bin yet.  The pair won't pass this test now, but when the
peer is added to the Bin later, the same test will run from the other
side and pass, which has the same effect.
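
Per pad, the check boils down to something like this
(maybe_merge_with_peer() is a made-up name, and the pad accessor macros
are approximations):

  static void
  maybe_merge_with_peer (GstSchedule *sched, GstElement *element, GstPad *pad)
  {
    GstPad *peer = GST_PAD_PEER (pad);
    GstElement *peerelement;

    if (peer == NULL)
      return;
    peerelement = GST_ELEMENT (GST_PAD_PARENT (peer));

    /* viable only if the peer's parent already lives in this same
       scheduler; if it doesn't yet, this same test will run (and pass)
       from the other side when the peer gets added to the Bin later */
    if (peerelement->sched == sched)
      gst_schedule_chain_elements (sched, element, peerelement);
  }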

The other half of the issue, and the whole reason for scheduling chains,
is that DECOUPLED elements are supposed to be the boundaries of
scheduling chains.  This has recently been UN-implemented, because we
don't use it at the moment, and it could be argued that we never will
with properly designed pipelines, but I don't agree with that.  We will
need to figure out the best way to deal with them, for instance by
guaranteeing that they only get added to a chain on their right-hand
side.  The problem with that is:  what if they have multiple src pads?
First-come, first-served?  Comments welcome.

When it is decided that the two elements should indeed be in the same
chain, it's gst_schedule_chain_elements()'s job to pull this off.  It
first finds the chain for each of the two elements by searching the
scheduler's list of chains, checking each chain's lists of both disabled
and enabled elements.  If the resulting chain pointers match, the
routine exits, since the job is already done.

There are three cases implemented in gst_schedule_chain_elements(), but I
suspect that some of them will never run.  The first, and most likely to
never be used, is the case where neither element is associated with a
chain.  In this case it creates a new chain and adds the two elements to
it.

If one element has a chain and the other doesn't, the latter is simply
added to the former's chain.  If they both have chains, the contents of
one chain are added to the other, and the emptied chain is then removed.
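
All together, the three cases look roughly like this (find_chain() and
merge_chains() are made-up helper names, and gst_schedule_chain_destroy()
is my guess at the destructor):

  static void
  gst_schedule_chain_elements (GstSchedule *sched,
                               GstElement *element1, GstElement *element2)
  {
    GstScheduleChain *chain1 = find_chain (sched, element1);  /* searches all  */
    GstScheduleChain *chain2 = find_chain (sched, element2);  /* chains' lists */

    if (chain1 == chain2 && chain1 != NULL)
      return;                                 /* already together, done */

    if (chain1 == NULL && chain2 == NULL) {
      /* probably dead code: a fresh chain for the both of them */
      chain1 = gst_schedule_chain_new (sched);
      gst_schedule_chain_add_element (chain1, element1);
      gst_schedule_chain_add_element (chain1, element2);
    } else if (chain1 == NULL) {
      gst_schedule_chain_add_element (chain2, element1);
    } else if (chain2 == NULL) {
      gst_schedule_chain_add_element (chain1, element2);
    } else {
      /* both chained: dump chain2's contents into chain1, drop chain2 */
      merge_chains (chain1, chain2);
      gst_schedule_chain_destroy (chain2);
    }
  }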


PAD CONNECT/DISCONNECT
======================

When pads are connected, gst_schedule_pad_connect() is called.  This
function performs basically the same function as the loop in
gst_schedule_add_element, on the single pair of elements that are being
connected.
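
In sketch form (same caveats about accessor names as before):

  static void
  gst_schedule_pad_connect (GstSchedule *sched, GstPad *srcpad, GstPad *sinkpad)
  {
    GstElement *srcelement  = GST_ELEMENT (GST_PAD_PARENT (srcpad));
    GstElement *sinkelement = GST_ELEMENT (GST_PAD_PARENT (sinkpad));

    /* the same viability test as above, on just this one pair */
    if (srcelement->sched == sinkelement->sched)
      gst_schedule_chain_elements (sched, srcelement, sinkelement);
  }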

One potential issue is that typically only the source element's scheduler
is notified of a pad connection.  If they are in the same scheduler
anyway, nothing else needs to be done.  If they aren't, there's the
potential problem that the sink-side scheduler is never informed, and thus
can't generate the schedule.  This may have to be rethought so that
GST_SCHEDULE_PAD_CONNECT is called twice if the src and sink's schedulers
don't match.  This will also require some more thought as to when the
schedule is actually generated for a chain.

When pads are disconnected, some more interesting stuff occurs.  First,
the chain the elements belong to is simply destroyed.  Then a new chain
is created for the src element, and gst_schedule_chain_recursive_add()
is used to construct it.  Lastly, if the sink element didn't end up in
that first chain, the process is repeated with a new chain for the sink
element.
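
Roughly (find_chain() is the same made-up helper as above):

  static void
  gst_schedule_pad_disconnect (GstSchedule *sched, GstPad *srcpad,
                               GstPad *sinkpad)
  {
    GstElement *srcelement  = GST_ELEMENT (GST_PAD_PARENT (srcpad));
    GstElement *sinkelement = GST_ELEMENT (GST_PAD_PARENT (sinkpad));
    GstScheduleChain *chain;

    /* the old chain is simply thrown away... */
    chain = find_chain (sched, srcelement);
    if (chain != NULL)
      gst_schedule_chain_destroy (chain);

    /* ...and rebuilt from the src side */
    chain = gst_schedule_chain_new (sched);
    gst_schedule_chain_recursive_add (chain, srcelement);

    /* if the sink didn't get picked up in that chain, it gets its own */
    if (find_chain (sched, sinkelement) == NULL) {
      chain = gst_schedule_chain_new (sched);
      gst_schedule_chain_recursive_add (chain, sinkelement);
    }
  }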

gst_schedule_chain_recursive_add() is similar to gst_schedule_add_element,
except that it recurses.  For each element that's found, it checks all
peer elements to see if they're candidates, and if so, recurses into those
to check for more elements.  This will find a complete scheduling chain.
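
A sketch, with the usual disclaimer that the pads field and chain->sched
are approximations:

  static void
  gst_schedule_chain_recursive_add (GstScheduleChain *chain, GstElement *element)
  {
    GList *pads;

    gst_schedule_chain_add_element (chain, element);

    for (pads = element->pads; pads; pads = g_list_next (pads)) {
      GstPad *pad = GST_PAD (pads->data);
      GstPad *peer = GST_PAD_PEER (pad);
      GstElement *peerelement;

      if (peer == NULL)
        continue;
      peerelement = GST_ELEMENT (GST_PAD_PARENT (peer));

      /* same viability test as before; recurse into any candidate that
         hasn't already been swept into a chain */
      if (peerelement->sched == chain->sched &&
          find_chain (chain->sched, peerelement) == NULL)
        gst_schedule_chain_recursive_add (chain, peerelement);
    }
  }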


ENABLING AND DISABLING
======================

Once the chains are constructed, the scheduler is driven by the state
system.  Elements are enabled and disabled in gst_element_change_state(),
GstElement's class implementation of state changes.  On a transition to
PLAYING it calls GST_SCHEDULE_ENABLE_ELEMENT, and on a transition away
from PLAYING it disables the element.

The enable and disable functions provided by the scheduler each search for
that element's chain, and call gst_schedule_chain_[en|dis]able_element().
These functions simply move the element between the disabled and enabled
lists, and call the gst_schedule_cothreaded_chain() function to
recalculate the schedule for that chain.
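
The enable half looks roughly like this (disable is the mirror image;
the list field names are approximations):

  static void
  gst_schedule_enable_element (GstSchedule *sched, GstElement *element)
  {
    GstScheduleChain *chain = find_chain (sched, element);

    if (chain == NULL)
      return;

    /* shuffle the element from the disabled list over to the enabled one */
    chain->disabled = g_list_remove (chain->disabled, element);
    chain->elements = g_list_prepend (chain->elements, element);

    /* the chain's schedule is now stale; regenerate it */
    gst_schedule_cothreaded_chain (sched, chain);
  }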


SCHEDULE GENERATION
===================

Currently only one type of schedule is used, the cothreaded variety.  This
is because it's the simplest to generate, and thus reduces the number of
things to go wrong when everything else is changing at the same time.
Future work will be to create optimized scheduling code that both does
final plan generation more incrementally and uses the various tricks to
reduce the overhead of cothread switches and such.

The generator starts by initializing a cothread context if one doesn't
exist yet.  It then loops through all enabled elements in the chain,
deciding what to do to them.

The first thing is to determine what kind of wrapper function is
necessary to run the element.  If the element is loop-based, it uses
gst_bin_loopfunc_wrapper.  If it's a source, it gets a source wrapper;
otherwise it gets a chain wrapper.  DECOUPLED elements don't get
wrappers at all.

Then, for each pad in the element, check the type of peer.  If the peer is
DECOUPLED, we need to set up the push or pull function proxies as either
the chain or get functions of the peer.  Otherwise we set up the
cothreaded proxy functions.

Finally, if the element is to be cothreaded, we construct a cothread for
it if one doesn't already exist, and set the correct wrapper function as
the entry point for the cothread.
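
Pulling those three steps together, the generator is shaped something
like this.  The wrapper names are the real ones from gstbin.c, but the
field names and the exact cothread calls are from memory, so don't quote
me:

  static void
  gst_schedule_cothreaded_chain (GstSchedule *sched, GstScheduleChain *chain)
  {
    GList *elements;

    /* lazily set up the cothread context for this scheduler */
    if (sched->context == NULL)
      sched->context = cothread_init ();

    for (elements = chain->elements; elements;
         elements = g_list_next (elements)) {
      GstElement *element = GST_ELEMENT (elements->data);
      cothread_func wrapper;

      if (GST_FLAG_IS_SET (element, GST_ELEMENT_DECOUPLED))
        continue;                              /* DECOUPLED: no wrapper */

      if (element->loopfunc != NULL)
        wrapper = gst_bin_loopfunc_wrapper;    /* loop-based element */
      else if (element->numsinkpads == 0)
        wrapper = gst_bin_src_wrapper;         /* source element */
      else
        wrapper = gst_bin_chain_wrapper;       /* chain-based element */

      /* (the per-pad proxy setup for DECOUPLED vs. cothreaded peers
         goes here) */

      if (element->threadstate == NULL)
        element->threadstate = cothread_create (sched->context);
      cothread_setfunc (element->threadstate, wrapper, 0, (char **) element);
    }
  }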


ITERATION
=========

When it comes time to actually run the pipeline, the schedule's iterate()
function does the job.  Right now it walks through each of the chains and
finds an entry point to jump into.  The entry point is the first element
on the list of enabled elements in the chain that is *not* DECOUPLED.  It
simply switches into this element after setting the COTHREAD_STOPPING bit.

Each element is somehow responsible for checking this COTHREAD_STOPPING
bit (in practice its wrapper function does it) and running through only
one iteration of itself before switching back to cothread 0, which is
the cothread that iterate() runs in.  This enables the iterate() routine
to actually do only one iteration, rather than spinning forever, and it
forms the basis for state changes later on.
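
In sketch form (flag and field names approximate):

  static gboolean
  gst_schedule_iterate (GstSchedule *sched)
  {
    GList *chains;

    for (chains = sched->chains; chains; chains = g_list_next (chains)) {
      GstScheduleChain *chain = (GstScheduleChain *) chains->data;
      GstElement *entry = NULL;
      GList *elements;

      /* entry point: the first enabled element that isn't DECOUPLED */
      for (elements = chain->elements; elements;
           elements = g_list_next (elements)) {
        if (!GST_FLAG_IS_SET (GST_ELEMENT (elements->data),
                              GST_ELEMENT_DECOUPLED)) {
          entry = GST_ELEMENT (elements->data);
          break;
        }
      }
      if (entry == NULL)
        continue;

      /* tell it to run exactly one iteration, then jump in */
      GST_FLAG_SET (entry, GST_ELEMENT_COTHREAD_STOPPING);
      cothread_switch (entry->threadstate);
    }
    return TRUE;
  }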

      Erik Walthinsen <omega at temple-baptist.com> - System Administrator
        __
       /  \                GStreamer - The only way to stream!
      |    | M E G A        ***** http://gstreamer.net/ *****
      _\  /_