[gst-devel] Stuff that needs doing...

Erik Walthinsen omega at temple-baptist.com
Wed Mar 7 22:36:18 CET 2001


On Wed, 7 Mar 2001, Owen Fraser-Green wrote:

> > c) CORBA inside a container, used as an inter-process[or] link.  This is
> > pretty self-contained, but runs into the problem of finding the best way
> > to proxy the Element interface.  If one does a set_arg on an element, how
> > does this data get passed to the real element living on the other side?
>
> Could you explain this one a little? I'm not too sure I get what you mean.
The idea is that the Bin that does the work uses CORBA internally to make
the connection to its "other half" on the other processor.  The RidgeRun
case for this is putting parts of the pipeline on the conjoined DSP.
There has to be some means of control communication (state change,
argument setting, signals) between the DSP and the main processor, and
CORBA seems a good way to do it.
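
A rough sketch of that control path -- assuming a hypothetical "proxy"
element whose set_arg just ships the value over an ORBit stub to the real
element on the DSP; none of these names are real GStreamer or ORBit API:

  /* stand-in for the generated CORBA stub call; purely illustrative */
  void remote_set_arg (void *stub, const char *element,
                       const char *arg, const char *value);

  typedef struct {
    void       *corba_stub;   /* object reference for the remote element */
    const char *remote_name;  /* name of the element living on the DSP   */
  } ProxyElement;

  /* instead of storing the value locally, forward it over the link */
  static void
  proxy_element_set_arg (ProxyElement *proxy,
                         const char *arg, const char *value)
  {
    remote_set_arg (proxy->corba_stub, proxy->remote_name, arg, value);
  }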

> > d) CORBA-ization of everything, in the form of making the public API for
> > GStreamer fully wrapped with ORBit.  This has the potential advantage that
> > objects are more easily movable, but that depends on how ORBit implements
> > it.  More looking at the guts of ORBit is necessary to see if that solves
> > enough problems.  It requires significant work to re-write all the
> > namespaces, and it could make direct use of the API rather hard.
>
> I have a feeling that to export all the API as individual objects would
> yield an enormous performance hit. Currently, memory buffers are shared
> between elements and I don't think this would really be possible with
> CORBA. For one thing, one of the objects could reside on a different
> machine from the buffer it is trying to access so even if there were a
> workaround this wouldn't be very efficient. I think if there is going to be
> fine-grained componentisation then the whole pipeline model needs to be
> reworked to make it work with CORBA so that arcs on the network are treated
> as data _streams_ rather than buffers.
The data-flow wouldn't be via CORBA necessarily, just the control.  But I
agree, it's very hard to see how that would work cleanly, let alone with
any performance.

> I think some kind of fine-grained event control mechanism needs to be built
> into the pipeline flow but I also agree that it's preferable to avoid
> attaching the events themselves to the buffers.
Right.  For reference, the reason for wanting to avoid that is that a
check would have to be performed for every buffer, at every single point
in the pipeline, which can get very expensive, not to mention really nasty
to code for all the plugins.
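
To illustrate the cost: if events were carried on buffers, every plugin's
chain function would need something like the following for every buffer it
sees (all names here are hypothetical):

  typedef struct { void *events; void *data; } MyBuffer;

  static void handle_pending_events (MyBuffer *buf);
  static void process_samples       (MyBuffer *buf);

  static void
  my_filter_chain (void *pad, MyBuffer *buf)
  {
    if (buf->events != NULL)        /* extra branch on the hot path...   */
      handle_pending_events (buf);  /* ...and extra code in every plugin */
    process_samples (buf);
  }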

> As I see it there are a few problems which are making the event system hard
> to design:
> 1) Seeking, requiring back propagation of the events
Shouldn't be too hard, given a simple enough event-passing mechanism.

> 2) Out-of-sequence requests originating downstream e.g. mpeg
Not sure what you mean by this.

> 3) Feedback loops e.g. echo filter leading to cyclic graphs.
I guess I'd say that if there's a possibility of a loop, the application
should be aware of it and take steps (attach callbacks to signals in the
right places) to stop it.

> 4) Multiple input elements e.g. an EOS on one of a mixer's pads while
> another input hasn't reached EOS
This would again be either the application's or the plugin's job.  A
GDAM-like mixer, for instance, would probably have callbacks attached to
the EOS signals on the sink pads of the mixer element, and would both deal
with the event and keep it from propagating further.
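
As a sketch of that, assuming sink pads emit an "eos" signal the mixer (or
the application) can hook -- the signal name and the "return TRUE to swallow
the event" convention are assumptions for illustration, not confirmed API:

  #include <gst/gst.h>

  typedef struct { gint live_inputs; } Mixer;  /* hypothetical mixer state */

  static gboolean
  mixer_pad_eos_cb (GstPad *pad, gpointer user_data)
  {
    Mixer *mixer = user_data;

    mixer->live_inputs--;   /* one input finished; keep mixing the others */
    return TRUE;            /* handled here: don't propagate the EOS      */
  }

  /* at setup time, something like:
   *   g_signal_connect (sinkpad, "eos", G_CALLBACK (mixer_pad_eos_cb), mixer);
   */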

> 5) Simultaneous events e.g. flush arriving on one pad at exactly the same
> time as an EOS. Or a seek coming the other way! Urghh!
Shouldn't be a problem, because simultaneous events can't actually happen.
Remember that everything in a given process context is synchronous.  Only
at process boundaries (i.e. queues) do things get interesting, and even
then the code is running in two separate contexts synchronously.  The
queue and any other explicit boundary elements should have internal code
to serialize stuff.  Basically, only one event or buffer will be happening
at a given moment, for any given peering.  As long as that ordering is
maintained (unless something specifically reorders it), there should be no
problems.
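
A minimal sketch of that serialization point (modern GLib style; the names
are illustrative, not real GStreamer internals): a queue-like boundary
element takes one lock around the handover, so for any given peering only
one buffer or event is in flight at a time and arrival order is preserved:

  #include <glib.h>

  typedef struct {
    GMutex  lock;    /* protects the handover between the two contexts */
    GQueue *items;   /* buffers and events, kept in arrival order      */
  } Boundary;

  static void
  boundary_push (Boundary *b, gpointer buffer_or_event)
  {
    g_mutex_lock (&b->lock);
    g_queue_push_tail (b->items, buffer_or_event);  /* ordering preserved */
    g_mutex_unlock (&b->lock);
  }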

> Problems 3) and 4) make it difficult to deal with events outside of the
> individual elements without performing transitive closures etc. - ouch! So
> then I see there are three ways for the elements to become aware of events
> in a way which is simple for them (which is more important than having a
> basic event system which makes it difficult for the element writers):
> 1) The element polls something to see if any events have occurred on every
> buffer iteration.
> 2) The element checks the status on a flag which is set up by something
> outside the element.
> 3) An event handling function within the element is magically called by
> something outside when an event occurs.
Of those, 3) is the way everything else is done, so.... <g>
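
In other words, the core would call an event handler the element registered,
much the way chain and loop functions are already hooked up.  A hypothetical
sketch (the registration call and event structure are made up here):

  typedef struct { int type; } MyEvent;   /* hypothetical event record */

  /* called by the core when an event reaches this element */
  static void
  my_filter_handle_event (void *element, MyEvent *event)
  {
    switch (event->type) {
      /* react to flush, seek, EOS, ... outside the buffer path */
      default:
        break;
    }
  }

  /* at init time, something like:
   *   my_element_set_event_handler (element, my_filter_handle_event);
   */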

> Now, the problem with 1) is that this will add a lot of overhead to the
> buffer iterations and with 2) and 3) it is practically impossible to
> synchronise with the buffer iteration. However I think it's important not
> to discount 2) and 3) altogether because at least they tell you that an
> event is coming _soon_. The element could then go into mode 1) until the
> event(s) is pinpointed. Also some might not really care exactly where the
> event occurred within the current buffer.
Remember though that everything is in the same process context, so nothing
happens at the same time.  If an event callback fires before the next
buffer comes in, that's exactly what has happened.
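
A hedged sketch of that hybrid (all names hypothetical): the out-of-band
callback only flips a flag, and the element drops into per-buffer checking
just until it has pinpointed the event:

  #include <glib.h>

  typedef struct {
    gboolean event_pending;   /* set by the callback, same process context */
  } MyElement;

  static void
  my_element_event_soon_cb (MyElement *elem)
  {
    elem->event_pending = TRUE;   /* cheap: no per-buffer cost until now */
  }

  static void
  my_element_chain (MyElement *elem, gpointer buf)
  {
    if (elem->event_pending) {
      /* mode 1): locate the event precisely around this buffer, then clear */
      elem->event_pending = FALSE;
    }
    /* normal processing of buf ... */
  }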

      Erik Walthinsen <omega at temple-baptist.com> - System Administrator
        __
       /  \                GStreamer - The only way to stream!
      |    | M E G A        ***** http://gstreamer.net/ *****
      _\  /_