[gst-devel] Stuff that needs doing...
Owen Fraser-Green
owen at discobabe.net
Wed Mar 7 15:54:59 CET 2001
Hi,
> c) CORBA inside a container, used as an inter-process[or] link. This is
> pretty self-contained, but runs into the problem of finding the best way
> to proxy the Element interface. If one does a set_arg on an element, how
> does this data get passed to the real element living on the other side?
Could you explain this one a little? I'm not too sure I get what you mean.
> d) CORBA-ization of everything, in the form of making the public API for
> GStreamer fully wrapped with ORBit. This has the potential advantage that
> objects are more easily movable, but that depends on how ORBit implements
> it. More looking at the guts of ORBit is necessary to see if that solves
> enough problems. It requires significant work to re-write all the
> namespaces, and it could make direct use of the API rather hard.
I have a feeling that exporting the entire API as individual objects would
incur an enormous performance hit. Currently, memory buffers are shared
between elements, and I don't think that would really be possible with
CORBA. For one thing, one of the objects could reside on a different
machine from the buffer it is trying to access, so even if there were a
workaround it wouldn't be very efficient. I think that if there is going to
be fine-grained componentisation then the whole pipeline model needs to be
reworked to fit CORBA, so that arcs crossing the network are treated as
data _streams_ rather than as buffers.
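To make the cost concrete, here is a rough sketch (remote_invoke() and
stream_fd are made-up placeholders, not ORBit or GStreamer API): if every
element were a remote object, each buffer handed to a proxied element would
have to be marshalled into a request, whereas a stream-style arc only
pushes the raw bytes down a connection that was negotiated once.

/* Rough sketch only: remote_invoke() and stream_fd are placeholders
 * for whatever ORB request and data socket would really be used. */
#include <unistd.h>
#include <gst/gst.h>

extern void remote_invoke (const char *op, guchar *data, guint len);
extern int  stream_fd;

/* (a) fully CORBA-ized: every buffer becomes a remote invocation,
 *     so the payload is copied and marshalled on each push */
static void
proxy_chain (GstPad *pad, GstBuffer *buf)
{
  remote_invoke ("Element::chain",
                 GST_BUFFER_DATA (buf), GST_BUFFER_SIZE (buf));
  gst_buffer_unref (buf);
}

/* (b) stream-style arc: CORBA only carries control (set_arg etc.),
 *     the data itself just flows over the connection */
static void
stream_chain (GstPad *pad, GstBuffer *buf)
{
  write (stream_fd, GST_BUFFER_DATA (buf), GST_BUFFER_SIZE (buf));
  gst_buffer_unref (buf);
}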
> 4) Event system
> This is the big one. I realized a week ago that there are more things
> that may be of interest to a plugin than just what comes in on its pads.
> Specifically, consider an mp3parse element and a file with an id3v2 tag.
> The tag is at the end of the file. In order for the parser to seek to the
> end, read the tag, and seek back, it has to know that it's got a new file.
> How does it do that? It could guess when it gets an offset 0 buffer, but
> what if you start out in the middle?
>
> The solution is some kind of event system that encompasses EOS, empty,
> flush, seeking, new-file notification, and everything else. This system
> would have to be carefully constructed to follow the dataflow left to
> right, and have the "right" behavior when going right to left (seek).
> Some have suggested attaching these events to the buffers, but that
> introduces the overhead of checking each and every buffer at every point
> in the pipeline, which we've intentionally avoided so far.
>
> In general, a plugin would have a standard handler that would simply pass
> events through. Various plugins would either override that handler or use
> some other mechanism to register for interesting events, and handle them
> that way.
>
> We need to discuss the format of these events, which events we're going to
> start with, and how the plugins would interact with them. This is
> currently my second-highest priority, behind figuring out a first pass at
> the cross-process[or] stuff.
I think some kind of fine-grained event control mechanism needs to be built
into the pipeline flow, but I also agree that it's preferable to avoid
attaching the events themselves to the buffers.
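Just to make that concrete, the following is a purely illustrative sketch
(none of these types or functions exist today) of an event that travels
with the dataflow, plus the kind of default pass-through handler described
above:

/* Purely illustrative; none of these names are existing GStreamer API. */
#include <gst/gst.h>

typedef enum {
  GST_EVENT_EOS,
  GST_EVENT_FLUSH,
  GST_EVENT_SEEK,
  GST_EVENT_NEW_MEDIA
} GstEventType;

typedef struct {
  GstEventType type;
  guint64      timestamp;   /* where in the stream the event applies */
  gpointer     data;        /* seek offset, filename, ... */
} GstEvent;

/* hypothetical: deliver an event to whatever is connected to the pad */
extern void gst_pad_send_event (GstPad *pad, GstEvent *event);

/* Default handler: an element that doesn't care simply forwards the
 * event out of all its source pads, so it keeps following the dataflow. */
static void
gst_element_default_event (GstElement *element, GstEvent *event)
{
  GList *pads;

  for (pads = element->pads; pads; pads = g_list_next (pads)) {
    GstPad *pad = GST_PAD (pads->data);

    if (gst_pad_get_direction (pad) == GST_PAD_SRC)
      gst_pad_send_event (pad, event);
  }
}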
As I see it there are a few problems which are making the event system hard
to design:
1) Seeking, requiring back-propagation of events
2) Out-of-sequence requests originating downstream, e.g. MPEG
3) Feedback loops, e.g. an echo filter, leading to cyclic graphs
4) Multiple input elements, e.g. an EOS on one of a mixer's pads while
another input hasn't reached EOS
5) Simultaneous events, e.g. a flush arriving on one pad at exactly the
same time as an EOS. Or a seek coming the other way! Urghh!
Problems 3) and 4) make it difficult to deal with events outside of the
individual elements without performing transitive closures etc. - ouch! So
I see three ways for the elements to become aware of events that are simple
for them (which is more important than having a basic event system which
makes life difficult for the element writers):
1) The element polls something on every buffer iteration to see if any
events have occurred.
2) The element checks a flag which is set by something outside the element.
3) An event handling function within the element is magically called by
something outside when an event occurs.
Now, the problem with 1) is that it adds a lot of overhead to the buffer
iterations, and with 2) and 3) it is practically impossible to synchronise
with the buffer iteration. However, I think it's important not to discount
2) and 3) altogether, because at least they tell you that an event is
coming _soon_. The element could then go into mode 1) until the event(s)
have been pinpointed, as sketched below. Also, some elements might not
really care exactly where the event occurred within the current buffer.
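To illustrate (event_pending, pending_event_offset and process_event() are
all made-up names): the element normally looks at a single flag per buffer
and only drops into per-sample checking once that flag says an event is
pending somewhere in the current buffer.

/* Illustrative only; the flag and offset would be maintained by
 * whatever event machinery sits outside the element. */
#include <gst/gst.h>

extern gfloat  volume;                 /* placeholder for volume->volume */
extern GstPad *srcpad;
extern void    process_event (void);   /* pops the event and advances
                                        * pending_event_offset */

static volatile gboolean event_pending = FALSE;   /* set from outside (mode 2) */
static gint pending_event_offset = G_MAXINT;      /* sample offset of next event */

static void
chain (GstPad *pad, GstBuffer *buf)
{
  gint16 *data = (gint16 *) GST_BUFFER_DATA (buf);
  gint samples = GST_BUFFER_SIZE (buf) / 2;
  gint i;

  if (!event_pending) {
    /* fast path: nothing to look for in this buffer */
    for (i = 0; i < samples; i++)
      data[i] = (gint16) ((gfloat) data[i] * volume);
  } else {
    /* slow path: per-sample checking (mode 1) until the event(s)
     * in this buffer have been pinpointed and handled */
    for (i = 0; i < samples; i++) {
      if (i >= pending_event_offset)
        process_event ();
      data[i] = (gint16) ((gfloat) data[i] * volume);
    }
  }

  gst_pad_push (srcpad, buf);
}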
A system I've been thinking over uses a two-pronged approach: move as much
logic as possible out of the buffer loops to minimise the overhead, while
retaining a simple check in the loop so as to provide fine-grained control.
If we consider a simple loop from volume.c:

for (i = 0; i < GST_BUFFER_SIZE (buf) / 2; i++)
  data[i] = (float) data[i] * volume->volume;
It would now become the following:

for (i = 0; i < GST_BUFFER_SIZE (buf) / 2; i++) {
  if (i >= event_offset)
    process_event ();
  data[i] = (float) data[i] * volume->volume;
}
This shouldn't add much overhead in the loop. Now, the magic happens
outside, in some event mechanism which knows which buffer the loop is
currently working on and which is told by the connected elements the
timestamps of any events that occur. For our volume example it holds a
queue of the events which fall within the current buffer, and a reference
to event_offset. event_offset will be set to the offset, calculated within
the current buffer, of the first event in the queue (could this use the
TimeCache?). When the event is processed it is popped from the queue so
that the next event can be processed.
If there are no events in the queue then event_offset will be
GST_BUFFER_SIZE or some other big value, and when an event is to happen as
soon as possible, with no regard to synchronisation, event_offset will be
0, causing process_event() to run immediately (e.g. seeking). The event
scheduler can also be told which events to bother flagging to the chain.
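Roughly, the machinery outside the loop might look something like this
(again only a sketch; the queue, the timestamp-to-offset conversion and
process_event() are all hypothetical, and the real thing would presumably
live per pad or per element):

/* Sketch of the event machinery living outside the chain loop.
 * Everything here is hypothetical; in particular the conversion from
 * an event timestamp to a sample offset is hand-waved. */
#include <glib.h>

typedef struct {
  guint64 timestamp;   /* when the event should take effect */
  gint    type;        /* EOS, flush, seek, ... */
} PendingEvent;

static GSList *event_queue  = NULL;      /* events for the current buffer */
static gint    event_offset = G_MAXINT;  /* the value checked in the loop */

/* Called by a connected element (or the scheduler) to announce an event. */
static void
queue_event (guint64 timestamp, gint type)
{
  PendingEvent *ev = g_new (PendingEvent, 1);

  ev->timestamp = timestamp;
  ev->type = type;
  event_queue = g_slist_append (event_queue, ev);
}

/* Called whenever a new buffer enters the loop: aim event_offset at
 * the first queued event, expressed as an offset inside this buffer. */
static void
update_event_offset (guint64 buffer_timestamp)
{
  if (event_queue == NULL) {
    event_offset = G_MAXINT;              /* nothing to check for */
  } else {
    PendingEvent *ev = (PendingEvent *) event_queue->data;

    if (ev->timestamp <= buffer_timestamp)
      event_offset = 0;                   /* handle immediately, e.g. a seek */
    else
      /* assumes timestamps are counted in samples; real code would
       * convert via the sample rate (or the TimeCache) */
      event_offset = (gint) (ev->timestamp - buffer_timestamp);
  }
}

/* Called from the loop when i >= event_offset. */
static void
process_event (void)
{
  PendingEvent *ev = (PendingEvent *) event_queue->data;

  /* ... act on ev->type here ... */

  event_queue = g_slist_remove (event_queue, ev);
  g_free (ev);
  /* re-aim event_offset at the next queued event, if any */
}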
Yes, it still adds bloat to the chain loop, but I don't see how the event
can be scheduled without doing so, except by some event-aware data
retrieval function, e.g.:

for (i = 0; i < GST_BUFFER_SIZE (buf) / 2; i++) {
  data[i] = (float) get_data_and_check_events (i) * volume->volume;
}

but this would surely add a large performance penalty as
get_data_and_check_events() is bounced on and off the call stack for every
sample.
After re-reading David I. Lehn's post I realise that he is saying much the
same thing; however, this approach also solves the problem of synchronising
events within a stream.
Regards,
Owen
---------------------------------------------------------------------------
Owen Fraser-Green "Hard work never killed anyone,
owen at discobabe.net but why give it a chance?"
---------------------------------------------------------------------------