[gst-devel] [livid-gatos] Re: [Dri-devel] Using drm (fwd)

Thomas Vander Stichele thomas at urgent.rug.ac.be
Wed Oct 10 02:33:06 CEST 2001


some gstreamer-related terminology at the bottom of this mail on gatos; I
don't know if some integration between the two would be possible, so I'm
sending this to you all FYI....

Thomas

<-*-                      -*->
Gotta keep moving on
lover you hide from me every time
<-*- thomas at apestaart.org -*->
URGent, the best radio on the Internet - 24/7 ! - http://urgent.rug.ac.be/

---------- Forwarded message ----------
Date: Tue, 9 Oct 2001 13:13:55 -0400 (EDT)
From: volodya at mindspring.com
To: Peter Surda <shurdeek at panorama.sth.ac.at>
Cc: livid-gatos at linuxvideo.org
Subject: [livid-gatos] Re: [Dri-devel] Using drm



On Tue, 9 Oct 2001, Peter Surda wrote:

> On Tue, Oct 09, 2001 at 09:55:50AM -0400, volodya at mindspring.com wrote:
> > take a look at xvideo.c (in xawtv source) - it is a small test app for Xv.
> ok will.
> 
> > > I think Xv is perfect for this.
> > Nope. It is 
> >    * not network transparent
> It would be if you add a function that takes a drawable as a source instead
> of an Xv port.
> 
> >    * does not support string values for the attributes
> Hmm what does this mean?

You can't have an attribute that provides you with CC data, or one that
takes symbolic values like "Bob" or "Weave", or one that displays a
comment (so we could have XV_INTERLACE_ALGORITHM and
XV_INTERLACE_ALGORITHM_COMMENT, etc.).

> 
> >    * does not cleanly support the cases "just capture", "just display",
> >          "both capture and display"
> Just add a new function. "display" is there, I'll make the 2 others ("capture
> to userspace" and "capture into userspace and display")
> 

And then add a new function for "capture and mpeg encode", and another one
for "capture and get VBI data", etc. I don't think this is the right way
to do things.

> >    * does not support on-the-fly mpeg compression in the Xserver (for
> >      operation over network)
> I see your point. But isn't it more useful to transfer the data in userspace
> (example videolan server + videolan client)?
> 
> Basically I think even if the functions were defined as network transparent,
> the implementation would be very difficult, fault-prone and still most of the
> stuff needed is already available in userspace, outside X. I don't see much
> performance gain in doing it directly in X, it's still 1<->1 connection, with
> userspace you can do x<->y.

The point is that you can have an application running on a remote computer
that just cats an mpeg file to X. This will work great with a 100mbit
network, but if you pump decoded frames you'll run out of bandwidth.


> 


The way I thought it would be nice to implement this is to have
  "buffer" objects - an abstraction for the place that holds data (e.g. a drawable in plain X)
  "processing" objects - which take one or more "buffer" objects as arguments;
                   among these:
                      "sink" objects - e.g. an overlay
                      "source" objects - e.g. a capture device
                      "transport" objects - move data from one buffer to another
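The taxonomy above can be sketched as a small class hierarchy. This is only
an illustration of the proposal, not any real X API; all class names are
hypothetical.

```python
# Hypothetical sketch of the proposed object taxonomy.
# None of these names correspond to a real X extension.

class Buffer:
    """An abstraction for a place that holds data (e.g. a drawable)."""
    def __init__(self, location):
        self.location = location  # e.g. "video-ram", "system-ram"

class Processing:
    """Base class for objects that operate on one or more Buffers."""
    def __init__(self, *buffers):
        self.buffers = buffers

class Source(Processing):
    """Produces data into a buffer (e.g. a capture device)."""

class Sink(Processing):
    """Consumes data from a buffer (e.g. an overlay)."""

class Transport(Processing):
    """Moves data from one buffer to another (e.g. DMA, network)."""
    def __init__(self, src, dst):
        super().__init__(src, dst)
        self.src, self.dst = src, dst

# Usage: a DMA-style transport from video RAM to system RAM.
cam = Buffer("video-ram")
ram = Buffer("system-ram")
dma = Transport(cam, ram)
```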

In addition each object has a resource consumption value - a symbolic
expression that describes what it needs. For example, a capture object can
"consume" one capture engine. A "transport" object that does mpeg encoding
can consume cpu processing power and a network object will consume
bandwidth. The user app can query X about available object generators and
construct a chain of them that accomplishes what it needs. For example, it
can create a "buffer" object that consumes video RAM; a capture object
that connects to it; another buffer in plain RAM; a DMA "transport" object
that transfers data into it; a second buffer in plain RAM; an mpeg
encoding object that encodes data from the first RAM buffer into the
second; a buffer in plain RAM on another computer; a network "transport"
object that moves the data to that computer; a buffer in video RAM on the
other computer; an mpeg decoding object that decodes directly into video
RAM; and an overlay "sink" object that displays the data.
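The chain above, with its symbolic resource-consumption values, could be
modelled like this. Again, a minimal sketch under the assumptions of the
proposal; the object names and resource labels are made up for
illustration.

```python
# Hypothetical sketch: a chain of objects, each declaring what it
# "consumes", as in the capture -> network -> display example above.

class Obj:
    def __init__(self, name, consumes):
        self.name = name
        self.consumes = consumes  # dict: resource name -> amount

def total_consumption(chain):
    """Sum the resource needs of every object in a chain, so an app
    (or the server) can check the chain against what is available."""
    totals = {}
    for obj in chain:
        for resource, amount in obj.consumes.items():
            totals[resource] = totals.get(resource, 0) + amount
    return totals

chain = [
    Obj("vram-buffer",    {"video-ram": 1}),
    Obj("capture",        {"capture-engine": 1}),
    Obj("ram-buffer-1",   {"system-ram": 1}),
    Obj("dma-transport",  {"dma-channel": 1}),
    Obj("ram-buffer-2",   {"system-ram": 1}),
    Obj("mpeg-encoder",   {"cpu": 1}),
    Obj("remote-ram",     {"system-ram": 1}),
    Obj("net-transport",  {"bandwidth": 1}),
    Obj("remote-vram",    {"video-ram": 1}),
    Obj("mpeg-decoder",   {"cpu": 1}),
    Obj("overlay-sink",   {"overlay": 1}),
]

print(total_consumption(chain))
```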

The advantage of this is that the objects can be modules, so we could
implement the easy ones first and then, perhaps, attract enough people to
help with the more advanced ones.

                          Vladimir Dergachev


> >                             Vladimir Dergachev
> Bye,
> 
> Peter Surda (Shurdeek) <shurdeek at panorama.sth.ac.at>, ICQ 10236103, +436505122023
> 
> --
>                Dudes! May the Open Source be with you.
> 




