[gst-devel] One Stream API to Rule them all - Could GStreamer be it ?

Maxim Mueller maxim_m at gmx.net
Tue Dec 2 14:52:02 CET 2008


I am writing to this list because it is the closest fit I am aware of.
Please be so kind as to direct me to the right forums, mailing lists, etc.
to put this idea in front of the right people.

When thinking about the diverse, uncoordinated and code-duplicating mess
that is the sum of all the different encoding/compression/processing
libraries and programs out there, I see the need for a solution that brings
order to the chaos once and for all, and that possibly extends the way we
think about operating system services.

It seems the following use cases have enough in common to be handled through
a unified, system-wide API. Please correct me if I am wrong.

(ordered by decreasing existing use of GStreamer)
1) a media file should be viewed on the screen (see the first sketch after
this list)

2) a media stream should be captured to a file

3) a 7-zip/LZMA archive should be created from a bunch of file handles

4) a file system needs to handle transparent compression and/or encryption
of inodes/whatever low-level objects it uses

5) a hash needs to be generated from a file stored on a different server,
accessed through name-your-favourite-esoteric-protocol-here

6) SETI@home needs to crunch some data with its fast Fourier transform
algorithm to find our future alien overlords, and the user has a custom DSP
specifically designed to speed up this process. Using it would be easy if
they could just install the right FFT plugin.

7) a user of Photoshop, nay GIMP/Krita, would like to use the SVG filter
from the WebKit implementation and dynamically stream the picture, as a
change-as-you-work stream, to someone on the other end of the world who does
some video processing with it

8) Complex: high-level objects work the same way; consider downloading and
installing an RPM.

Since many file mirrors (the respectable ones, anyway) publish a hash
alongside the file, a la

SomeFile.rpm
SomeFile.MD5

handing the HTTP or FTP location and the "install" command to the respective
HTTP source and RPM sink (or both to a handle-archive bin?), the autoplugger
could even go so far as to auto-insert any installed
- gnutella/bittorrent/whatever,
- hashing,
- SSL/TLS,
- security-handling,
- decompression,
- caching,
- and optional ask-the-user-for-input
plugins into the pipeline, bringing transparent P2P swarming to even the
simplest of downloading applications. (The HTTP source could dynamically
create an a-hash-is-available pad to facilitate that, or the bin could open
another connection; failing that, it instantiates a plugin listening for
user- or system-supplied hashes. Such a plugin should probably be preloaded,
and could expose a D-Bus service for runtime/pre-run configuration of
pipelines.) A sketch follows below.
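
To make use case 1 concrete, here is a minimal sketch with GStreamer's C API
as it stands; "playbin" is a real element that autoplugs the demuxer,
decoders and output for us, and the file URI is of course illustrative:

#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GstElement *pipeline;
  GstBus *bus;
  GstMessage *msg;

  gst_init (&argc, &argv);

  /* playbin picks demuxer, decoders and sinks on its own */
  pipeline = gst_parse_launch ("playbin uri=file:///tmp/example.ogg", NULL);
  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* block until the stream ends or an error occurs */
  bus = gst_element_get_bus (pipeline);
  msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_ERROR | GST_MESSAGE_EOS);

  if (msg != NULL)
    gst_message_unref (msg);
  gst_object_unref (bus);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}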

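And use case 8 expressed as a pipeline description. "souphttpsrc" is a real
element from gst-plugins-good; "hashverify" and "rpmsink" are hypothetical
elements invented here that would have to be written first, so with today's
plugin set the parse fails, which the error path below reports:

#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GError *error = NULL;
  GstElement *pipeline;

  gst_init (&argc, &argv);

  /* "souphttpsrc" exists; "hashverify" and "rpmsink" are HYPOTHETICAL
   * elements for this use case, and the URLs are illustrative */
  pipeline = gst_parse_launch (
      "souphttpsrc location=http://mirror.example/SomeFile.rpm "
      "! hashverify md5-uri=http://mirror.example/SomeFile.MD5 "
      "! rpmsink", &error);
  if (error != NULL) {
    /* with the hypothetical elements missing, we end up here */
    g_printerr ("could not build pipeline: %s\n", error->message);
    g_error_free (error);
    if (pipeline != NULL)
      gst_object_unref (pipeline);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  /* ... wait on the bus as in the previous example ... */
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}
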
-----------------------------------------------

All these things involve some form of more or less complex algorithm being
run on a buffer of data, be it a static buffer of known type or one that
changes over time.
Automatic threading, scheduling and detection/autoplugging services would
benefit the programming ecosystem greatly, IMHO.
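
A minimal illustration with GStreamer as it exists today: nothing below is
media-specific. The pipeline merely copies arbitrary bytes between files
(the names are illustrative), and the "queue" element inserts a thread
boundary, so reading and writing run in separate streaming threads that the
framework schedules for us:

#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  GstElement *pipeline;
  GstBus *bus;
  GstMessage *msg;

  gst_init (&argc, &argv);

  /* no media involved: "queue" adds the thread boundary */
  pipeline = gst_parse_launch (
      "filesrc location=/tmp/in.bin ! queue ! "
      "filesink location=/tmp/out.bin", NULL);
  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  bus = gst_element_get_bus (pipeline);
  msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
  if (msg != NULL)
    gst_message_unref (msg);
  gst_object_unref (bus);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}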

-----------------------------------------------

Benefits of one globally used stream-processing API:
- greatly reduced development effort, bug rate and maintenance burden

- a reduced learning curve when adding functionality to software;
furthermore, a more structured way of thinking about data processing is
bound to produce cleaner code

- systems theory: standardizing component interfaces tends to lead to the
discovery of novel ways to combine them, which in turn leads to innovative
applications

- improved functionality of existing applications (to a certain degree)
simply by installing a new plugin

- users decide which implementation they want to use; all in all, it is
their decision to make.
No longer having to install multitudes of HTTP and other protocol
implementations, virtual file systems (KIO slaves, GIO, PHP5's stream API,
...), encryption (OpenSSL/GnuTLS/PGP, ...), compression and so on, just
because the respective application developers had their favourites, would be
a great step forward in the fight for truly free software and would put
users back in control of their own systems.

- by moving up one abstraction layer, only one wrapper is required to
support other programming languages, in contrast to per-library wrappers.
This last point essentially realizes most of the promise of .NET, but in a
somewhat less invasive fashion: many of the different installed (class)
library implementations of similar functions (from .NET to Z-machine) could
be unified into one, mostly user-chosen set of plugins.

---------------------------------------

Speaking from a system architect's point of view,

I would like to move most if not all of the data<->data processing of the
Controller part of MVC into a high(er)-level, unified and generic interface.

Only the [knowledge<->]information<->data parts (most of the "business
logic") should remain in the main application, as they are much more
task-specific.

This should create a comprehensive view of the complex processing algorithms
available on a system (a part of the system-services view that could be
advertised over a network via e.g. ZeroConf), and the partial automation
should make customized deployment and management a lot better.

The plugin->library separation could still be kept in selected cases to
allow for ultra-low-level operation, although for this class of algorithms
it might be overkill.

-----------------------------------------

Q: Where does GStreamer stand in terms of being able to be the root of that
API?
- is it sufficiently generic?
- is it simple enough?
- is the core sufficiently lightweight and modular?
- are plugins sufficiently simple to write, and do they add only negligible
overhead by default? (see the sketch after this list)
- is the type system powerful enough, or can it be made to be?
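
As one data point on plugin simplicity, here is roughly what a complete
in-place transform element looks like when written against the
GstBaseTransform helper base class (API names as in current GStreamer
releases; the element itself, a toy XOR byte transform, is invented purely
for illustration):

#include <gst/gst.h>
#include <gst/base/gstbasetransform.h>

typedef struct { GstBaseTransform parent; } GstXorFilter;
typedef struct { GstBaseTransformClass parent_class; } GstXorFilterClass;

G_DEFINE_TYPE (GstXorFilter, gst_xor_filter, GST_TYPE_BASE_TRANSFORM);

static GstStaticPadTemplate sink_tmpl = GST_STATIC_PAD_TEMPLATE ("sink",
    GST_PAD_SINK, GST_PAD_ALWAYS, GST_STATIC_CAPS_ANY);
static GstStaticPadTemplate src_tmpl = GST_STATIC_PAD_TEMPLATE ("src",
    GST_PAD_SRC, GST_PAD_ALWAYS, GST_STATIC_CAPS_ANY);

/* the element's entire "algorithm": XOR every byte in place */
static GstFlowReturn
gst_xor_filter_transform_ip (GstBaseTransform *trans, GstBuffer *buf)
{
  GstMapInfo map;
  gsize i;

  if (!gst_buffer_map (buf, &map, GST_MAP_READWRITE))
    return GST_FLOW_ERROR;
  for (i = 0; i < map.size; i++)
    map.data[i] ^= 0x5a;
  gst_buffer_unmap (buf, &map);
  return GST_FLOW_OK;
}

static void
gst_xor_filter_class_init (GstXorFilterClass *klass)
{
  GstElementClass *ec = GST_ELEMENT_CLASS (klass);

  GST_BASE_TRANSFORM_CLASS (klass)->transform_ip =
      gst_xor_filter_transform_ip;
  gst_element_class_add_pad_template (ec,
      gst_static_pad_template_get (&sink_tmpl));
  gst_element_class_add_pad_template (ec,
      gst_static_pad_template_get (&src_tmpl));
  gst_element_class_set_static_metadata (ec, "XOR filter", "Filter",
      "XORs every byte passing through", "example");
}

static void
gst_xor_filter_init (GstXorFilter *self)
{
}

Threading, negotiation and buffer management all live in the base class; the
element proper is essentially the dozen lines of transform_ip(). Registering
it with GST_PLUGIN_DEFINE adds another dozen lines of boilerplate.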

Q: Should GStreamer be split into a generic stream-processing/plugin-
management layer and a multimedia-handling API layer for this?
Everybody would use the lower layer and compete on the higher API level
(xine, MPlayer, DirectShow, ...; not sure whether Phonon sits at the same
level).

Q: Should the core go into the Linux kernel, to be universally available
and able to serve the kernel's own needs (hashing, the TCP/IP layer, crypto,
VFS, etc.)? Can it even go there with the plugins staying in userland?

Q: Are the identification algorithms used for autoplugging also plugins, so
that they can be arranged into a pipeline that produces type info? (A sketch
of the current typefinding API follows below.)

It should be possible to load the data-recognition/autoplugging parts
separately, since low-level users don't want the extra overhead (e.g. for
the CRC on an inode in a Linux filesystem).

Also, some users might prefer their own autoplugging strategy on their
systems.
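
For reference, typefinding already works roughly that way: every registered
GstTypeFindFactory gets a look at the data. A sketch of classifying an
in-memory buffer (current API names; the PNG signature bytes merely serve
as a convenient test input):

#include <gst/gst.h>
#include <gst/base/gsttypefindhelper.h>

int
main (int argc, char *argv[])
{
  /* the 8-byte PNG signature, used as test input */
  static const guint8 png_magic[] =
      { 0x89, 'P', 'N', 'G', 0x0d, 0x0a, 0x1a, 0x0a };
  GstTypeFindProbability prob;
  GstBuffer *buf;
  GstCaps *caps;

  gst_init (&argc, &argv);

  buf = gst_buffer_new_allocate (NULL, sizeof (png_magic), NULL);
  gst_buffer_fill (buf, 0, png_magic, sizeof (png_magic));

  /* run all registered typefind functions over the buffer */
  caps = gst_type_find_helper_for_buffer (NULL, buf, &prob);
  if (caps != NULL) {
    gchar *s = gst_caps_to_string (caps);
    g_print ("detected %s (probability %u)\n", s, prob);
    g_free (s);
    gst_caps_unref (caps);
  }
  gst_buffer_unref (buf);
  return 0;
}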

Q: What has to be done to make this vision a reality?
Whom should one talk to to hammer out a strategy?
What skills/efforts are needed to implement it?
Where should it be evangelized?

Q: Since much work on GStreamer is sponsored by a specific company, what is
their take on this?

Q: And most important of all: does all of this even make sense?

Thanks for reading; I look forward to your feedback and pointers.


Best regards,

MaxxCorp


-- 
MaxxCorp Knowledge Management Solutions GmbH
Berlin - GuangZhou
Maxim Mueller - Founder and Managing Partner
GZ Cell:13416104615
EMail: maxim_m at gmx.net

