[gst-devel] Daily IRC logs

wim.taymans at chello.be wim.taymans at chello.be
Sun May 13 06:28:06 CEST 2001


[07:53] walken (foobar at c1583255-a.smateo1.sfba.home.com) joined #gstreamer.
[07:54] <walken> hi
[09:06] Nick change: taaz -> taazzzz
[09:21] steveb (steveb at node1ee04.a2000.nl) joined #gstreamer.
[09:36] <steveb> RIP Eazel?!
[09:36] <omega> so it seems
[09:38] <steveb> inevitable i suppose
[09:39] <omega> now I wonder about Ximian
[09:39] <omega> and I'm glad that Eazel recently decided to stop doing internal-only development
[09:40] <steveb> Ximian might need to reprioritise their projects
[09:40] <omega> quite, and take a doze of reality before they get smacked too
[09:43] <omega> er, dose
[09:43] <steveb> yup
[09:46] Action: omega reads Subject: [linux-audio-dev] An alternative API proposal for LAAGA
[09:48] <steveb> if LAAGA ever comes to anything, it might actually be quite good for gstreamer's real-time audio hardware interface?
[09:49] <omega> possibly
[09:49] <omega> but I think that the LAAGA scope overlaps way too much to make it cooperate side-by-side
[09:49] <omega> it'll be good in the sense of ideas, maybe code, but not much else I think
[09:51] <steveb> do you think gstreamer should be more visible on that list as an alternative
[09:51] <steveb> ?
[09:52] <omega> yes, that's why I'm reading and hope to respond
[09:52] <omega> I tried to get taaz (I think) to comment, but generally anyone both here and there is, um, suggested ;-), to comment as appropriate <g>
[09:53] <omega> most useful might be the "solved that" responses ;-)
[10:04] <omega> you read[ing] the same message?
[10:04] <walken> laaga ?
[10:04] <walken> whats that
[10:04] <omega> the acronym of the week from linux-audio-dev
[10:04] <omega> I dunno what it stands for even
[10:05] <walken> linux audio advanced g???? architecture ?????
[10:05] <omega> something like that probably
[10:05] <omega> Linux Audio Application Glue API
[10:05] <omega> http://www.eca.cx/laaga/
[10:06] <walken> gpl :)
[10:07] <steveb> was eating breaky... will read it now
[10:08] <omega> bleagh.  they assume a GUI always
[10:08] <omega> info to the plugin - for example info may be passed to help the plugin
[10:08] <omega> get its communication link setup with its GUI.  This routine should
[10:08] Action: walken needs to print the AC3 stuff again
[10:08] <omega> are there full specs available?
[10:08] <walken> yup
[10:08] Action: omega remembers having them at some point
[10:08] <walken> how do you think aaron got it working :)
[10:08] <omega> well... <g>
[10:09] <walken> http://www.dolby.com/tech/
[10:09] <walken> then go to a/52 and errata sheet
[10:09] <omega> right, a/52
[10:12] <omega> btw steveb, walken: whereas ac3 and friends push 5.1, etc. surround, I'm much more intrigued by ambisonics
[10:13] <walken> whats that :)
[10:13] <omega> http://www.ambisonic.net/
[10:13] <omega> surround sound done mathematically
[10:14] <omega> the front pic shows a mic that can produce WXYZ surround
[10:17] <steveb> patent free?
[10:17] <omega> as far as anyone can tell, yes
[10:18] <omega> the history indicated that it's been dumped so many times in the past for various reasons that it's pretty free & clear
[10:18] <omega> mathematically it's the obvious way to do it, but it hasn't been practical until recently
[10:19] <omega> at some point I'm gonna set up a machine with two SBLive! cards and a cube of speakers in my room just to play with this stuff
[10:19] <steveb> 5.1 = 1 dialogue + 2 front + 2 back & sub-bass?
[10:20] <omega> L/C/R, sR/sL, sub
[10:20] <omega> 7.1 adds cR/cL
[10:20] <omega> there are systems as high as 23.2 I think (eek!)
[10:21] <omega> very specialized, used for theme parks where advanced localization is key
[10:21] <omega> but keep in mind that those are *speaker* channels, whereas ambisonic channels are completely in the theoretical domain, and can be converted to *any* set of any number of actual speakers
[10:21] Action: steveb drools
[10:21] <steveb> yep
[10:22] <steveb> so you want to write an ambisonics sink which renders dvd audio?
[10:22] <omega> higher-order ambisonics can use up to 9 channels
[10:22] <omega> erm?
[10:22] <omega> completely incompatible concepts
[10:22] <steveb> oh
[10:22] <omega> dvd is 5.1, which is speaker-based
[10:22] <omega> if you're *producing* sound, do it in ambisonics WXYZ at least
[10:22] <omega> then you can mix *down* to 5.1 later
[10:23] <steveb> i meant for a consumer setup with DIY ambisonics
[10:23] <omega> you could convert 5.1 to WXYZ by individually rendering each 'speaker' in WXYZ space and mixing the result
[10:23] <walken> hmmmm
[10:24] <omega> then you simply end up with these speakers 'localized' in your speaker-space, whatever your speaker setup is
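A minimal C sketch of the encoding just described: each mono source (or each 5.1 speaker feed) is treated as a point source, projected onto the four B-format channels, and sources are mixed by summing into the same struct. The gains are the standard first-order encoding equations, but the struct and function names are hypothetical, and the axis naming follows the conversation below (X left-right, Y front-back, Z up-down) rather than any particular published convention.

    #include <math.h>

    typedef struct {
        float w, x, y, z;   /* first-order B-format (WXYZ) channels */
    } BFormatSample;

    /* azimuth: radians left of straight ahead; elevation: radians up.
     * Accumulates into *out, so zero the struct, then call once per
     * source to mix the soundfield by summation. */
    static void
    encode_point_source (float s, float azimuth, float elevation,
                         BFormatSample *out)
    {
        out->w += s * (float) M_SQRT1_2;                  /* omni, -3 dB */
        out->x += s * sinf (azimuth) * cosf (elevation);  /* left-right  */
        out->y += s * cosf (azimuth) * cosf (elevation);  /* front-back  */
        out->z += s * sinf (elevation);                   /* up-down     */
    }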
[10:24] <walken> can you explain to me what it's all about? I see no simple explanation on their site
[10:24] <omega> ok, I'll give it a try
[10:24] <omega> start with mono sound, that's the W channel
[10:24] <omega> add the X channel, which is a left-right *differential* channel, and you can construct L and R
[10:24] <omega> aka mid-side
[10:25] <omega> add the Y channel, you get front-back differentiation
[10:25] <omega> Z gives up-down
[10:25] <omega> removing any higher channel doesn't remove audio, just the localization in that dimension
[10:25] <omega> so you can take WXYZ and trim it to just W and listen in mono
[10:25] <omega> higher-order systems go up to 9 dimensions
[10:26] <walken> hmmm
[10:26] <omega> from these channels you can construct actual speaker outputs, based on the speaker's physical location in the *actual* listening space
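Decoding is the reverse projection. A naive sketch, reusing BFormatSample from above: each physical speaker gets a "virtual microphone" pointed along its own direction, so moving a speaker only means updating its stored position. Real decoders add shelf filtering and energy compensation; the 0.5 and sqrt(2) factors here are just plausible normalisation, not a reference implementation.

    typedef struct {
        float azimuth, elevation;   /* speaker direction in radians */
    } SpeakerPos;

    /* Project the B-format signal onto one speaker's direction. */
    static float
    decode_for_speaker (const BFormatSample *in, SpeakerPos sp)
    {
        return 0.5f * ((float) M_SQRT2 * in->w
                       + in->x * sinf (sp.azimuth) * cosf (sp.elevation)
                       + in->y * cosf (sp.azimuth) * cosf (sp.elevation)
                       + in->z * sinf (sp.elevation));
    }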
[10:26] <steveb> like stereo FM in #D
[10:26] <omega> so if you move your speaker, tell your computer and the audio still retains the bulk of its localization
[10:26] <steveb> 3d rather
[10:26] <omega> hrm, /me doesn't know how FM stereo works, but it's probably mid-side
[10:27] <steveb> mono + X channel in a subband
[10:27] <omega> ok, mid-side
[10:27] <steveb> yep
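For reference, mid-side is the two-channel analogue of the W/X pair just mentioned: M carries the mono sum, S the left-right difference, and dropping S degrades gracefully to mono, exactly like trimming WXYZ down to W. A trivial sketch:

    static void
    ms_encode (float l, float r, float *m, float *s)
    {
        *m = 0.5f * (l + r);    /* mono sum, cf. W        */
        *s = 0.5f * (l - r);    /* L/R difference, cf. X  */
    }

    static void
    ms_decode (float m, float s, float *l, float *r)
    {
        *l = m + s;
        *r = m - s;
    }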
[10:28] <walken> yeah
[10:28] <steveb> so is the Y channel the diff of W or W&X?
[10:28] <omega> there are LADSPA filters to encode a point-source into 'bf' and 'fmh' (4- and 9-channel ambisonics), as well as decode bf and fmh into various speaker configs
[10:28] <omega> steveb: both, afaik
[10:28] <steveb> ah
[10:30] <omega> http://www.york.ac.uk/inst/mustech/3d_audio/ambis2.htm
[10:31] <walken> hmmmmmmm
[10:31] <omega> pushing it further, there are issues with phase, spectral power, wavefronts, etc., but the simple stuff is sufficient for most things afaik
[10:31] <omega> there are even better explanations if you search, which I did a few months ago
[10:31] <walken> how do you record ambisonics stuff ?
[10:32] <walken> can you convert it from multichannel or do you need weird microphones ?
[10:32] <omega> either
[10:32] <omega> from instrument channels, you assign a 3space location and you can 'place' that channel in a location in the soundfield
[10:32] <omega> then mix the different channels, afaik summing them
[10:33] <omega> or, you can use something like the SoundField Research mic shown on that page
[10:33] <omega> that has 4 capsules tightly coupled in a certain pattern, which is transformed (simple algorithm) to WXYZ
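The "simple algorithm" is, as far as commonly published descriptions go, a sum-and-difference matrix over the four tetrahedral capsule signals. This sketch ignores the capsule equalisation a real SoundField processor applies; the capsule naming is the usual left-front-up / right-front-down / left-back-down / right-back-up tetrahedron, and the axis assignment again follows the conversation's usage.

    /* lfu, rfd, lbd, rbu: the four capsule signals ("A-format") */
    static void
    a_to_b_format (float lfu, float rfd, float lbd, float rbu,
                   BFormatSample *out)
    {
        out->w = 0.5f * (lfu + rfd + lbd + rbu);  /* omni sum        */
        out->x = 0.5f * (lfu - rfd + lbd - rbu);  /* left-right diff */
        out->y = 0.5f * (lfu + rfd - lbd - rbu);  /* front-back diff */
        out->z = 0.5f * (lfu - rfd - lbd + rbu);  /* up-down diff    */
    }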
[10:34] <omega> but those mics are >$5000
[10:34] <omega> you *rent* them <G>
[10:35] <walken> hehe
[10:36] <steveb> what about those mics which are mounted in the 'ears' of a model human head
[10:36] <omega> those are for hrtf studies
[10:36] <omega> you can make a great stereo recording from that
[10:36] <omega> but not much else
[10:37] <omega> great.  moz is ignoring me
[10:37] <omega> moz and frames suck compared to ns, unfortunately
[10:37] <omega> http://www.soundfield.com/prod01.htm
[10:38] <omega> if I get a chance one of these days, I'm gonna construct something like that with 4 at-853's to experiment
[10:39] Action: omega reads this laaga post and wonders: where's the private storage for everything?
[10:39] <omega>   int okay= set_name(char *name);
[10:39] <omega> set_name() of what??
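The complaint being that the quoted call has no instance argument to act on; compare a handle-based form (LaagaPlugin here is a hypothetical type, not from the actual proposal):

    /* an instance handle gives set_name() something to name */
    int set_name (LaagaPlugin *plugin, const char *name);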
[10:48] Nick change: steveb -> steveb_not_here
[11:13] walken (foobar at c1583255-a.smateo1.sfba.home.com) left irc: l8r
[11:15] Nick change: wtay-zZz -> wtay
[11:15] <wtay> yawn
[11:15] <omega> bleagh.  are you on l-a-d?
[11:16] <omega> I forwarded it to you just now anyway
[11:16] <wtay> nope
[11:17] <omega> we need to get the alsasrc and alsasink working, as well as ladspa.  then those interested in audio should focus on building a system, I think
[11:18] <omega> leave dynparms and such till later
[11:19] <omega> part of that is preparing a document that details the finer requirements of audio plugins that fit optimally with this infrastructure
[11:20] <omega> this will specify for instance, either float or int32, nothing else
[11:20] <omega> that limits the amount of capsnego and multiple specialized implementations necessary
[11:20] <wtay> hmm
[11:22] Action: wtay tries to understand the subject
[11:22] Action: omega needs to go to sleep
[11:23] <omega> have fun at work <g>
[11:23] <omega> er, wait
[11:23] <omega> wrong day.
[11:23] Action: omega runs away
[11:23] omega (omega at omegacs.net) left irc: killall -9 omega
[11:24] Nick change: maYam_sleep -> maYam_busy
[11:37] maYam_busy (mayam at cable-195-162-214-58.upc.chello.be) left irc: Read error to maYam_busy[cable-195-162-214-58.upc.chello.be]: EOF from client
[11:46] Nick change: steveb_not_here -> steveb
[12:04] <wtay> 'morning
[12:06] Nick change: ajbusy -> ajmitch
[12:06] <ajmitch> morning
[12:06] <ajmitch> steveb: highlanders beat crusaders ;)
[12:09] Action: steveb whoops
[12:23] <steveb> damn, weather is too nice to hack
[12:23] Nick change: steveb -> steveb_not_here
[12:29] Nick change: ajmitch -> ajzzzz
[14:20] Nick change: steveb_not_here -> steveb
[15:08] matth-waking (matth at qwest.dsplinux.net) left irc: Ping timeout for matth-waking[qwest.dsplinux.net]
[16:09] taazzzz (dlehn at 66.37.66.32) left irc: Ping timeout for taazzzz[66.37.66.32]
[16:09] taazzzz (dlehn at 66.37.66.32) joined #gstreamer.
[16:14] taazzzz (dlehn at 66.37.66.32) left irc: Ping timeout for taazzzz[66.37.66.32]
[16:17] taazzzz (dlehn at 66.37.66.32) joined #gstreamer.
[16:31] steveb (steveb at node1ee04.a2000.nl) left irc: Ping timeout for steveb[node1ee04.a2000.nl]
[16:34] steveb (steveb at node1ee04.a2000.nl) joined #gstreamer.
[16:38] Ow3n (owen at ti34a80-0852.bb.online.no) joined #gstreamer.
[16:38] <Ow3n> yo
[16:39] <wtay> hi
[16:39] <Ow3n> How's things?
[16:39] <wtay> working on refcounting
[16:40] <steveb> hi
[16:40] <Ow3n> hi.
[16:40] <wtay> yo
[16:41] <Ow3n> steveb: Can we talk dparams?
[16:41] <steveb> sure :)
[16:42] <Ow3n> :)
[16:43] <Ow3n> I still feel that it's a better philosophy to pass control data around as a simple data stream.
[16:43] <Ow3n> And then implement things like interpolators as elements.
[16:45] <steveb> i think the fundamental advantage of my approach is that you can get arbitrary sample accuracy and non-constant control rates without interpolating the control data to the audio sample rate
[16:46] <Ow3n> That I agree with, but surely that's only meaningful in non-RT situations.
[16:46] <steveb> with your approach sooner or later you have to convert the control data to a constant rate - that rate will either lose some sample accuracy (by being less than the sample rate) or it will be wasteful of memory (by going at the full sample rate)
[16:48] <steveb> as far as RT goes - my #1 priority is to optimise for the real time case where the control rate is fixed and matches buffer boundaries
[16:48] <steveb> so the element API will be changing a bit to reflect that
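A sketch of the representation steveb is arguing for, as one might imagine it (this is an illustration, not the actual dparams API): control data is a sparse list of timestamped points, interpolated only on demand, so accuracy is limited by the timestamp resolution rather than a control rate, and sparse data costs almost no memory. One-shot events like the note-ons mentioned below fit the same structure with the interpolation skipped.

    #include <stdint.h>

    typedef struct {
        uint64_t offset;   /* position in samples */
        float value;
    } ControlPoint;

    /* Value at 'pos', linearly interpolating between the bracketing
     * points.  Assumes n >= 1 and pts sorted by ascending offset. */
    static float
    control_value_at (const ControlPoint *pts, int n, uint64_t pos)
    {
        int i;
        if (pos <= pts[0].offset)
            return pts[0].value;
        for (i = 1; i < n; i++) {
            if (pos < pts[i].offset) {
                float t = (float) (pos - pts[i - 1].offset)
                        / (float) (pts[i].offset - pts[i - 1].offset);
                return pts[i - 1].value
                     + t * (pts[i].value - pts[i - 1].value);
            }
        }
        return pts[n - 1].value;   /* hold the last value */
    }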
[16:49] Action: Ow3n is on the phone
[16:50] <Ow3n> Sorry about that.
[16:51] <Ow3n> I think in most cases elements' inputs will be set to a constant value.
[16:51] <Ow3n> The next most common case...
[16:51] <Ow3n> will be things like level where sample accuracy doesn't matter.
[16:52] <Ow3n> The least common case will be where control data needs to be convolved with stream data at the sample rate.
[16:53] <Ow3n> Perhaps our difference of opinion on this stems from the applications we envisage this being used in.
[16:53] <Ow3n> I'm picturing "analogue" synthesizers.
[16:54] <Ow3n> There, you often attach variable control data to elements...
[16:54] <Ow3n> Often, even filtering the control data (e.g. comb filters)
[16:55] Action: steveb is reorganising his furniture
[16:55] <Ow3n> I've used Reaktor quite a lot - which is the best synthesizer software I've seen - and that treats control as an ordinary data stream.
[16:55] <Ow3n> aRts does too I believe.
[16:56] <Ow3n> The thing is there are quite a few applications where there will be a need to filter control data and pass it around from element to element...
[16:57] <steveb> another case you haven't mentioned is events which are very sparse but need to be sample accurate - like note-on events
[16:57] <Ow3n> and that is a non-trivial task. Or, indeed, it would be were it not for the fact that gstreamer is already very good at doing just that.
[16:58] <Ow3n> Yes, that's true. And for those I'd still advise turning them into a continuous data stream (albeit at a lower sample rate than the audio)
[16:58] <Ow3n> Otherwise implementing pitch bend will become very complex
[16:58] <steveb> i agree that filtering control data through multiple 'entities' would be handy
[16:59] <Ow3n> Or applying an LFO to the pitch to give a vibrato effect.
[16:59] <Ow3n> I also agree that interpolation would be very handy.
[16:59] <Ow3n> I definitely see a lot of value in your approach.
[17:00] <steveb> i always wanted to have control data like MIDI flowing between elements - how about this for a proposal...
[17:00] <Ow3n> uh-huh
[17:01] <steveb> we can create elements which deal with control data, control data can move from element to element being transformed. But...
[17:02] <steveb> when that data comes to be actually used to control something, it stops flowing through pads and feeds its data up to the dynparams API
[17:03] <steveb> and any elements which process audio-rate data get all their control information from dynparams
[17:04] <steveb> i know erik is dead against elements getting their control data from a pad - although he hasn't said why in the context of this issue
[17:05] <Ow3n> So dynparams is transforming the low-sample rate control data up to the audio sample rate using some specified interpolation method.
[17:06] <steveb> no, it stays at whatever rate it comes as - whether it's constant or not, sparse or sporadic
[17:06] <Ow3n> I had some conversations with Erik about that a while ago but I think the main thing he was against was elements receiving control _events_ e.g. EOS through pads.
[17:06] <steveb> ok
[17:07] <Ow3n> Yes, sorry. That's what I meant - I was just thinking of the low sample rate case.
[17:07] <Ow3n> But, yes. This I like much better.
[17:07] <Ow3n> Now, taking the MIDI example...
[17:07] <steveb> yep
[17:07] <Ow3n> A note-on comes in...
[17:08] <Ow3n> It then goes into an X2 multiplier, i.e. raising the pitch an octave.
[17:08] <Ow3n> Does it come out of the multiplier as a converted MIDI event?
[17:09] <steveb> we could write elements which have midi on src and sink, but it might be better if we convert midi to some internal gstreamer control type
[17:09] <steveb> then do most manipulation on that internal control type
[17:10] <Ow3n> Yes. That's where I believe there's not much value in typing the control data.
[17:10] <steveb> you mean unit of measurement stuff?
[17:10] <Ow3n> That same data could be used to control an oscillator or a volume element.
[17:10] <Ow3n> Yes.
[17:12] <steveb> in this case I agree - the control data is just numbers flying about.  That information makes the job of building GUIs much easier (even ladspa has a subset of what I proposed)
[17:12] <steveb> ...
[17:15] <steveb> but at some point there is an element which takes control data and feeds it to some plugins - this is where things like boundary meta info is useful because the control data can be mapped to what the plugin actually wants (if that is what is required in this case)
[17:17] <Ow3n> Yes. That's true.
[17:17] <Ow3n> I'm just a bit worried that it may be giving developers enough rope to hang themselves though...
[17:18] <steveb> which developers? elements or apps?
[17:19] <Ow3n> For example, one developer could decide that his level input should take a range between 0 and 10. The next, between 0 and 100, the next between 0.0 and 1.0 etc. etc...
[17:20] <Ow3n> My concern is that allowing too much flexibility in the specification of the control data will create unnecessary overhead in converting the data between people's personal preferences.
[17:21] <Ow3n> As I see it, there should only be two ranges required, 0 to 1.0 and -1.0 to 1.0
[17:22] <steveb> element developers should specify whatever is most efficient for their application - the whole point is that they don't have to standardise on something which is human-readable (or standard).  The less param scaling that goes on within elements, the less cpu it will use.
[17:25] <Ow3n> Yes, but it's a two-edged sword. Take two elements which _could_ have used any kind of scale but which settled on two different arbitrary scales for their inputs and outputs...
[17:25] <Ow3n> When connected together you have the needless scaling problem back.
[17:26] <Ow3n> But I'm not so bothered by that any more :)
[17:27] <Ow3n> After arguing with omega on that one I had to concede that, at least as far as elements' data streams go, the way it's done now with meta types is better for the common case.
[17:28] <steveb> as long as the boundaries are explicit, these discrepancies will be mostly hidden and irrelevant.  Especially if they state their unit of measurement :)
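With explicit boundaries, the remapping between two elements' arbitrary ranges is purely mechanical, e.g. (hypothetical helper):

    /* Map v from [from_min, from_max] onto [to_min, to_max].
     * Assumes from_min != from_max. */
    static float
    rescale (float v, float from_min, float from_max,
             float to_min, float to_max)
    {
        return to_min + (v - from_min) * (to_max - to_min)
                      / (from_max - from_min);
    }

    /* e.g. rescale (7.5f, 0.0f, 10.0f, 0.0f, 1.0f) == 0.75f */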
[17:30] <Ow3n> Yes. Anyway, do we agree then that any element <control, control...> doesn't use dparams, but any element <control, audio...> will use dparams to convolve the control data set onto the audio data set?
[17:30] <steveb> yes! who will give omega the good news :)
[17:31] <Ow3n> :)
[17:31] <Ow3n> And, also that the implication from this, that elements can pass control data...
[17:31] <Ow3n> Who will give omega the bad news :)
[17:32] <steveb> pluggable types is what gstreamer is all about, baby
[17:32] <Ow3n> Indeed it is.
[17:33] <Ow3n> I'd better go and get some food now.
[17:33] <Ow3n> Looks like barbecue weather.
[17:34] <steveb> yeah, gotta go too
[17:34] <Ow3n> Catch you l8r
[17:34] Ow3n (owen at ti34a80-0852.bb.online.no) left irc: [x]chat
[17:44] Nick change: steveb -> steveb_not_here
[17:46] taazzzz (dlehn at 66.37.66.32) left irc: Reconnecting
[17:46] taazzzz (dlehn at 66.37.66.32) joined #gstreamer.
[17:46] Nick change: taazzzz -> taaz
[20:01] Nick change: wtay -> wtay-snooker
[21:25] Uraeus (cschalle at c224s9h5.upc.chello.no) joined #gstreamer.
[21:26] <Uraeus> hi
[22:55] omega_ (omega at omegacs.net) joined #gstreamer.
[23:59] Uraeus (cschalle at c224s9h5.upc.chello.no) left irc: syntax error - user imploded
[00:00] --- Sun May 13 2001
[01:08] Nick change: steveb_not_here -> steveb
[01:08] <steveb> yo
[01:08] <ajzzzz> hi
[01:08] Nick change: ajzzzz -> ajmitch
[01:13] <steveb> wow, Douglas Adams has died
[01:17] <steveb> sleep
[01:17] steveb (steveb at node1ee04.a2000.nl) left irc: [x]chat
[02:35] Nick change: ajmitch -> ajbusy
[02:39] Nick change: wtay-snooker -> wtay
[02:52] Nick change: ajbusy -> ajmitch
[03:45] Nick change: wtay -> wtay-zZz
[05:18] kagedal (simon at hepburnix.cs.uoregon.edu) joined #gstreamer.
[06:21] ajmitch (ajmitch at p11-max6.dun.ihug.co.nz) left irc: http://www.freedevelopers.net
[06:24] ajmitch (ajmitch at p11-max6.dun.ihug.co.nz) joined #gstreamer.
[06:24] ajmitch (ajmitch at p11-max6.dun.ihug.co.nz) left irc: Read error to ajmitch[p11-max6.dun.ihug.co.nz]: EOF from client
[06:24] ajmitch (ajmitch at p11-max6.dun.ihug.co.nz) joined #gstreamer.