[Accessibility] Re: Updated requirements document
Luke Yelavich
linuxaccess@themuso.com
Thu Jan 6 12:46:48 PST 2005
On Fri, Jan 07, 2005 at 03:34:02AM EST, Milan Zamazal wrote:
> Hello all,
>
> I'd like to ask whether there is still interest among us in building the
> common speech output system, in the sense of active participation? I'm
> not sure how to interpret the silence here over the last weeks after Willie
> Walker and I rewrote the requirements document. Possible
> interpretations are:
>
> a. The issue is no longer interesting. (Unlikely.)
This is certainly not the case, for me at least. Linux accessibility can
NOT move forward until this is solved once and for all.
> b. The issue appeared to be too difficult to solve. (Are separate
> incompatible solutions easier?)
Separate incompatible solutions are, in my mind, not the way to go.
Read on below.
> c. Everyone expects that someone else steps in and does the work.
> (Which is a typical deadlock situation, so guess what happens in such
> a case...)
I would be more than happy to help with accessibility-related work, but I
am by no means a documentation writer.
> d. People forgot to subscribe here. (But I can see 23 subscribers here,
> including most people from the former private conversation.)
I thought there would have been many more than that. :)
> e. The rewritten requirements document is too difficult to read. (Why
> not ask for clarification to help it improve, then?)
I didn't think so, although I might have to reread it to get an idea of
whether I can follow it.
> f. There are great ideas among us, but we haven't managed to present
> them here yet and to contribute to the requirements document so that
> it could be finished. (Please speak up if this is the case!)
I guess this is kind of the case. Let me explain.
It is fine to work out a common input format, as this allows for proper
multilingual support, etc. However, I think a common speech management
system or backend should be settled on first, before working out how all
software communicates with it. The chosen backend would need to be
flexible enough to add new features should a synthesizer need them, easy
to configure, and so on; I can't think of every requirement right now.
At the moment we have several backends: gnome-speech, speech-dispatcher,
the emacspeak servers, and probably more that I have missed. This means
that no single screen reader supports all synthesizers, especially in the
case of Speakup, which has its own drivers as kernel code. Out of these,
at this stage I think speech-dispatcher is the best choice; it is then a
matter of porting all the other synthesizer drivers to the
speech-dispatcher framework.
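To make the porting idea a little more concrete, here is a rough sketch,
in Python, of wrapping an existing command-line synthesizer behind one
common driver interface. The class names are made up for illustration,
it assumes Flite's -t command-line option, and it is not the actual
speech-dispatcher output module API, just the general shape of what each
port would involve:

    import subprocess

    class SynthDriver:
        """Hypothetical common interface every synthesizer driver implements."""
        def speak(self, text):
            raise NotImplementedError
        def stop(self):
            raise NotImplementedError

    class FliteDriver(SynthDriver):
        """Wraps the Flite command-line synthesizer behind the common interface."""
        def __init__(self):
            self.proc = None
        def speak(self, text):
            # flite's -t option speaks the given string aloud.
            self.proc = subprocess.Popen(["flite", "-t", text])
        def stop(self):
            # Kill the synthesizer process to cut speech off immediately.
            if self.proc and self.proc.poll() is None:
                self.proc.terminate()

    if __name__ == "__main__":
        driver = FliteDriver()
        driver.speak("Hello from a common driver interface.")

A backend like speech-dispatcher would then only ever talk to this one
interface, regardless of which synthesizer sits behind it.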
Once this is done, we can look at input formats, and at how software
that uses speech communicates with the backend.
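For what it is worth, speech-dispatcher already exposes a simple
line-based text protocol (SSIP) to clients, which gives an idea of what
that communication can look like. The following Python sketch speaks one
message through it; the port number and the exact command names are from
memory (speech-dispatcher traditionally listened on TCP port 6560 on
localhost), so treat them as assumptions rather than a reference:

    import socket

    # Assumption: speech-dispatcher listening on localhost:6560, its
    # traditional default TCP port.
    HOST, PORT = "127.0.0.1", 6560

    def command(sock, line):
        """Send one SSIP command and return the server's status reply."""
        sock.sendall((line + "\r\n").encode("utf-8"))
        return sock.recv(4096).decode("utf-8", "replace").strip()

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((HOST, PORT))

    print(command(sock, 'SET SELF CLIENT_NAME "user:demo:main"'))
    print(command(sock, "SPEAK"))   # server says it is ready for text

    # The message body follows, terminated by a line with a single '.'.
    sock.sendall(b"Hello from a speech-dispatcher client.\r\n.\r\n")
    print(sock.recv(4096).decode("utf-8", "replace").strip())

    print(command(sock, "QUIT"))
    sock.close()

Whatever common input format we agree on would simply ride inside the
text that a client hands to the backend in this way.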
> Well? What do you think about all that? ;-)
I intend to reread the document as stated above, and will post again if
I think of anything. I would also appreciate it if my backend suggestion
were considered, although this is probably not something to be worked
out on this list.
Luke