[Accessibility] Direction of the work (was: Updated requirements document)
Luke Yelavich
linuxaccess@themuso.com
Sat Jan 8 15:46:00 PST 2005
On Sun, Jan 09, 2005 at 01:50:10AM EST, Milan Zamazal wrote:
> LY> On Fri, Jan 07, 2005 at 03:34:02AM EST, Milan Zamazal wrote:
>
> >> c. Everyone expects that someone else steps in and does the work.
> >> (Which is a typical deadlock situation, so guess what happens in
> >> such a case...)
>
> LY> I would be more than happy to help with accessibility related
> LY> work, but I am by no means a doc writer.
>
> That's no problem, you surely know what you are good at and will know
> when you can help with something.
>
> >> d. People forgot to subscribe here. (But I can see 23
> >> subscribers here, including most people from the former private
> >> conversation.)
>
> LY> I thought there would have been many more than that. :)
>
> Maybe we've forgotten to announce the existence of this list somewhere?
When I heard about it, I don't remember it being mentioned on any of the
blind/accessibility related lists I am on. I could easily let people
know on other lists such as the speakup and blinux lists.
> LY> it is fine to work out a common input format, as this allows for
> LY> proper multilingual support, etc. However I think a common
> LY> speech management system or backend should be looked at first,
> LY> before working out how all software communicates with it. The
> LY> chosen backend would need to be flexible enough to add new features
> LY> should a synthesizer need them, easy to configure, etc. I can't
> LY> think of them all right now.
>
> The problem is that in order to implement a working speech management
> system, we also need the speech synthesizer interface. On the other
> hand, the speech synthesizer interface can be useful for more purposes.
> This is one of the reasons why I think we need to work on the TTS API
> first. Additionally, gnome-speech, KTTSD, and Speech Dispatcher all
> have some sort of TTS API now, so if we can implement a common TTS API
> superior to all of them, the benefit will be immediate even for the end
> users.
Point taken. Are the authors of all such systems on this list? If not, I
think we need to get them all on here to have their input.
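
Just to make that more concrete, the sort of driver-level API I have in
mind would look roughly like the following. Every name below is made up
purely for illustration; it is not an existing interface from any of the
projects mentioned:

/* Hypothetical common TTS driver interface -- illustration only,
 * none of these types or functions exist in any current project. */

typedef enum {
    TTS_EVENT_BEGIN,       /* synthesizer started speaking */
    TTS_EVENT_END,         /* finished or was stopped */
    TTS_EVENT_INDEX_MARK   /* reached a marker in the input text */
} tts_event;

typedef void (*tts_callback)(tts_event event, const char *mark,
                             void *user_data);

typedef struct {
    const char *name;                        /* e.g. "festival" */
    int  (*init)(void);                      /* contact/load the synthesizer */
    int  (*say)(const char *marked_up_text,  /* common input format */
                tts_callback cb, void *user_data);
    int  (*stop)(void);                      /* cancel speech immediately */
    int  (*set_voice)(const char *language, const char *voice);
    int  (*set_rate)(int rate);              /* e.g. -100 .. +100 */
    void (*close)(void);
} tts_driver;

Each synthesizer would ship one such driver, and the speech management
system, whichever one we agree on, would be the only code that ever
loads them.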
> LY> At the moment we have several backends: gnome-speech,
> LY> speech-dispatcher, emacspeak servers, and there are probably
> LY> more that I have missed. This means that no single screen reader
> LY> supports all synthesizers, especially with Speakup, which has
> LY> its own drivers as kernel code. Out of these, at this stage I
> LY> think speech-dispatcher is the best choice. It is then a matter
> LY> of porting all other synthesizer drivers to the
> LY> speech-dispatcher framework.
>
> This may be a good way, but Speech Dispatcher's output modules (speech
> synthesizer drivers) need to be improved anyway. We can't expect a good
> speech synthesizer driver set to be written and maintained without wide
> agreement. And wide agreement, including not duplicating work now or
> in the future, means that the TTS API is not a single-purpose thing
> (whatever the purpose is).
Indeed, we certainly don't want to duplicate work; that is largely what
has been happening up until now.
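
For comparison, this is roughly what an application already does today
when it talks to Speech Dispatcher through its C client library. I am
writing this from memory, so the exact prototypes may differ between
versions; check libspeechd.h on your system:

#include <stdio.h>
#include <libspeechd.h>

int main(void)
{
    /* Open a connection to the running speechd daemon.
     * (Argument list from memory; see libspeechd.h for the exact
     * prototype in your version.) */
    SPDConnection *conn = spd_open("example", "main", NULL,
                                   SPD_MODE_SINGLE);
    if (conn == NULL) {
        fprintf(stderr, "could not contact Speech Dispatcher\n");
        return 1;
    }

    /* Whatever synthesizer the output module wraps, the caller
     * only ever sees this one interface. */
    spd_say(conn, SPD_TEXT, "Hello from the common backend.");

    spd_close(conn);
    return 0;
}

The point is that once every synthesizer driver sits behind a single
interface like this, a screen reader only has to be ported once, not
once per synthesizer.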
> LY> I intend to reread the document as stated above, and will post
> LY> again if I think of anything. I would also appreciate it if my
> LY> backend suggestion were considered, although this is
> LY> probably not something to be worked out on this list.
>
> I think this list is not limited to the TTS API and is intended for
> work on other accessibility standards and common solutions in the
> future.
Agreed.
Luke