[Accessibility] TTS API interface description + reqs update

Jonathan Duddington jsd at clara.co.uk
Sat Apr 29 14:47:17 PDT 2006


In article <1146313875.4101.138.camel at chopin>,
   Hynek Hanke <hanke at brailcom.org> wrote:

> * We decided points A.4.9-A.4.10 speaking about events and index
> marks should be reworked as they were not clear enough. Willie
> Walker suggested that apart from words, there should also
> be sentence events. Please see their current version for more details.

1.  Is it useful to have both word-start and word-end events?
And both sentence-start and sentence-end?
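One case where paired events would earn their keep is word highlighting in
a reading application: highlight on word-start, clear on word-end. A
minimal sketch, assuming a hypothetical callback-style event interface
(the event names and the Highlighter class are invented for illustration,
not part of any agreed API):

```python
from enum import Enum, auto

class EventType(Enum):
    # Hypothetical event types for the proposed TTS API
    WORD_START = auto()
    WORD_END = auto()
    SENTENCE_START = auto()
    SENTENCE_END = auto()

class Highlighter:
    """Tracks which word should currently be highlighted."""
    def __init__(self):
        self.highlighted = None  # (start, end) char offsets, or None

    def on_event(self, event_type, pos):
        # pos: (start, end) character offsets reported by the synthesizer
        if event_type is EventType.WORD_START:
            self.highlighted = pos   # light up the word being spoken
        elif event_type is EventType.WORD_END:
            self.highlighted = None  # clear once the word is finished

hl = Highlighter()
hl.on_event(EventType.WORD_START, (0, 5))
assert hl.highlighted == (0, 5)
hl.on_event(EventType.WORD_END, (0, 5))
assert hl.highlighted is None
```

With only word-start events the application could infer the end of one
word from the start of the next, but it would have no event for the end
of the final word of an utterance; that is one argument for keeping both.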

2.  A say_text() call can speak the text from a specified word or
sentence. Would speaking from a specified character position also be
useful?  Consider an application where the user places a caret in the
text and asks to speak from that position onwards.  The application
can't ask to speak from a word number, because its counting of words
may differ from the synthesizer's.  Of course, it could just send the
text from that position, but if the text has SSML then it would be more
convenient for the application to send the whole <speak> </speak> block
so that it can leave the job of parsing the SSML tags to the
synthesizer.
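The character-position idea can be sketched as follows, assuming a
hypothetical say_text(text, from_char=N) signature (the name and
parameter are invented for illustration): the application hands over the
whole document plus a character offset, and the synthesizer suppresses
output before that offset, so the application never has to count words
itself.

```python
def say_text(text, from_char=0):
    """Hypothetical call: speak `text` starting at character offset
    `from_char`. The returned string stands in for the audio output.
    With SSML input, the application would pass the whole
    <speak>...</speak> block and the synthesizer would parse the tags."""
    return text[from_char:]

doc = "The quick brown fox jumps over the lazy dog."
# The user places the caret before "jumps" and asks to speak onwards;
# the offset comes straight from the caret position in the text widget.
caret = doc.index("jumps")
assert say_text(doc, from_char=caret) == "jumps over the lazy dog."
```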
