[pulseaudio-discuss] Multiple users (kde) on Debian

Halim Sahin halim.sahin at freenet.de
Wed Aug 25 08:25:55 PDT 2010

On Mi, Aug 25, 2010 at 03:46:56 +0100, Colin Guthrie wrote:
> > FWIW, I agree that's the best approach.
> > But aren't you PA guys actively fighting this idea? You strongly advise against
> > system mode. 
> No, you're misunderstanding what I'm suggesting.
> I'm not referring to PA, I'm talking about your speech system.
> Halim said previously:
> > A big problem is that the same user has to start several instances of
> > sbl on every console which is hard to implement.
> This is what I was referring to when I said you should not run several
> instance of it. Your systems should be split cleanly so that they
> provide a single system wide service that gathers all the necessary
> information (i.e. what is on screen to go via tts), but they should
> *not* actually play the speech they generate, but rather *make it
> available* to whomever wants to consume it.

That's the point: how should the screenreader (sbl) make its information
available for consumers that connect later?
The information coming from the screenreader should be
processed immediately.
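For concreteness, the split being proposed above (a system-wide service that gathers speech text, and per-user agents that consume and play it) could be sketched roughly like this. All names here are invented stand-ins, and the socketpair is just a placeholder for a real Unix socket — this is an illustration of the model, not an existing API:

```python
import socket

class SpeechBroker:
    """Hypothetical system-wide service: holds speech events and hands
    them to whichever per-user agent is connected."""

    def __init__(self):
        # Stand-in for a real system-wide Unix socket the agents connect to.
        self._service_side, self._agent_side = socket.socketpair()

    def publish(self, text):
        # Screenreader side: make the text *available* instead of
        # playing it directly.
        self._service_side.sendall(text.encode("utf-8") + b"\n")

    def agent_receive(self):
        # Per-user agent side: consume one line of speech text and
        # decide locally how (or whether) to speak it.
        data = b""
        while not data.endswith(b"\n"):
            data += self._agent_side.recv(1)
        return data.decode("utf-8").rstrip("\n")

broker = SpeechBroker()
broker.publish("window focused: Terminal")
print(broker.agent_receive())
```

The open question in this thread is exactly the part this sketch glosses over: what happens to published text while no agent is connected, given that screenreader output has to be processed immediately.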

> Users (or pseudo users) will run lightweight agents which can connect to
> this system-wide resource and play it accordingly.

Well, we don't have such a tts system.
Just for clarification:
It is not the tts system that connects to the screenreader.
The screenreader connects to a supported text-to-speech system.
And that isn't possible due to the new design of the startup process.

If we first start a pseudo session on the plain consoles (I don't know
whether this is possible), the screenreader could connect to it at that
point. But the connection will become invalid when the user logs in:
the speech-dispatcher running in the pseudo session
would lose access to pulseaudio, the pulseaudio started at that time
will suspend, and a new pulseaudio takes control of the audio system in
the user's context.
Now the screenreader won't be able to output anything because its
connection is no longer valid.
Starting the screenreader in the user's session is not an option either.
Connecting to a running speech-dispatcher in the user's context would be
possible if you allow inet socket connections, but an additional
restart of the screenreader is needed anyway.
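For reference, allowing inet connections is a speech-dispatcher configuration matter; in the speechd.conf of that era something along these lines controlled it (option names quoted from memory — check the speechd.conf shipped with your version):

```
# /etc/speech-dispatcher/speechd.conf (fragment)
# Listen on TCP; 6560 was the traditional default port.
Port 6560
# Set to 0 to accept connections from hosts other than localhost.
LocalhostAccessOnly 1
```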

Maybe you can understand now that these things started to become a
problem for us after pa replaced the audio system in the most popular
distros.
And mails like this show us that we were not understood.
_We_ need, and we _want_, plain text consoles!

> > I'm seriously confused as to whether you're telling Halim here "you need more
> > effort than just dmix" or in fact "PA is not (and won't ever be) for you".
> I'm saying that the way that the tts systems are working need to be revised.
And who should implement this?
You have pushed the new approach without any backward compatibility,
so all other projects need to do hard work to get their software
working on current distros.

> You can either run separate full systems for each login pseudo session,
> or you can split things out into server-client model whereby the server
> runs as a system service and agents, running as users, connect to it and
> playback the sound (and/or push visuals onto the X11 window). The agents
> run as the local user (real or pseudo) and no unauthorised process has
> access to the sound or display h/w when they shouldn't.

That is the point.
1. is not possible: a screenreader running on a text console can't work
in that model.
2. The screenreader is a client, not a server.
The tts system is the server, and it is the screenreader that connects to it.
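To illustrate that direction of connection: the screenreader acts as a client speaking speech-dispatcher's line-based SSIP protocol. A simplified sketch of how such a client might frame the payload of a SPEAK command is below — the leading-dot escaping follows the SMTP-style convention SSIP borrows, but this is an illustration from memory, not the full protocol:

```python
def ssip_speak_payload(text):
    """Frame text the way an SSIP-like client sends it after a SPEAK
    command: escape lines beginning with '.', then terminate the
    message with a line containing a lone dot (simplified sketch)."""
    lines = []
    for line in text.split("\n"):
        if line.startswith("."):
            line = "." + line  # dot-stuffing, as in SMTP
        lines.append(line)
    return "\r\n".join(lines) + "\r\n.\r\n"

# The client (screenreader) would write this to the server's socket:
print(repr(ssip_speak_payload("hello world")))
```

The server end of that socket is speech-dispatcher; nothing in the screenreader listens for incoming connections, which is why the quoted server/agent split doesn't map onto it directly.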

> This is a model that gives users control. Don't want the a11y? Then kill
> the agent. It is not forced upon the user by the sysadmin. In the case
> of login pseudo sessions, then yes, this is a sysadmin choice, but
> that's acceptable IMO.
BTW: I am not aware that Canonical is doing any work on this.
They simply moved the tts system to userspace, and that's it.

More information about the pulseaudio-discuss mailing list