Summary of the security discussions around Wayland and privileged clients
s.dodier-lazaro.12 at ucl.ac.uk
Thu Feb 27 18:36:52 PST 2014
> Hi Steve, thanks for the thoughtful response.
> PAM's technical implementation allows a number of modules to be tried in
> order for authentication. Your API, as a PAM authentication module, is
> limited to four operations: ask the user a non-secret question (with a
> textual response), ask the user a secret question (with a textual
> response), tell the user a piece of information (with no guarantee how this
> will be displayed, or if it will be erased upon the next request), or tell
> the user about an error (again, with no guarantee).
> This directly limits the types of authentication you can do: you can't
> easily do out-of-bound authentication UI like facial detection unless the
> thing using PAM knows enough about the stack to put up a webcam display.
> In addition, PAM only supports one module at a time, meaning it's
> impossible to run multiple at the same time (enter your PIN, swipe your
> fingerprint, or wave a smartcard to unlock); the user first has to fail
> three times on the PIN before moving to the next module in the stack.
On the one hand that is a problem, but on the other hand, if a module were run in
parallel with another one, a policy would be needed to decide what to do when
one returns "Yes" and the other returns "No". Face detection or fingerprint
detection is typically not that hard to spoof, and an attacker will always take
the easiest way in if users get to choose which PAM module authenticates them.
I'm getting off-topic, though!
> This flexibility on the part of the system builder means it's hard for us
> to build an API. Additionally, it's hard to do things like automatically
> log the user in unless we make the assumption that pam_unix is the first
> entry in the stack, and simply replay the user's password to the first
> secret question request we get. (This might sound silly, but this is a
> problem we had with our Initial Setup tool, which already takes the user's
> password as part of the setup process).
> Obviously, there's nothing fundamentally wrong with the security model of
> PAM, it's all just in the implementation details. But I'm afraid we're
> going to run into the same trap here: you're going to write a specification
> and code for a WSM framework, in which we might pass it not enough data,
> and the module won't have enough data to determine whether to allow or deny
> such an application.
I would state that the issue here is that you are allowed to provide modules
that make a local security decision (on an auth factor) but that cannot also
decide on the global policy. If that's a correct translation of your concern,
then I believe I understand what you're worried about. I'll explain my personal
view on what a WSM should provide below.
> Has this same application spammed the screenshot app more than 10 times in
> the last minute? How much disk space is it using right now? Has the user
> tried to kill this application before for misbehaving? Do we have any data
> on record about this app? Are there updates to this application available?
I would rather say then that you want to make decisions on applications at a
higher level than just the compositor, possibly using compositor information
in the process. That's fine, see below!
> > Here you're not speaking about infrastructure but policy. Who decides who's
> > allowed to use the gnome photo service? Who enforces that?
> In the app sandboxing mechanism we've discussed before, an application can
> only use the capabilities it's requested. If it requests the "Chat"
> capability, then I'd assume we'd allow it to use the wl_notification_shell
> interface along with org.freedesktop.Telepathy, and access "~/Personal
> Data/Chat Logs".
> It was just a simple example, and the details need fleshing out, but I feel
> it's a powerful metaphor for the user: instead of asking about the
> low-level details, it works out a relatively sane high-level definition of
> what a role would have, and what low-level operations it might encompass.
I have rather strong opinions on how and what to sandbox, but they would be very
off-topic; I'd love to chat about this with you if you're interested in the
topic. In short, it seems to me you'd be interested in FBAC (http://schreuders.org/FBAC-LSM/).
I personally dislike the idea of per-app sandboxing and feel uneasy with having
to label apps with any kind of information on what they can be used for, even
though that approach is already quite flexible and easy to set up. I just don't
trust packagers to make security/usability decisions for me.
> This policy certainly could be governed by a WSM, but I feel that
> implementing a "Wayland Security Module", a "DBus Security Module", and a
> "Local Filesystem Access Security Module" would be missing the point: these
> are all system APIs, and should be governed together instead of separately.
> Allowing me to patch in a WSM that always allows, but keeping the DBusSM
> the same isn't much help.
> For a screenshot app, it might want access to wl_screenshooter and
> "~/Screenshots". These would be governed by some system policy.
> I just don't think we need a generic infrastructure with hooks and plugins
> to enforce a policy.
If you want to be able to grant any kind of feature to any app, then you need to
hook into it, and into any IPC to privileged processes that might be used as
confused deputies, at least. In your view, what would be the ideal approach to
monitoring access to wl_screenshooter?
> I'm not so sure. I'd hate to invent a complex set of policies and code for
> such a "WSM" without thinking about DBus or filesystem access at all, and
> in the end, we have two different systems and policies for two different
> DBus already has policy enforcement for system services in a custom
> XML-based rules language, and it's configured with a completely separate
> set of tools than SELinux. It's extremely painful to deal with as a system
> builder, and adding a third pluggable, potentially-different system is
> exactly what I'd like to avoid.
> Wayland and DBus are not isolated, they're going to be used in close
> concert together. That's not hypothetical: our Wayland applications expose
> a DBus name on them, which we use to find the application menu that we show
> in the top shell. This is code that I wrote and am running right now.
> Working on solutions that acknowledge this and don't treat Wayland as
> separate are more valuable to me, a desktop builder.
> So, let me ask a very technical question: I am writing the code to
> implement WSMs in mutter. What do I do? Do I scan
> /usr/lib/wayland-security-modules/, look for .so files, and load them all
> with dlopen, and then call wsm_module_register on them? If a client
> requests a privileged operation, do I call
> wsm_module_can_this_client_do_this_operation in order, looking for an
> answer from one of them?
> Why would I go through the trouble of loading WSMs when I could simply use
> the same application sandboxing mechanism in the first place? When we
> implement sandboxing, we'd probably recommend that system builders don't
> modify the stock set of WSMs that we ship with, so why allow the
App sandboxing will require isolation across the entire graphics stack anyway,
and you can be sure there will be a plethora of perfectly legitimate reasons for a
user to want an application to bypass its sandbox and access another app's files
or share data with other apps. My concern with any fine-grained sandboxing is in
figuring out who will maintain the rules; in particular, I don't want to see
situations where app developers or packagers are trusted to decide what
capabilities an app should be given, because the former will lie to propagate
malware and the latter will not pay attention. Custom-installed apps especially
would get their configuration straight from the app itself (and these are the most
likely to be malicious). No, I don't buy the idea that a user would review the
capabilities of an app before installing it. If that were true, nobody would ever
have installed Angry Birds on their phone.
So rather than describing up front what an app can do, I would rather decide in
context whether it makes sense for an app to access a service, based on whatever
information I have. My approach for my own system would also redirect information
from the compositor to a larger decision engine that does some FS access control
and isolation (of groups of processes rather than single processes). Discussing
that further would be off-topic, but what matters to me here is that any solution
requires some control over the compositor's actions (hence the compositor
must ask for permissions somewhere, or take orders from someone) and also
that the compositor communicates some information to the decision engine.
Now, where would WSMs stand in all that? (Disclaimer: this is my opinion, and
certainly not a definitive statement! Besides, I know nothing about Wayland's
internals compared to you or Martin!) I believe there should be an
interface of functions to implement in order to qualify as a WSM, and only
one WSM running at a time. The WSM may be a standalone .so or belong to a larger
program that implements a DBus/kdbus interface, as long as there is a clear and
unique trusted path to that WSM.
That interface would allow the WSM to: 1) receive information about
whatever events occur in the compositor / window manager, about privileged
interface requests, clipboard usage (and, generally speaking, whatever in XACE
is still relevant to Wayland), etc.; 2) react to them by informing the
compositor that they violate the security policy, providing
feedback to be handed over to the user; and 3) give instructions to the
compositor later on if necessary, after the engine implementing the interface
has used external information to make a decision about a windowed application
(path of the process, open files, plus whatever semantics your DE offers on
applications: the user reporting an app as annoying, the history of app interactions,
and so on).
On the other hand, you could have a standalone policy that already limits,
strictly at the window level, the harm that can be done (namely spying on
other apps) by restricting access to screenshots, virtual keyboards, etc., or
that prevents modal fullscreen windows...
We may also think it a good idea to let compositors expose some of their
own internals or some methods to control windows or window decorations (used by
Qubes OS, for instance; I would certainly make use of the ability to modify a
window's decorations in my own project). I'd see no issue at all with that as a
way to extend the security module's interface if needed (though I'd like to hear
more educated opinions on the matter).
One unrelated (or maybe not) point I'd like to make on this list: from the
layman user's perspective (which is not accurate, but also hardly
challengeable), they do not observe processes running, spawning an
arbitrary number of windows and doing IPC in uncontrolled ways; they observe
windows being opened and closed (plus some magic apps providing services without a
window!). I don't think it's reasonable to expect a user to conceptualise
their security requirements around anything other than their own data and what's
directly under their nose: windows. So I would assume that a good security UX
would articulate rules and feedback around windows rather than around the actual
processes/apps. This has a nasty implication given the forms of
multitasking that need to be supported: several windows may belong to the
same process, and privileges and file access are not really allocated
to a window but to a process.
> Having an all-in-one solution that acknowledges that DBus, Wayland, and
> library and system calls are all tools in one giant API is a lot better to
> me than implementing separate policy enforcement mechanisms that might be
> inconsistent and conflict with another. It makes me more sure of the user
> experience that we can deliver and of the security of it.
I think the point we're arguing about is whether or not there should be a
Wayland/compositor-only policy shipped with the interface allowing complete
mediation of the graphics stack. As far as I'm concerned, a sound default policy
on how windows interact with one another can mitigate perfectly valid and harmful
threats, provided of course that no other capabilities allow the same threats to
exist elsewhere. I don't think, however, that a compositor alone will ever be in
a position to know what else exists on the user's OS, and I don't think either
that it's the goal of a project like Wayland to redefine how file systems and IPC
work on Linux/*NIX systems in the name of security. I think it's more tractable
to go for easy goals that are within direct reach of this project, and to provide
enough infrastructure to help those like you or me who believe new desktops can
be designed with higher security.
I'm not trying to go any further in setting best practice, because I doubt that
even two people will agree on what the security/usability requirements should be,
because different DEs and contexts of use change the default policy to be used,
etc. I find it easier to fork projects, hack my code into them, build a PoC
desktop and evaluate its security and usability. Then I can come back and claim
that I found out how to provide security on Linux desktops.
> The main reason I wrote this email was because you said "if there were no
> complaints, we're going to start writing the code now". It's not that I
> don't like what you're doing or think that a security interface is
> absolutely bad, I'd just hate for you to rush into the implementation and
> not think about it for a little while longer.
That's not what *I* said; I would actually rather refine my own idea of how
the whole desktop's security model should work before giving a final draft of
what I'd like to see in WSMs, and I also believe others with different ideas
about security should do the same, so that we improve the odds of ending up
with a flexible enough interface.
PhD student in Information Security
University College London
Dept. of Computer Science
Malet Place Engineering, 6.07
Gower Street, London WC1E 6BT
OpenPGP : 1B6B1670