<div dir="ltr">Hi Steve, thanks for the thoughtful response.<br><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Feb 27, 2014 at 3:27 PM, Dodier-Lazaro, Steve <span dir="ltr"><<a href="mailto:s.dodier-lazaro.12@ucl.ac.uk" target="_blank">s.dodier-lazaro.12@ucl.ac.uk</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hello Jasper,<br>
<br>
A quick reply on some of your emails (grouped to avoid spamming the ML).<br>
<div class=""><br>
> My experience with PAM and similar "pluggable security modules" is that<br>
> they provide a subpar user experience, are hard to integrate properly into<br>
> the system, and have large pain points that stem from having such<br>
> flexibility.<br>
<br>
</div>I'm really unsure what you mean by subpar UX here. Subpar involves a comparison<br>
with another approach, but then it'd be easier for us to discuss what you think<br>
is wrong in PAM if you told us what you think would work better and why.<br>
Otherwise it's hard for us to see the parallel you're making with WSMs and<br>
understand what it is you're concerned about and what we should do about those<br>
concerns.<br><div class=""></div></blockquote><div><br></div><div>PAM's technical implementation allows a number of modules to be tried in order for authentication. Your API, as a PAM authentication module, is limited to four operations: ask the user a non-secret question (with a textual response), ask the user a secret question (with a textual response), tell the user a piece of information (with no guarantee how this will be displayed, or if it will be erased upon the next request), or tell the user about an error (again, with no guarantee).<br>
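<br>For concreteness, this is roughly what that API looks like from the application side: everything funnels through a single conversation callback, and msg_style can only take the four values handled below. (A toy sketch against Linux-PAM, using stdio where a compositor would put up real widgets.)<br>
<pre>
#include &#60;security/pam_appl.h&#62;
#include &#60;stdio.h&#62;
#include &#60;stdlib.h&#62;
#include &#60;string.h&#62;

/* Minimal PAM conversation callback: every interaction a module can have
 * with the user funnels through here. */
static int
conversation (int num_msg, const struct pam_message **msg,
              struct pam_response **resp, void *appdata_ptr)
{
    struct pam_response *replies = calloc (num_msg, sizeof (*replies));
    char buf[256];

    if (replies == NULL)
        return PAM_CONV_ERR;

    for (int i = 0; i &#60; num_msg; i++) {
        switch (msg[i]->msg_style) {
        case PAM_PROMPT_ECHO_ON:   /* non-secret question, textual answer */
        case PAM_PROMPT_ECHO_OFF:  /* secret question, textual answer */
            printf ("%s", msg[i]->msg);
            if (fgets (buf, sizeof (buf), stdin) == NULL)
                buf[0] = '\0';
            buf[strcspn (buf, "\n")] = '\0';
            replies[i].resp = strdup (buf);
            break;
        case PAM_TEXT_INFO:        /* informational message */
        case PAM_ERROR_MSG:        /* error message */
            /* No guarantee about how, or for how long, this is shown. */
            fprintf (stderr, "%s\n", msg[i]->msg);
            break;
        }
    }

    *resp = replies;
    return PAM_SUCCESS;
}
</pre>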
<br></div><div>This directly limits the types of authentication you can do: you can't easily do out-of-band authentication UI like facial recognition unless the thing using PAM knows enough about the stack to put up a webcam display.<br>
<br></div><div>In addition, PAM only runs one module at a time, so it's impossible to offer several methods concurrently (enter your PIN, swipe your fingerprint, or wave a smartcard to unlock); the user first has to fail the PIN three times before PAM moves on to the next module in the stack.<br>
<br>This flexibility on the part of the system builder means it's hard for us to build an API. Additionally, it's hard to do things like automatically log the user in unless we make the assumption that pam_unix is the first entry in the stack, and simply replay the user's password to the first secret question request we get. (This might sound silly, but this is a problem we had with our Initial Setup tool, which already takes the user's password as part of the setup process).<br>
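<br>In case "replay" isn't clear, here's a sketch of that trick: the stored password is passed in through appdata_ptr and blindly handed to every secret prompt, in the hope that pam_unix is the module asking. (Toy code again.)<br>
<pre>
#include &#60;security/pam_appl.h&#62;
#include &#60;stdlib.h&#62;
#include &#60;string.h&#62;

/* Hack: answer every secret prompt with the password we already have,
 * on the blind assumption that pam_unix is what's asking for it. */
static int
replay_conversation (int num_msg, const struct pam_message **msg,
                     struct pam_response **resp, void *appdata_ptr)
{
    const char *stored_password = appdata_ptr;
    struct pam_response *replies = calloc (num_msg, sizeof (*replies));

    if (replies == NULL)
        return PAM_CONV_ERR;

    for (int i = 0; i &#60; num_msg; i++) {
        if (msg[i]->msg_style == PAM_PROMPT_ECHO_OFF)
            replies[i].resp = strdup (stored_password);
    }

    *resp = replies;
    return PAM_SUCCESS;
}
</pre>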
<br></div><div>Obviously, there's nothing fundamentally wrong with the security model of PAM; the problems are all in the implementation details. But I'm afraid we're going to run into the same trap here: you're going to write a specification and code for a WSM framework, and the compositor might not pass a module enough data for it to decide whether to allow or deny a given application.<br>
<br></div><div>Has this same application spammed the screenshot app more than 10 times in the last minute? How much disk space is it using right now? Has the user tried to kill this application before for misbehaving? Do we have any data on record about this app? Are there updates to this application available?<br>
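<br>To put it another way: answering those questions needs data like the following, almost none of which naturally lives in the compositor. (A hypothetical struct, purely to illustrate the point.)<br>
<pre>
#include &#60;stdbool.h&#62;
#include &#60;stdint.h&#62;

/* Hypothetical: context a policy module might want before answering
 * "may this client take a screenshot?".  None of this is data a
 * compositor has on hand to feed through a narrow WSM hook. */
struct wsm_decision_context {
    const char *app_id;
    unsigned    screenshot_requests_last_minute; /* rate of API use */
    uint64_t    disk_usage_bytes;                /* filesystem layer */
    bool        user_killed_it_before;           /* session history */
    bool        update_available;                /* package manager */
};
</pre>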
</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class="">
> My compositor, mutter, will probably never call out to your "WSM", and<br>
> we'll probably defer to another application authorization mechanism,<br>
> probably the same one that provides application sandboxing, and other such<br>
> capabilities. I'd also recommend that you go ahead and talk to the people,<br>
> and perhaps even help build that mechanism, which isn't specific to<br>
> Wayland, but will also cover DBus requests, system calls, and more.<br>
<br>
</div>The idea of security modules is not that you decide which modules to speak to or<br>
what modules do, but rather that a number of operations in your compositor can<br>
cause security issues, and that these operations can be monitored and a policy<br>
enforced over them. What Martin speaks about in his blog post is mostly the<br>
infrastructure allowing this; in short, he's thinking about how to implement<br>
complete mediation of the interactions between apps that occur in the graphic<br>
stack. WSMs are then pieces of code that can use these hooks to decide what<br>
policy to implement. To a certain extent we also discuss what we believe would<br>
be a sensible (wrt. the current reasonable requirements of apps found on Linux<br>
systems, usability requirements and security requirements) default policy for a<br>
default WSM to enforce. <br></blockquote><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
It's not really relevant to you what the WSM does or why. A default policy would<br>
make sure that an app cannot spy on another at the graphic stack level (and this<br>
has direct consequences for e.g. when you're typing a password in a dialog - it<br>
may just stay in the corresponding process's memory and never be dumped to disk,<br>
hence a security benefit already). Of course a WSM will never mediate file<br>
system accesses, but that is not the point of it. What matters is that Wayland<br>
and compositors are the only place where you can put infrastructure for graphic<br>
stack security. I'm currently working on implementing a system with app sandboxing,<br>
and I have my own subjective and debatable idea of what the policy should be and<br>
I would expect others to disagree with me on this. Yet, what matters to me is<br>
that there is a mediation infrastructure that is flexible and complete enough<br>
for me to enforce the policy I want. It's the case in certain aspects of the<br>
system with SELinux, it's not the case at all in many many areas of userland<br>
(not relevant to this email to discuss that) including the graphic stack (even<br>
XACE does not allow me to implement a simple/easy clipboard access policy). It's<br>
not really OK to state that, because other threats exist on a system, it's not<br>
worth finding responses to threats in one's own code base (saying that it's not<br>
necessary to isolate windows in Wayland because they can steal one another's<br>
files is a bit like saying that session passwords are useless because one might<br>
as well steal a computer's HDD and read the data from there. Turns out this<br>
becomes false on the day people fix the HDD-related threat with full-disk<br>
encryption).<br></blockquote><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
The whole goal of discussing WSMs and discussing what a policy could look like<br>
is making sure that we (well, you Wayland developers, actually) don't forget to<br>
mediate some operations that may be exploited to harm users' assets later on --<br>
irrespective of whether other threats in other subsystems of an OS exist.<br>
<div class=""><br>
> Of course. That's why I'd love to have not just a "WSM" but a full<br>
> application authorization system that can be used not only for Wayland<br>
> requests but correspond to full capability management.<br>
<br>
</div>Nothing prevents a WSM from collaborating with an LSM and some other types of<br>
SMs! However, without infrastructure for WSMs in the first place, people<br>
like me would have to dump X11/Wayland/whatever's out there and rewrite a whole<br>
graphic stack. Ouch! We do need to bring security to userland incrementally<br>
because there is so much to fix, but yes the default policy will have to<br>
encompass more aspects than just who shares what GPU buffers.<br>
<div class=""><br>
> DBus is a perfect example. We should allow an app to only see the DBus<br>
> peers that it needs to see. So, org.gnome.Photos should never be able to<br>
> see or call APIs on org.kde.Konqueror or vice versa. But an app might want<br>
> a capability to interface with org.freedesktop.Telepathy. And the same app<br>
> might want access to a privileged wl_notification_shell API so it can<br>
> display a chat window in a special corner of the screen when you get a<br>
> message. And they'd probably want read/write access to "~/Personal<br>
> Data/Chat Logs/" or wherever the user configured their chat logs folder to<br>
> be, without access to "~/Porn/"<br>
<br>
</div>Here you're not speaking about infrastructure but policy. Who decides who's<br>
allowed to use the gnome photo service? Who enforces that?</blockquote><div><br></div><div>In the app sandboxing mechanism we've discussed before, an application can only use the capabilities it has requested. If it requests the "Chat" capability, then I'd assume we'd allow it to use the wl_notification_shell interface along with org.freedesktop.Telepathy, and access "~/Personal Data/Chat Logs".<br>
<br></div><div>It was just a simple example, and the details need fleshing out, but I feel it's a powerful metaphor for the user: instead of asking them about low-level details, the system works out a relatively sane high-level definition of what a role needs, and which low-level operations that role encompasses.<br>
<br></div><div>This policy certainly could be governed by a WSM, but I feel that implementing a "Wayland Security Module", a "DBus Security Module", and a "Local Filesystem Access Security Module" would be missing the point: these are all system APIs, and they should be governed together rather than separately. Letting me patch in a WSM that allows everything while keeping the DBus SM unchanged isn't much help.<br>
</div><div><br></div><div>A screenshot app might want access to wl_screenshooter and "~/Screenshots". These would be governed by some system policy.<br><br><div>I just don't think we need a generic infrastructure with hooks and plugins to enforce a policy.<br>
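<br>To make that concrete with the "Chat" and screenshot examples above, the kind of thing I have in mind is closer to a static, shared mapping from high-level capabilities to low-level permissions across all of these subsystems at once. (Purely illustrative; none of these names are real API.)<br>
<pre>
/* Hypothetical sketch: one high-level capability expands into a bundle of
 * low-level permissions spanning Wayland, DBus and the filesystem. */
struct capability {
    const char *name;
    const char *wayland_interfaces[4]; /* privileged protocol extensions */
    const char *dbus_names[4];         /* bus names the app may talk to */
    const char *fs_paths[4];           /* directories the app may touch */
};

static const struct capability capabilities[] = {
    { "Chat",
      { "wl_notification_shell" },
      { "org.freedesktop.Telepathy" },
      { "~/Personal Data/Chat Logs" } },
    { "Screenshots",
      { "wl_screenshooter" },
      { NULL /* no DBus access needed */ },
      { "~/Screenshots" } },
};
</pre>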
</div></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">How does the user<br>
manipulate such a system, how can an adversary running their own malicious app<br>
influence the system, and what can they do? It seems to me that in your<br>
example you identify the need to mediate DBus communication (SELinux apparently<br>
does that through LSM)</blockquote><div><br></div><div>As a technical note, kdbus also allows this through "endpoints", which limit which services an application can even discover. So it's not just that method calls to any interface on "org.gnome.Photos" are blocked: the name is effectively invisible and offline to the application.<br>
<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">and FS access (that's done by most LSMs). Likewise, we<br>
need to secure Wayland so that if people want a policy to prevent keyloggers and<br>
screenshots of your bank PIN, they can do it.<br>
<br>
Thinking about the full surface from which threats may arise does not require<br>
thinking about everything at the same time and trying to cram different systems<br>
with different types of IPC protocols / workflows into one single policy logic.<br></blockquote><div><br></div><div>I'm not so sure. I'd hate to invent a complex set of policies and code for such a "WSM" without thinking about DBus or filesystem access at all, only to end up with two different systems and two different policies for two different things.<br>
<br>DBus already has policy enforcement for system services in a custom XML-based rules language, and it's configured with a completely separate set of tools from SELinux. It's extremely painful to deal with as a system builder, and adding a third pluggable, potentially different system is exactly what I'd like to avoid.<br>
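<br>For those who haven't had the pleasure, that language looks roughly like this (an abridged sketch of the kind of file that lives under /etc/dbus-1/system.d/; org.example.Service is a made-up name):<br>
<pre>
&#60;busconfig&#62;
  &#60;policy context="default"&#62;
    &#60;deny own="org.example.Service"/&#62;
    &#60;deny send_destination="org.example.Service"/&#62;
  &#60;/policy&#62;
  &#60;policy user="root"&#62;
    &#60;allow own="org.example.Service"/&#62;
    &#60;allow send_destination="org.example.Service"/&#62;
  &#60;/policy&#62;
&#60;/busconfig&#62;
</pre>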
</div><div><br></div><div>Wayland and DBus are not isolated; they're going to be used in close concert. That's not hypothetical: our Wayland applications expose a DBus name, which we use to find the application menu that we show in the shell's top bar. This is code that I wrote and am running right now.<br>
<br></div><div>Solutions that acknowledge this and don't treat Wayland as separate are more valuable to me as a desktop builder.<br><br></div><div>So, let me ask a very technical question: I am writing the code to implement WSMs in mutter. What do I do? Do I scan /usr/lib/wayland-security-modules/, look for .so files, load them all with dlopen, and call wsm_module_register on them? When a client requests a privileged operation, do I call each module's wsm_module_can_this_client_do_this_operation in turn, looking for an answer from one of them?<br>
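<br>To show how underspecified this is, here is just one of the ways I could interpret it. (A hypothetical sketch: the directory, the symbol names and the "any module can veto" semantics are only my guesses from the questions above; first-answer-wins would be just as defensible.)<br>
<pre>
#include &#60;dirent.h&#62;
#include &#60;dlfcn.h&#62;
#include &#60;stdbool.h&#62;
#include &#60;stdio.h&#62;
#include &#60;string.h&#62;

#define WSM_DIR "/usr/lib/wayland-security-modules"
#define MAX_WSMS 32

typedef bool (*wsm_decide_func) (const char *client, const char *operation);

static wsm_decide_func wsms[MAX_WSMS];
static int n_wsms;

/* Guess #1: scan the directory, dlopen every .so, and register it. */
static void
load_wsms (void)
{
    DIR *dir = opendir (WSM_DIR);
    struct dirent *ent;

    if (dir == NULL)
        return;

    while ((ent = readdir (dir)) != NULL) {
        char path[1024];
        void *handle;
        void (*reg) (void);
        wsm_decide_func decide;

        if (strstr (ent->d_name, ".so") == NULL)
            continue;

        snprintf (path, sizeof (path), "%s/%s", WSM_DIR, ent->d_name);
        handle = dlopen (path, RTLD_NOW);
        if (handle == NULL)
            continue;

        reg = (void (*) (void)) dlsym (handle, "wsm_module_register");
        decide = (wsm_decide_func) dlsym (handle,
                     "wsm_module_can_this_client_do_this_operation");
        if (reg == NULL || decide == NULL || n_wsms == MAX_WSMS) {
            dlclose (handle);
            continue;
        }

        reg ();
        wsms[n_wsms++] = decide;
    }
    closedir (dir);
}

/* Guess #2: on a privileged request, ask every module and let any one
 * of them veto. */
static bool
client_may (const char *client, const char *operation)
{
    for (int i = 0; i &#60; n_wsms; i++) {
        if (!wsms[i] (client, operation))
            return false;
    }
    return true;
}
</pre>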
<br></div><div>Why would I go through the trouble of loading WSMs when I could simply use the same application sandboxing mechanism in the first place? When we implement sandboxing, we'd probably recommend that system builders don't modify the stock set of WSMs that we ship with, so why allow the indirection?<br>
</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
You can already abstract from all of that and provide reasonable standards of<br>
security/usability in each subsystem whilst assuming that other conscientious<br>
developers will do the same with their own code (or alternatively, that<br>
conscientious distributors will build a consistent UX with decent security, by<br>
combining only those subsystems that allow for a security policy to exist).<br>
<br>
When you develop mutter as a compositor, you may want to have it obey whatever<br>
the WSM does because maybe you cannot predict who will use mutter and for what,<br>
and what security policy they will need. If you hardcode a policy in mutter, or<br>
just if mutter decides what parts of a policy to obey and what parts to ignore,<br>
how can a security expert / distributor / user reliably enforce a policy? Will<br>
you provide your own policy editing interface? How will it differ from a WSM?<br>
Would it not be better to share collective experience and know-how and make<br>
something that works for everyone now and hopefully in 20 years' time?<br></blockquote><div> </div></div><div class="gmail_quote">I'd love to prevent yet-another-security-nightmare for users.<br><br>Convincing people to adjust their SELinux policy rather than copy-paste "setenforce 0" into their terminal is hard enough. I'd hate to see people who simply want to stream some Portal 2 have to read some tutorial that tells them to run "rm /usr/lib/wayland-security-modules/*" to get their job done.<br>
</div><div class="gmail_quote"><br></div><div class="gmail_quote">Having an all-in-one solution that acknowledges that DBus, Wayland, and library and system calls are all tools in one giant API is a lot better to me than implementing separate policy enforcement mechanisms that might be inconsistent and conflict with another. It makes me more sure of the user experience that we can deliver and of the security of it.<br>
<br>Keep in mind that Wayland and DBus are both IPC systems, and apps might use either technology to get the job done. The reason Wayland wasn't built on DBus was the inefficiency of the DBus daemon; with kdbus, that is solved. I've actually written a working prototype of a kdbus transport for Wayland. So I'd hate to have a lot of code enforcing policies that covers one half of the IPC on a system, but not the other.<br>
</div><div class="gmail_quote"><br>I'm fine with the WSM concept in theory. While I probably wouldn't like GNOME's security system being pluggable, but I
won't reject a patch that adds the feature for people who have a need for such functionality.<br><br>I'd really just prefer it not to be Wayland-specific. In fact, if you have a pluggable system that I think is OK enough to limit support to other system APIs, then I'll write the patch for mutter. Deal?<br>
<br></div><div class="gmail_quote">The main reason I wrote this email was because you said "if there were no complaints, we're going to start writing the code now". It's not that I don't like what you're doing or think that a security interface is absolutely bad, I'd just hate for you to rush into the implementation and not think about it for a little while longer.<br>
</div><div class="gmail_quote"><br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Finally, what prevents you from putting on your GNOME distributor hat later on and<br>
implementing the WSM that you think is best for GNOME users? This way you'd<br>
still let downstream projects adjust to their needs or to whatever threats occur<br>
in the future. Maybe a distributor/developer like me will have a different logic<br>
to their security policy and want to have their own WSM and then they will be<br>
responsible for providing a good UX with their policy and your software.<br></blockquote><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Thanks,<br>
--<br>
Steve Dodier-Lazaro<br>
PhD student in Information Security<br>
University College London<br>
Dept. of Computer Science<br>
Malet Place Engineering, 6.07<br>
Gower Street, London WC1E 6BT<br>
OpenPGP : 1B6B1670<br>
<div class=""><div class="h5">_______________________________________________<br>
wayland-devel mailing list<br>
<a href="mailto:wayland-devel@lists.freedesktop.org">wayland-devel@lists.freedesktop.org</a><br>
<a href="http://lists.freedesktop.org/mailman/listinfo/wayland-devel" target="_blank">http://lists.freedesktop.org/mailman/listinfo/wayland-devel</a><br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br> Jasper<br>
</div></div>