DRM security flaws and security levels.
Rob Clark
robdclark at gmail.com
Mon Apr 14 06:09:31 PDT 2014
On Mon, Apr 14, 2014 at 8:56 AM, Thomas Hellstrom <thellstrom at vmware.com> wrote:
> On 04/14/2014 02:41 PM, One Thousand Gnomes wrote:
>>> throw out all GPU memory on master drop and block ioctls requiring
>>> authentication until master becomes active again.
>> If you have a per driver method then the driver can implement whatever is
>> optimal (possibly including throwing it all out).
>>
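
(Just to make the idea concrete: a rough, untested sketch of what such
a per-driver hook could do on master drop.  The "foo" driver, the
helper names, and the hook signature are all made up for illustration;
only the drm_device/dev_private bits are the real thing.)

#include <drm/drmP.h>

/* hypothetical: evict everything and gate acceleration on master drop */
static void foo_master_drop(struct drm_device *dev)
{
        struct foo_device *foo = dev->dev_private;

        /* throw out every gpu buffer so the next master cannot read
         * stale contents left behind by the previous one */
        foo_gem_evict_everything(foo);

        /* AUTH-only ioctls fail until a master becomes active again */
        foo->accel_disabled = true;
}
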
>>> -1: The driver allows an authenticated client to craft command streams
>>> that could access any part of system memory. These drivers should be
>>> kept in staging until they are fixed.
>> I am not sure they belong in staging even.
>>
>>> 0: Drivers that are vulnerable to any of the above scenarios.
>>> 1: Drivers that are immune to all of the above scenarios but allow any
>>> authenticated client with an *active* master to access all GPU memory. Any
>>> enabled render nodes will be insecure, while primary nodes are secure.
>>> 2: Drivers that are immune to all of the above scenarios and can protect
>>> clients from accessing each other's gpu memory:
>>> Render nodes will be secure.
>>>
>>> Thoughts?
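
(fwiw, this is roughly the mechanism render nodes already use: a driver
whitelists individual ioctls, and only the ones marked DRM_RENDER_ALLOW
are reachable via /dev/dri/renderD*.  The "foo" driver and its ioctls
below are made up; the macro and flags are the real ones.)

#include <drm/drmP.h>

static const struct drm_ioctl_desc foo_ioctls[] = {
        /* command submission is reachable from render nodes, so exposing
         * it is only ok at "level 2" (per-client gpu memory isolation) */
        DRM_IOCTL_DEF_DRV(FOO_GEM_SUBMIT, foo_gem_submit,
                          DRM_UNLOCKED | DRM_AUTH | DRM_RENDER_ALLOW),
        /* modeset-ish ioctls stay master-only on the primary node */
        DRM_IOCTL_DEF_DRV(FOO_SET_SCANOUT, foo_set_scanout,
                          DRM_UNLOCKED | DRM_MASTER),
};
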
>> Another magic number to read, another case to get wrong where the OS
>> isn't providing security by default.
>>
>> If the driver can be fixed to handle it by flushing out all GPU memory
>> then the driver should be fixed to do so. Adding magic udev nodes is just
>> adding complexity that ought to be made to go away before it even becomes
>> an API.
>>
>> So I think there are three cases
>>
>> - insecure junk driver. Shouldn't even be in staging
>> - hardware isn't smart enough, or perhaps has a performance problem, so
>> the driver sometimes flushes all buffers away on a switch
>> - drivers that behave well
>>
>> Do you then even need a sysfs node and udev hacks (remembering not
>> everyone even deploys udev on their Linux-based products)?
>>
>> For the other cases
>>
>> - how prevalent are the problematic older user space drivers nowadays?
>>
>> - the fix for "won't fix" drivers is to move them to staging, and then
>> if they are not fixed or do not acquire a new maintainer who will,
>> delete them.
>>
>> - if we have 'can't fix' drivers then it's a bit different and we need to
>> understand better *why*.
>>
>> Don't screw the kernel up because there are people who can't be bothered
>> to fix bugs. Moving them out of the tree is a great incentive to find
>> someone to fix it.
>>
>
> On second thought I'm dropping this whole issue.
> I've brought this and other security issues up before but nobody really
> seems to care.
I wouldn't say that.. render-nodes, dri3/prime/dmabuf, etc, wouldn't
exist if we weren't trying to solve these issues.
Like I said earlier, I think we do want some way to expose the range of
supported security levels, and, if a driver supports multiple levels,
some way to configure the desired level.
Well, "range" may be overkill; I only see two sensible values: either
"gpu can access anyone's gpu memory (but not arbitrary system
memory)" or "we can also do per-process isolation of gpu buffers".
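
Ignoring for a moment Alan's point about magic sysfs/udev knobs,
something like this would be one possible shape (completely made up,
neither the parameter nor the attribute exists in any driver):

#include <linux/module.h>
#include <linux/device.h>

/* 1 = any client can access any gpu memory (but not system memory),
 * 2 = per-client isolation of gpu buffers */
static int foo_security_level = 2;
module_param_named(security_level, foo_security_level, int, 0444);
MODULE_PARM_DESC(security_level,
        "1=shared gpu memory, 2=per-client gpu buffer isolation");

static ssize_t security_level_show(struct device *dev,
                                   struct device_attribute *attr, char *buf)
{
        return sprintf(buf, "%d\n", foo_security_level);
}
static DEVICE_ATTR_RO(security_level);
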
Of course the "I am a root hole" security level has no place in the
kernel.
BR,
-R
> /Thomas