[RFC] flink_to

Dave Airlie airlied at redhat.com
Mon Jul 12 15:44:22 PDT 2010


On Mon, 2010-07-12 at 11:55 -0400, Kristian Høgsberg wrote:
> [ Let's try this again... ]
> 
> Ok, so the flink_to discussion rat-holed a bit on the binary blob
> attachment issue.  But before we even get to that, there are a lot of
> issues that I'd like some feedback on as well.  There were a few bugs in
> the patch that I've fixed, but I don't see the point in sending it out
> again just yet, as I'd like to see if we can agree on some of the
> higher level issues.
> 
> First off, I'm hoping we can get this ready for the next merge window.
> The patch is written and I've tested it here with a libdrm test case
> and am currently finishing the EGL support for this.  One change I
> might want to do is to add a blob type argument to the ioctls so
> userspace has a standardized way of indicating what the format of the
> data is (what Keith pointed out).  That's a fairly simple change and
> the patch itself is simple enough to begin with, so I don't expect a
> lot of tricky issues with the implementation.
> 
> Along with the flink_to ioctl, I'm proposing that we drop the DRM_AUTH
> requirement for accessing the gem ioctls.  Specifically, for intel,
> I'm suggesting that we drop the DRM_AUTH requirement on
> 
>   DRM_IOCTL_GEM_FLINK,
>   DRM_IOCTL_GEM_OPEN,
>   DRM_I915_GEM_EXECBUFFER,
>   DRM_I915_GEM_EXECBUFFER2,
>   DRM_I915_GEM_BUSY,
>   DRM_I915_GEM_THROTTLE,
>   DRM_I915_GEM_CREATE,
>   DRM_I915_GEM_PREAD,
>   DRM_I915_GEM_PWRITE,
>   DRM_I915_GEM_MMAP,
>   DRM_I915_GEM_MMAP_GTT,
>   DRM_I915_GEM_SET_DOMAIN,
>   DRM_I915_GEM_SW_FINISH,
>   DRM_I915_GEM_SET_TILING,
>   DRM_I915_GEM_GET_TILING,
>   DRM_I915_GEM_GET_APERTURE,
>   DRM_I915_GEM_MADVISE,
> 
> which should allow clients to create buffers, submit rendering against
> them and share them with a privileged (in the sense that it controls
> scanout) display server.  Access to these ioctls will then only be
> guarded by the permissions on /dev/dri/cardX, which all distros
> restrict to the 'local' user (that is, excluding ssh and similar) in
> one way or another.  How do we feel about that?  Maybe it's something
> the master needs to request, so that only X servers that use flink_to
> will activate this mode?

I'd rather we only did this if we knew everyone was going to use
flink_to, and then perhaps make sure normal flink stops working
entirely once someone has started using flink_to.

But otherwise it all sounds good.

> 
> flink_to doesn't in itself solve the security problem, since user
> space can still submit a batch buffer that reads or writes to an
> absolute gtt offset (that is, no relocation).  The X front buffer
> location is typically pretty predictable, for example.  flink_to does
> give us the infrastructure to implement a secure system though.  There
> are several ways this could be done: use a sw command checker to
> reject absolute gtt offsets, unbind buffers from all other clients
> before executing the commands, or use per-process gtt or
> similar hw support.

Doesn't solve the security problem for *Intel*. On radeon, for example,
we've always provided this type of security; GEM's interface is the only
hole in that case (apart from the sw checker maybe missing some cases).
So I'm quite happy that this is the direction we'd prefer.

> Then there's the data passing mechanism part of flink_to.  I'm
> suggesting that we allow applications to attach a blob to an flink_to
> name, which will be passed to the process calling open_with_data on
> the name.  The format of the blob is defined by userspace, typically
> libdrm or mesa, and lets us marshal meta data about the buffer along
> with granting access to the buffer.  And just to be clear, the kernel
> has no need for this meta data; it doesn't even understand the format.
> But it will make protocols and user-level APIs simpler, and it's not
> going to be a resource drain in the kernel.  There's a 2k max size on
> the attached data, and a buffer can only have one flink_to name
> pending per file_priv.  I didn't see any strong objections in the
> thread, but I understand the concerns.  We're debating a minimal
> kernel API with a kludgey userspace vs kernel API with convenience
> features and much simpler userspace.

It's really ugly, and it's really going to end up as ABI, except the
people hacking on the X server will forget that the people hacking on
mesa need to use the same struct or some such, as they will think they
are giving the data to the kernel. I really get the feeling this would
work better in userspace, or at least with a format that works in the
kernel. Is the data going to be per-GPU? Per-driver? Per-what? Who is
expected to interpret it in userspace? What happens if in a few months
you realise you need 4k? (2k is pointless, since it'll eat a page
anyway.) Yes, it's meta-data to the kernel, but flink_to is a generic
userspace interface, and attaching a bunch of non-generic data to it
sounds hackish.

Dave.

More information about the dri-devel mailing list