New subsystem for acceleration devices
Jason Gunthorpe
jgg at nvidia.com
Tue Aug 9 12:18:51 UTC 2022
On Tue, Aug 09, 2022 at 10:32:27AM +0200, Arnd Bergmann wrote:
> On Tue, Aug 9, 2022 at 10:04 AM Christoph Hellwig <hch at infradead.org> wrote:
> > On Tue, Aug 09, 2022 at 08:23:27AM +0200, Greg Kroah-Hartman wrote:
> > > > This is different from the number of FDs pointing at the struct file.
> > > > Userspace can open a HW state and point a lot of FDs at it, that is
> > > > userspace's problem. From a kernel view they all share one struct file
> > > > and thus one HW state.
> > >
> > > Yes, that's fine, if that is what is happening here, I have no
> > > objection.
> >
> > It would be great if we could actually lift that into a common
> > layer (chardev or vfs) given just how common this is, and how often
> > drivers get it wrong or do it suboptimally.
>
> Totally agreed.
>
> I think for devices with hardware MMU contexts you actually want to
> bind the context to a 'mm_struct', and then ensure that the context
> is only ever used from a process that shares this mm_struct,
> regardless of who else has access to the same file or inode.
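If I'm reading that right, it would amount to something like the sketch
below - record the opener's mm and refuse work from any other mm. The
names here are invented for illustration, nothing from an actual series:

	#include <linux/fs.h>
	#include <linux/sched/mm.h>
	#include <linux/slab.h>

	struct accel_ctx {
		struct mm_struct *mm;
	};

	static int accel_open(struct inode *inode, struct file *filp)
	{
		struct accel_ctx *ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);

		if (!ctx)
			return -ENOMEM;

		/* Pin and remember the mm of whoever opened the device */
		mmgrab(current->mm);
		ctx->mm = current->mm;
		filp->private_data = ctx;
		return 0;
	}

	static long accel_ioctl(struct file *filp, unsigned int cmd,
				unsigned long arg)
	{
		struct accel_ctx *ctx = filp->private_data;

		/* Reject any caller whose mm differs from the opener's */
		if (ctx->mm != current->mm)
			return -EACCES;
		/* ... submit work ... */
		return 0;
	}
	/* (the matching mmdrop() would go in the release hook) */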
I can't think of a security justification for this.
If process A stuffs part of its address space into the device and
passes the FD to process B, which can then access that address space,
how is it any different from process A making a tmpfs, mmapping it, and
passing it to process B, which can then access that address space?
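A rough userspace sketch of that equivalence (memfd standing in for the
tmpfs file to keep it short, nothing accelerator specific):

	#define _GNU_SOURCE
	#include <string.h>
	#include <sys/mman.h>
	#include <sys/wait.h>
	#include <unistd.h>

	int main(void)
	{
		/* "Process A": create an fd-backed shared mapping */
		int fd = memfd_create("shared", 0);
		char *a_map;

		ftruncate(fd, 4096);
		a_map = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			     MAP_SHARED, fd, 0);

		if (fork() == 0) {
			/* "Process B": got the fd (via fork here, could just
			 * as well be SCM_RIGHTS) and can now access A's
			 * memory through it */
			char *b_map = mmap(NULL, 4096,
					   PROT_READ | PROT_WRITE,
					   MAP_SHARED, fd, 0);
			strcpy(b_map, "hello from B");
			_exit(0);
		}
		wait(NULL);

		/* A sees B's write through the shared file */
		return strcmp(a_map, "hello from B") ? 1 : 0;
	}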
IMHO the 'struct file' is the security domain and a process must be
careful to only allow FDs to be created that meet its security needs.
The kernel should not be involved in security here any further than
using the standard FD security mechanisms.
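It is the same FD vs struct file distinction as anywhere else in the
kernel: a dup()'d FD is just another handle on the same struct file, so
per-file state is shared no matter how many FDs exist. A trivial sketch
with a regular file (for an accelerator the shared state would be the HW
context hanging off file->private_data instead of the offset):

	#include <assert.h>
	#include <fcntl.h>
	#include <unistd.h>

	int main(void)
	{
		int fd1 = open("/tmp/demo", O_RDWR | O_CREAT | O_TRUNC, 0600);
		int fd2 = dup(fd1);	/* second FD, same struct file */

		write(fd1, "abcdef", 6);

		/* The file offset lives in the struct file, so the write
		 * through fd1 is visible in fd2's position */
		assert(lseek(fd2, 0, SEEK_CUR) == 6);
		return 0;
	}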
Who is to say there isn't a meaningful dual-process use case for the
accelerator? We see dual-process designs regularly in networking
accelerators.
Jason