[PATCH 0/3] [RFC] DRM Render Nodes

Daniel Vetter daniel.vetter at ffwll.ch
Fri Sep 28 11:52:49 PDT 2012


On Fri, Sep 28, 2012 at 8:42 PM, Ilija Hadzic
<ihadzic at research.bell-labs.com> wrote:
>
>
> On Fri, 28 Sep 2012, Daniel Vetter wrote:
>
>> On a quick look the render nodes Kristian proposes and your work seem to
>> attack slightly different issues. Your/Dave's patch series seems to
>> put a lot of effort into (dynamically) splitting up the resources of a
>> drm device, including the modeset stuff.
>
>
> Correct, the goal is to be able to run multiseat while sharing a GPU.
> Actually, with my variant of render nodes, I even got multiple desktops
> residing in different LXC containers to share the GPU, which is kind of
> cool.
>
>
>> Kristian's proposal here is
>> much more modest, just enabling a way to do the same for render
>> clients. All the modeset (and flink open) stuff would still be
>> done only through the legacy drm node.
>>
>
> OK, I see. From what I can tell from the second patch, drm_get_pci_dev will
> create one (and I guess only one, right?) render node if the underlying
> driver has the DRIVER_RENDER feature. The third patch (among other things)
> adds that feature to the Intel driver.
>
> So if I boot up a system with these patches and with an Intel GPU, I will
> automagically get one more /dev/dri/renderD128 node, right? The intent is
> that the render client opens and uses that render node. The
> /dev/dri/controlDNN node still remains an unused "orphan", right?

Yeah, the plan is to have just one single render node and to ensure
isolation by not allowing any open file of that render node to access
any buffer not associated with its file_priv. Like I've said, the
current patches have a little hole wrt mmap handling there ;-)
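
For reference, opting in is just a feature bit on the driver; a minimal
sketch (DRIVER_RENDER is the bit from patch 2, the driver name and the
other flags are only illustrative):

    /* Sketch: a driver advertising render node support via the
     * DRIVER_RENDER feature bit proposed in these patches. */
    static struct drm_driver foo_driver = {
            .driver_features = DRIVER_GEM | DRIVER_MODESET | DRIVER_RENDER,
            /* ... fops, ioctls, etc. ... */
    };

With that set, drm_get_pci_dev creates the renderD128 node next to the
existing cardN/controlDNN nodes.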

The only way to share buffers is via dma_buf (which is fd based, so we
could attach full selinux contexts if required) or flink (but not
opening a flink name, since that requires master rights on the legacy
node).
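
In userspace that sharing looks roughly like this (just a sketch using
the libdrm PRIME helpers; error handling is omitted, and the handle is
assumed to be an existing GEM handle):

    #include <stdint.h>
    #include <xf86drm.h>

    /* Exporter: turn a GEM handle into a dma-buf fd that can be
     * passed to another process, e.g. over a unix socket. */
    int export_buffer(int drm_fd, uint32_t handle)
    {
            int prime_fd;

            if (drmPrimeHandleToFD(drm_fd, handle, DRM_CLOEXEC, &prime_fd))
                    return -1;
            return prime_fd;
    }

    /* Importer: turn the received dma-buf fd back into a local GEM
     * handle on its own open drm file. */
    int import_buffer(int drm_fd, int prime_fd, uint32_t *handle)
    {
            return drmPrimeFDToHandle(drm_fd, prime_fd, handle);
    }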

> So would you entertain the possibility that the render node is created from
> user space on demand using an ioctl into the control node? If that's a
> possibility for you, then my set of patches is a superset of what Kristian
> needs. If you just need a render client, you can create a node with no
> display resources and you would get something quite close to what these 3
> patches try to do ... unless I am missing something.

Well, dynamically creating render nodes is not required just for
isolating different render clients. The goal is very much to allow
background/headless usage of the gpu, e.g. for opencl and video
encode/decode. So having to first ask a central daemon to spawn another
render node just to transcode another video isn't that great. Obviously
the security separation only works if the gpu actually supports
different vm address spaces for each node ...
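
The nice thing is that a headless client then just opens the render
node itself, with no handshake with any daemon or display server;
roughly (a sketch, a real client would enumerate /dev/dri/renderD*
rather than hardcode the minor):

    #include <fcntl.h>

    /* Sketch: e.g. a video transcoder grabbing the gpu directly. */
    int open_render_node(void)
    {
            return open("/dev/dri/renderD128", O_RDWR | O_CLOEXEC);
    }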

The multi-seat issue is imo orthogonal to that, and I don't think we
should mangle the two together (like you've noticed, people seem to get
scared about it and so those patches don't get pushed very hard). And
with new stuff like atomic modeset and gpus having a lot of shared
resources in the display hw (plls, memory bw, shared links between
pipes, ...) I don't think we could even statically split up the modeset
resources like your patch would allow. Imho a better solution for the
multiseat use-case would be to have a (privileged) system compositor
that handles the resource sharing between the different seats. Display
servers would then simply be yet another render node client (and would
do modeset changes through a protocol to the system compositor). The
system compositor could very well be something that resembles Wayland
awfully closely ;-)


Cheers, Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

