[PATCH] drm/vkms: Add a DRM render node to vkms

Simon Ser contact at emersion.fr
Sat Jan 7 10:45:03 UTC 2023


On Friday, January 6th, 2023 at 23:28, Tao Wu <lepton at google.com> wrote:

> On Fri, Jan 6, 2023 at 1:54 AM Daniel Vetter <daniel at ffwll.ch> wrote:
> 
> > On Thu, Jan 05, 2023 at 01:40:28PM -0800, Tao Wu(吴涛@Eng) wrote:
> > 
> > > Hi Daniel,
> > > 
> > > May I know what the requirement is for adding render node support to a
> > > "gpu"? Why don't we just export a render node for every DRM device?
> > > I read the documentation here:
> > > https://www.kernel.org/doc/html/v4.8/gpu/drm-uapi.html#render-nodes
> > 
> > Thus far we've only done it when there's actual rendering capability,
> > which generally means at least some private ioctls.
> 
> Hi Daniel, it looks like vgem exports a render node by default.
> As I understand it, vgem provides some DRM APIs so users can play
> with graphics buffers. It feels natural to have a v*** device that
> provides the superset of what vgem and vkms provide, so it seems
> natural to add a render node to vkms, or to do the opposite and add
> KMS-related bits to vgem. I still don't get the point: what kind of
> issue could it cause if we just add a render node to vkms? If your
> point is that we don't do that for other KMS-only devices, then my
> question is: how about we just enable a render node for every DRM
> driver? What could go wrong with that approach?

This is wrong for at least two reasons:

- A render node has a semantic value: it indicates whether a device has
  rendering capabilities. If we attach a render node to vkms, we lie
  because vkms has no such capability.
- This would regress user-space. wlroots would refuse to start with its
  Pixman renderer on vkms, because it would detect a render node on the
  device (see the sketch below).
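
To make the second point concrete, here is a minimal sketch of how
user-space can tell whether a DRM device exposes a render node via
libdrm. This is not the actual wlroots code; the has_render_node()
helper is made up for illustration:

    #include <xf86drm.h>

    /* Sketch: check whether the DRM device backing `fd` exposes a
     * render node. A compositor doing a check like this would pick
     * its GPU path instead of a software renderer such as Pixman. */
    static int has_render_node(int fd)
    {
            drmDevicePtr dev = NULL;

            if (drmGetDevice2(fd, 0, &dev) != 0)
                    return 0;

            int ret = dev->available_nodes & (1 << DRM_NODE_RENDER);
            drmFreeDevice(&dev);
            return ret;
    }

If vkms advertised a render node, a check like this would succeed even
though nothing behind that node can actually render, which is exactly
how the regression would happen.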

I'd advise moving away from abusing DRM dumb buffers in Mesa.
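
For context on the kernel side: whether a device gets a
/dev/dri/renderD* node at drm_dev_register() time comes down to the
DRIVER_RENDER flag in the driver's struct drm_driver; vgem sets it,
vkms does not. A simplified sketch (example_driver is hypothetical,
not the real vgem or vkms declaration):

    #include <drm/drm_drv.h>

    /* Simplified sketch of a drm_driver declaration. The DRM core
     * creates a render node only when DRIVER_RENDER is set, so
     * "adding a render node to vkms" boils down to adding this flag --
     * which is exactly what the reasons above argue against. */
    static const struct drm_driver example_driver = {
            .driver_features = DRIVER_GEM | DRIVER_RENDER,
            .name            = "example",
            .desc            = "example virtual driver",
            .date            = "20230107",
            .major           = 1,
            .minor           = 0,
    };

So mechanically the requested change is a one-line flag; the objection
above is about semantics and user-space expectations, not
implementation effort.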

