[PATCHv3 06/30] drm/omap: Add support for render nodes

Laurent Pinchart laurent.pinchart at ideasonboard.com
Wed Mar 29 21:42:22 UTC 2017


Hi David,

On Wednesday 29 Mar 2017 14:51:48 David Herrmann wrote:
> On Wed, Mar 29, 2017 at 2:20 PM, Laurent Pinchart wrote:
> > On Wednesday 29 Mar 2017 11:58:23 Tomi Valkeinen wrote:
> >> On 29/03/17 11:22, Laurent Pinchart wrote:
> >>> On Tuesday 28 Mar 2017 16:07:52 Tomi Valkeinen wrote:
> >>>> From: Hemant Hariyani <hemanthariyani at ti.com>
> >>>> 
> >>>> Add support for render nodes in omap driver and allow required
> >>>> ioctls to be accessible via render nodes.
> >>> 
> >>> But the OMAP DSS doesn't perform rendering... This seems like an
> >>> abuse of render nodes; I think the API should instead be implemented
> >>> by the GPU driver.
> >> 
> >> I agree that the GPU use case described in the patch sounds a bit odd.
> >> Why not allocate from the GPU driver instead? But a particular issue
> >> here is that to get TILER buffers we need to ask omapdrm for them.
> >> Probably TILER should not be part of omapdrm, but then, where should
> >> it be...
> >> 
> >> We also have writeback in DSS, which can function as a memory-to-memory
> >> or capture device. That's not supported in mainline, but we have
> >> support in the TI kernel.
> >> 
> >> And what about a case where you have the privileged process as KMS
> >> master, and another process wants to draw to a buffer with the CPU and
> >> then give the buffer to the privileged process for display?
> >> 
> >> So, yes, DSS is not a renderer (well, WB is kind of rendering), but
> >> isn't it a valid use case to allocate a buffer from omapdrm?
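
A rough userspace sketch of that last flow, assuming libdrm and a render
node at /dev/dri/renderD128 (the driver-specific allocation ioctl is
elided, and the function name below is made up for illustration):

/*
 * Sketch only: an unprivileged client allocates a buffer through the
 * render node and exports it as a dma-buf fd; the fd can then be passed
 * (e.g. over a UNIX socket) to the KMS master, which imports it and
 * creates a framebuffer for display.
 */
#include <fcntl.h>
#include <stdint.h>
#include <xf86drm.h>

int export_render_buffer(void)
{
        /* Render nodes require neither DRM master nor authentication. */
        int fd = open("/dev/dri/renderD128", O_RDWR | O_CLOEXEC);
        uint32_t handle = 0;
        int dmabuf_fd = -1;

        if (fd < 0)
                return -1;

        /* ... allocate a GEM object with the driver-specific ioctl here,
         * obtaining 'handle' ... */

        /* Export the GEM handle as a dma-buf fd for the KMS master. */
        if (drmPrimeHandleToFD(fd, handle, DRM_CLOEXEC, &dmabuf_fd))
                return -1;

        return dmabuf_fd;
}
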
> > 
> > It could be a valid use case, but it's still an API abuse. It starts
> > sounding like a DRM core issue to me. The DRIVER_RENDER flag is not
> > documented, so its exact meaning isn't defined. I thought it was supposed
> > to flag the device as a renderer (GPU).
> > 
> > commit 1793126fcebd7c18834f95d43b55e387a8803aa8
> > Author: David Herrmann <dh.herrmann at gmail.com>
> > Date:   Sun Aug 25 18:29:00 2013 +0200
> > 
> >     drm: implement experimental render nodes
> >     
> >     Render nodes provide an API for userspace to use non-privileged GPU
> >     commands without any running DRM-Master. It is useful for offscreen
> >     rendering, GPGPU clients, and normal render clients which do not
> >     perform modesetting.
> >     
> >     [...]
> > 
> > You instead use the flag as a way to enable unprivileged clients to
> > allocate buffers. If that's what the flag should mean, I wonder if there
> > would be a use case for *not* setting it.
> 
> Correct. You can (and should) enable all your sandboxed commands on
> render nodes. That is, if a command only affects the issuing client,
> then it is safe for render-nodes. If two clients have a file-context
> opened on the render node, they should be unable to affect each other
> (minus accounting, resource allocation, etc.).
> 
> The name is historic (I did not come up with it either, but failed at
> renaming it...). The DRIVER_RENDER flag only controls whether the
> render node is actually instantiated. I will not object to doing that
> unconditionally. It will create some useless nodes for legacy drivers,
> but we should not care.
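
For reference, a minimal sketch (not the actual patch; the exact ioctl set
and handler names may differ) of what this means on the driver side:
DRIVER_RENDER makes the core instantiate the renderD* node, and only the
ioctls tagged DRM_RENDER_ALLOW become callable through it:

#include <drm/drmP.h>   /* drm_driver, drm_ioctl_desc, DRM_* flags */
/* plus the omapdrm UAPI header for the OMAP_GEM_* ioctl numbers */

static const struct drm_ioctl_desc omap_ioctls[] = {
        /* Buffer allocation only affects the issuing client: render-safe. */
        DRM_IOCTL_DEF_DRV(OMAP_GEM_NEW, ioctl_gem_new,
                          DRM_AUTH | DRM_RENDER_ALLOW),
        /* CPU access prep on a client-owned buffer: also render-safe. */
        DRM_IOCTL_DEF_DRV(OMAP_GEM_CPU_PREP, ioctl_gem_cpu_prep,
                          DRM_AUTH | DRM_RENDER_ALLOW),
        /* Anything touching global display state keeps DRM_AUTH only and
         * thus stays restricted to the primary node. */
};

static struct drm_driver omap_drm_driver = {
        /* DRIVER_RENDER makes the core create /dev/dri/renderD<N>. */
        .driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_RENDER,
        .ioctls          = omap_ioctls,
        .num_ioctls      = ARRAY_SIZE(omap_ioctls),
        /* ... */
};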

Couldn't we achieve the same effect without render nodes, by allowing
unauthenticated clients to allocate GEM objects on the main DRM node?
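
A rough sketch of that alternative, purely hypothetical and for comparison:
the allocation ioctl would simply stop requiring authentication, so any
client that can open the primary (card) node could allocate:

        /* Hypothetical: no render node; drop DRM_AUTH from the allocation
         * ioctl so unauthenticated clients may allocate GEM objects on the
         * primary node. */
        DRM_IOCTL_DEF_DRV(OMAP_GEM_NEW, ioctl_gem_new, 0 /* no DRM_AUTH */),

One practical difference is that this changes the primary node's semantics
for every client, whereas DRM_RENDER_ALLOW leaves the primary node untouched
and confines the relaxed access to the render node.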

-- 
Regards,

Laurent Pinchart


