Overlay support in the i.MX7 display

Daniel Vetter daniel at ffwll.ch
Mon Nov 4 18:24:51 UTC 2019


On Mon, Nov 04, 2019 at 02:58:29PM +0200, Laurent Pinchart wrote:
> Hello,
> 
> On Mon, Nov 04, 2019 at 10:09:47AM +0200, Pekka Paalanen wrote:
> > On Sun, 03 Nov 2019 19:15:49 +0100 Stefan Agner wrote:
> > > On 2019-11-01 09:43, Laurent Pinchart wrote:
> > > > Hello,
> > > > 
> > > > I'm looking at the available options to support overlays in the display
> > > > pipeline of the i.MX7. The LCDIF itself unfortunately doesn't support
> > > > overlays, the feature being implemented in the PXP. A driver for the PXP
> > > > is available but only supports older SoCs whose PXP doesn't support
> > > > overlays. This driver is implemented as a V4L2 mem2mem driver, which
> > > > makes support of additional input channels impossible.  
> > > 
> > > Thanks for bringing this up, it is a topic I have wondered about too:
> > > the interaction between the PXP and mxsfb.
> > > 
> > > I am not very familiar with the V4L2 subsystem so take my opinions with
> > > a grain of salt.
> > > 
> > > > Here are the options I can envision:
> > > > 
> > > > - Extend the existing PXP driver to support multiple channels. This is
> > > >   technically feasible, but will require moving away from the V4L2
> > > >   mem2mem framework, which would break userspace. I don't think this
> > > >   path could lead anywhere.
> > > > 
> > > > - Write a new PXP driver for the i.MX7, still using V4L2, but with
> > > >   multiple video nodes. This would allow blending multiple layers, but
> > > >   would require writing the output to memory, while the PXP has support
> > > >   for direct connections to the LCDIF (through small SRAM buffers).
> > > >   Performance would thus be suboptimal. The API would also be awkward,
> > > >   as using the PXP for display would require applications to use V4L2.
> > > 
> > > So the video nodes would be sinks? I would expect overlays to be usable
> > > through KMS; I guess that would then not work, correct?
> 
> There would be sink video nodes for the PXP inputs, and one source video
> node for the PXP output. The PXP can be used stand-alone, in
> memory-to-memory mode, and V4L2 is a good fit for that.
> 
> > > > 
> > > > - Extend the mxsfb driver with PXP support, and expose the PXP inputs as
> > > >   KMS planes. The PXP would only be used when available, and would be
> > > >   transparent to applications. This would however prevent using it
> > > >   separately from the display (to perform multi-pass alpha blending for
> > > >   instance).  
> > > 
> > > KMS planes are well defined and well integrated with the KMS API, so I
> > > prefer this option. But is this compatible with the currently supported
> > > video use case? E.g. could we make the PXP available through both V4L2
> > > and DRM/mxsfb?
> 
> That's the issue: it's not easily doable. I think we could do so, but
> how to ensure mutual exclusion between the two APIs needs to be
> researched. I fear it will result in an awkward solution with fuzzy
> semantics. A module parameter could be an option, but wouldn't be very
> flexible.
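
For illustration, wiring a PXP input up as an extra KMS plane in mxsfb
would boil down to something along these lines (a sketch only; the plane
funcs and the format list are placeholders, not existing mxsfb or PXP
code):

#include <linux/kernel.h>
#include <drm/drm_atomic_helper.h>
#include <drm/drm_fourcc.h>
#include <drm/drm_plane.h>

/* Placeholder plane funcs; a real driver would do the PXP programming in
 * the plane's atomic hooks. */
static const struct drm_plane_funcs mxsfb_pxp_plane_funcs = {
	.update_plane = drm_atomic_helper_update_plane,
	.disable_plane = drm_atomic_helper_disable_plane,
	.destroy = drm_plane_cleanup,
	.reset = drm_atomic_helper_plane_reset,
	.atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state,
	.atomic_destroy_state = drm_atomic_helper_plane_destroy_state,
};

/* Formats picked for illustration only. */
static const u32 mxsfb_pxp_plane_formats[] = {
	DRM_FORMAT_XRGB8888,
	DRM_FORMAT_YUYV,
};

static int mxsfb_add_pxp_plane(struct drm_device *drm,
			       struct drm_plane *plane)
{
	/* Bit 0: the single LCDIF CRTC. */
	return drm_universal_plane_init(drm, plane, 1 << 0,
					&mxsfb_pxp_plane_funcs,
					mxsfb_pxp_plane_formats,
					ARRAY_SIZE(mxsfb_pxp_plane_formats),
					NULL, DRM_PLANE_TYPE_OVERLAY, NULL);
}

The interesting part is of course the atomic_check/atomic_update hooks
that would actually drive the PXP, and how the plane behaves when the PXP
is claimed through V4L2, which is exactly the mutual exclusion question
above.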
> 
> > > Not sure what your use case is exactly, but when playing a video I
> > > wonder where the higher value of using the PXP lies: color conversion
> > > and scaling, or compositing? I would expect higher value in the former
> > > use case.
> 
> I think it's highly use-case-dependent.
> 
> > Mind, with the Wayland architecture, color conversion and scaling could be
> > at the same level/step as compositing, in the display server instead of
> > an application. Hence if the PXP capabilities were advertised as KMS
> > planes, there should be nothing to patch in Wayland-designed
> > applications to make use of them, assuming the applications did not
> > already rely on V4L2 M2M devices.
> > 
> > Would it not be possible to expose PXP through both uAPI interfaces? At
> > least KMS atomic's TEST_ONLY feature would make it easy to say "no" to
> > userspace if another bit of userspace already reserved the device via
> > e.g. V4L2.
> 
> We would also need to figure out how to do it the other way around,
> reporting properly through V4L2 that the device is busy. I think it's
> feasible, but I doubt it would result in anything usable for userspace.
> If the KMS device exposes multiple planes unconditionally and fails the
> atomic commit if the PXP is used through V4L2, I think it would be hard
> for Wayland to use this consistently. Given that I expect the PXP to be
> mostly used for display purposes, I'm tempted to allocate it for display
> unconditionally, or, possibly, decide how to expose it through a module
> parameter.
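
If it does end up as a module parameter, the knob itself would be trivial,
something like the sketch below (the parameter name is made up; nothing of
the sort exists today):

#include <linux/module.h>

/* Hypothetical switch: hand the PXP to the display driver (exposed as
 * KMS overlay planes) or leave it to a V4L2 mem2mem driver. */
static bool pxp_for_display = true;
module_param(pxp_for_display, bool, 0444);
MODULE_PARM_DESC(pxp_for_display,
		 "Expose the PXP as KMS planes (default) instead of a V4L2 mem2mem device");

The hard part is everything behind the knob, i.e. only registering one of
the two interfaces at probe time.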

KMS should be fine if planes are missing; userspace is supposed to be able
to cope with that. Not all userspace does, but welp.
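
To make that concrete: the expected pattern is that a compositor probes
with an atomic TEST_ONLY commit before relying on an overlay, and falls
back to GPU composition when the test fails. A rough libdrm sketch
(property ID lookup, the remaining SRC_*/CRTC_* plane properties and error
handling are elided):

#include <errno.h>
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Sketch: ask the driver whether this plane/fb combination would work,
 * without actually committing it.  plane_id, crtc_id, fb_id and the
 * property IDs are assumed to have been looked up beforehand. */
static int probe_overlay(int fd, uint32_t plane_id, uint32_t crtc_id,
			 uint32_t fb_id, uint32_t prop_fb_id,
			 uint32_t prop_crtc_id)
{
	drmModeAtomicReq *req = drmModeAtomicAlloc();
	int ret;

	if (!req)
		return -ENOMEM;

	drmModeAtomicAddProperty(req, plane_id, prop_fb_id, fb_id);
	drmModeAtomicAddProperty(req, plane_id, prop_crtc_id, crtc_id);
	/* ... plus SRC_X/Y/W/H and CRTC_X/Y/W/H in a real compositor ... */

	ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_TEST_ONLY, NULL);
	drmModeAtomicFree(req);

	/* Non-zero: compose this surface on the GPU instead. */
	return ret;
}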
 
I figured the bigger issue will be on the v4l side, since "device
temporarily gone" is not something v4l understands as a concept?

But even then, having one device for userspace would be best I think; it's
just a lot more reasonable (insert wish for a unified video/display subsystem
here).

> We have a similar situation on Renesas R-Car Gen3 platforms, with a
> memory-to-memory compositor called VSP. Some VSP instances are connected
> to the display controller, and we allocate them for display
> unconditionally. Other VSP instances are exposed as V4L2 devices. We
> haven't heard of anyone who wanted to use the display VSP instances for
> unrelated purposes. If such a use case arose, exposing those instances
> through V4L2 would just be a matter of flipping one bit in the driver
> (all the infrastructure is in place), which we would likely expose as a
> module parameter.

Hm yeah, I guess we could just assign them, if the use cases are clear-cut
enough. Are you thinking of doing these links with DT (so at least it
would be patchable)?
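
(If it were done through an OF graph link, the lookup on the mxsfb side
could be as small as the sketch below; port 1 / endpoint 0 are assumptions,
there is no such binding today.)

#include <linux/of_graph.h>

/* Sketch: resolve a DT graph link from the LCDIF node to the PXP, so the
 * assignment stays describable (and patchable) in the device tree. */
static struct device_node *mxsfb_get_pxp_node(struct device_node *lcdif)
{
	return of_graph_get_remote_node(lcdif, 1, 0);
}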
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

