DRM_UDL and GPU under Xserver

Alexey Brodkin Alexey.Brodkin at synopsys.com
Mon Apr 9 08:55:36 UTC 2018


Hi Daniel,

On Mon, 2018-04-09 at 10:31 +0200, Daniel Vetter wrote:
> On Thu, Apr 05, 2018 at 06:39:41PM +0000, Alexey Brodkin wrote:
> > Hi Daniel, all,

[snip]

> > Ok it was quite some time ago so I forgot about that completely.
> > I really made one trivial change in xf86-video-armada:
> > ------------------------>8--------------------------
> > --- a/src/armada_module.c
> > +++ b/src/armada_module.c
> > @@ -26,7 +26,7 @@
> >  #define ARMADA_NAME            "armada"
> >  #define ARMADA_DRIVER_NAME     "armada"
> >  
> > -#define DRM_MODULE_NAMES       "armada-drm", "imx-drm"
> > +#define DRM_MODULE_NAMES       "armada-drm", "imx-drm", "udl"
> >  #define DRM_DEFAULT_BUS_ID     NULL
> > ------------------------>8--------------------------
> > 
> > Otherwise Xserver fails on start, which is expected given "imx-drm" is intentionally removed.

Here I meant that I explicitly disabled DRM_IMX in the kernel configuration
so that it is not used at runtime.
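
To be precise, the relevant bits of my kernel config looked like this
(option names as in the mainline Kconfig; just a sketch of the setup,
the rest of the config is omitted):

------------------------>8--------------------------
# CONFIG_DRM_IMX is not set
CONFIG_DRM_UDL=y
------------------------>8--------------------------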

> You need to keep imx-drm around. And then light up the udl display using
> prime. Afaiui it should all just work (but with maybe a few disconnected
> outputs from imx-drm around that you don't need, but that's not a
> problem).

And given my comment above I don't really see any difference between
DRM_IMX and DRM_UDL (except their HW implementation, which I guess should
not bother upper layers), so why do we need to treat them differently?

Most probably I'm missing something, but my thought was that if we have
two equally well supported KMS devices we may easily swap them and still
have the resulting setup functional.

-Alexey
