[Mesa-dev] [PATCH 1/2] pl111: Rename the pl111 driver to "kmsro".

Rob Herring robh at kernel.org
Thu Jan 24 16:22:30 UTC 2019


On Thu, Jan 24, 2019 at 9:14 AM Emil Velikov <emil.l.velikov at gmail.com> wrote:
>
> Hi all,
>
> FWIW I'm OK with the idea; as pointed out in 2/2, as-is this is a
> partial solution.
> Nevertheless, it is some solution to the problem we have.
>
> With that said the series is:
> Acked-by: Emil Velikov <emil.velikov at collabora.com>
>
> On Wed, 23 Jan 2019 at 23:42, Alyssa Rosenzweig <alyssa at rosenzweig.io> wrote:
> >
> > > I've started looking at the lima and panfrost drivers. The many
> > > combinations of Mali GPUs and DCs aren't going to scale. The lima and
> > > panfrost trees can't even co-exist as both define a rockchip winsys
> > > which load different GPU drivers. The same will be true for meson,
> > > hisilicon, allwinner, etc. i.MX is about to be in the same boat
> > > needing to support both etnaviv and freedreno.
> >
> > As Rob stated, Mali being used by basically everyone at one point or
> > another has led to a nightmare in the winsys. I agree that dealing with
> > the loader can happen later, but honestly, just having the centralised
> > kmsro winsys (that all of pl111/rockchip/meson/sunxi/etc point to) that
> > tries all of vc4/v3d/panfrost/lima/etc would be a marked improvement on
> > the present situation.
> >
> > There are a lot of DRM drivers out there, sure, and it _is_ better to
> > handle something generically in the loader. But for the much more
> > immediate goal of letting both Lima and Panfrost coexist on
> > Rockchip/Meson, this is a good start.
>
> AFAICT, for a comprehensive solution that handles the above use cases,
> we would need:
>
>  - a form of DRM driver name to kms_ro mapping
> Personally I'm leaning towards a drirc-style file, so no patching or
> rebuilding of Mesa is needed and no more symlinks.

That would be nice. Based on the discussion on patch 2, I'm not really
clear on where all the support for this needs to go; that's just my
lack of familiarity with the X11 details.
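
Whatever mechanism it ends up being, the mapping itself is small. As a
rough C sketch of the data a drirc-style file would need to encode
(purely illustrative, not existing Mesa code, and not a complete list
of display drivers):

#include <stdbool.h>
#include <string.h>

/* Hypothetical: DRM display drivers that should be routed through kmsro. */
static const char *const kmsro_display_drivers[] = {
   "pl111", "rockchip", "meson", "sun4i-drm",
};

static bool
driver_uses_kmsro(const char *name)
{
   for (unsigned i = 0;
        i < sizeof(kmsro_display_drivers) / sizeof(kmsro_display_drivers[0]);
        i++)
      if (strcmp(name, kmsro_display_drivers[i]) == 0)
         return true;
   return false;
}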

>  - a form of KMSRO to GPU device mapping
> Thus we can use that instead of the hardcoded vc4 in the proposed KMSRO.
> Ideally they would live alongside the previous mappings, to avoid
> patching/rebuilding.

The one other thing, besides which GPU we have, is whether we allocate
scanout buffers in the GPU or the DC, but that could be a flag.
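
A rough sketch of what that flag could select between, using the two
renderonly helpers that already exist (the flag itself and the wrapper
are hypothetical):

#include <stdbool.h>
#include "renderonly/renderonly.h"

/* Hypothetical helper: pick the scanout allocation path with a flag.
 * The two renderonly_create_*_for_resource() functions are the existing
 * helpers; dumb buffers are allocated on the display device, while the
 * import path allocates on the GPU and imports into KMS. */
static void
kmsro_init_renderonly(struct renderonly *ro, bool scanout_on_gpu)
{
   ro->create_for_resource = scanout_on_gpu ?
      renderonly_create_gpu_import_for_resource :
      renderonly_create_kms_dumb_buffer_for_resource;
}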

For now, I was working on a patch to just try each GPU with a series
of drmOpenWithType() calls like this:

#if defined(GALLIUM_ETNAVIV)
   ro.gpu_fd = drmOpenWithType("etnaviv", NULL, DRM_NODE_RENDER);
   if (ro.gpu_fd >= 0) {
      /* Allocate scanout buffers as KMS dumb buffers on the display device. */
      ro.create_for_resource = renderonly_create_kms_dumb_buffer_for_resource;
      screen = etna_drm_screen_create_renderonly(&ro);
      if (!screen)
         close(ro.gpu_fd);

      return screen;
   }
#endif
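
The series would then just repeat the same pattern for each other GPU
driver; e.g. for vc4 it might look roughly like this (assuming vc4
exposes an analogous *_create_renderonly() entry point):

#if defined(GALLIUM_VC4)
   ro.gpu_fd = drmOpenWithType("vc4", NULL, DRM_NODE_RENDER);
   if (ro.gpu_fd >= 0) {
      /* Scanout via KMS dumb buffers, as in the etnaviv case above. */
      ro.create_for_resource = renderonly_create_kms_dumb_buffer_for_resource;
      screen = vc4_drm_screen_create_renderonly(&ro);
      if (!screen)
         close(ro.gpu_fd);

      return screen;
   }
#endif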

I don't think we have any cases of two different embedded GPUs in one
system (though SoC vendors have done crazier things), and the set of
GPU drivers to try is not huge.

Also, if we require some config file to tell us which GPU to use, then
we still have to update that config file for each and every new system.
I'd rather see things work by default, with a config file needed only
for the special cases.

Rob

