[PATCH 02/12] drm/etnaviv: add devicetree bindings

Lucas Stach l.stach at pengutronix.de
Sat Dec 5 03:26:19 PST 2015


Am Freitag, den 04.12.2015, 14:19 -0600 schrieb Rob Herring:
> On Fri, Dec 4, 2015 at 11:56 AM, Lucas Stach <l.stach at pengutronix.de>
> wrote:
> > Am Freitag, den 04.12.2015, 11:33 -0600 schrieb Rob Herring:
> > > On Fri, Dec 4, 2015 at 10:41 AM, Lucas Stach <l.stach at pengutronix.de> wrote:
> > > > Am Freitag, den 04.12.2015, 10:29 -0600 schrieb Rob Herring:
> > > > > On Fri, Dec 04, 2015 at 02:59:54PM +0100, Lucas Stach wrote:
> > > > > > Etnaviv follows the same principle as imx-drm to have a
> > > > > > virtual
> > > > > > master device node to bind all the individual GPU cores
> > > > > > together
> > > > > > into one DRM device.
> > > > > > 
> > > > > > Signed-off-by: Lucas Stach <l.stach at pengutronix.de>
> > > > > > ---
> > > > > >  .../bindings/display/etnaviv/etnaviv-drm.txt       | 46 ++++++++++++++++++++++
> > > > > >  1 file changed, 46 insertions(+)
> > > > > >  create mode 100644 Documentation/devicetree/bindings/display/etnaviv/etnaviv-drm.txt
> > > > > > 
> > > > > > diff --git a/Documentation/devicetree/bindings/display/etnaviv/etnaviv-drm.txt b/Documentation/devicetree/bindings/display/etnaviv/etnaviv-drm.txt
> > > > > > new file mode 100644
> > > > > > index 000000000000..19fde29dc1d7
> > > > > > --- /dev/null
> > > > > > +++ b/Documentation/devicetree/bindings/display/etnaviv/etnaviv-drm.txt
> > > > > > @@ -0,0 +1,46 @@
> > > > > > +Etnaviv DRM master device
> > > > > > +================================
> > > > > > +
> > > > > > +The Etnaviv DRM master device is a virtual device needed to list all
> > > > > > +Vivante GPU cores that comprise the GPU subsystem.
> > > > > > +
> > > > > > +Required properties:
> > > > > > +- compatible: Should be one of
> > > > > > +    "fsl,imx-gpu-subsystem"
> > > > > > +    "marvell,dove-gpu-subsystem"
> > > > > > +- cores: Should contain a list of phandles pointing to Vivante GPU devices
> > > > > > +
> > > > > > +example:
> > > > > > +
> > > > > > +gpu-subsystem {
> > > > > > +   compatible = "fsl,imx-gpu-subsystem";
> > > > > > +   cores = <&gpu_2d>, <&gpu_3d>;
> > > > > > +};
> > > > > 
> > > > > Yeah, I'm not really a fan of doing this simply because DRM
> > > > > wants 1 driver.
> > > > > 
> > > > I'm aware of that, but I don't see much value in kicking this
> > > > discussion around for every DRM driver submission. This is the
> > > > binding that has emerged from a lengthy discussion at KS 2013 in
> > > > Edinburgh and at least allows us to standardize on _something_.
> > > > Also ALSA does a similar thing to bind codecs and CPU interfaces
> > > > together.
> > > 
> > > This case is quite different though I think. The ALSA case and
> > > other DRM cases are ones that have inter-dependencies between the
> > > blocks (e.g. some sort of h/w connection). What is the
> > > inter-dependency here?
> > > 
> > > Doing this way has also been found to be completely unnecessary
> > > and removed in recent DRM driver reviews. Admittedly, those are
> > > cases where one device can be the master of the others. For 2
> > > parallel devices, I don't have an alternative other than question
> > > why they need to be a single driver.
> > > 
> > If you insist on doing things differently for this driver, we could
> > add a pass at driver registration that scans through the DT, looking
> > for nodes matching the GPU core compatible.
> 
> I've not insisted on anything. I'm only asking a question which didn't
> get answered. I'll ask another way. Why can't you have 2 instances of
> the same driver given they are only rendering nodes?
> 
> > I'm not sure if that makes things cleaner though and it might bite
> > us later on. Also I'm not sure if moving away from the binding
> > scheme already established for other DRM drivers makes things better
> > from a DT perspective. Personally I would prefer DT binding
> > consistency over perfection for single drivers that segments the DT
> > binding space.
> 
> This is the least of our issues in terms of consistency among drivers,
> but that is in fact what I'm pushing for. This is probably the first
> case of a render only driver (at least for DT). So this isn't a case
> of just following what others are doing.
> 
> The h/w in this area can be quite different, so the DT bindings are
> going to reflect that to some extent. A virtual node makes sense in
> some cases, but for others it may not.
> 
I see where you are going here and I appreciate that this discussion
isn't an exercise in bikeshedding, but based on technical facts.

So let me try to explain things from the other way around:
We made the decision to have a single DRM device for all the Vivante
GPU nodes in a system based on technical merits, not because DRM wants
us to do this, but because it has practical upsides for the
implementation of the driver.

1. It makes buffer management and information sharing between the
cores that are likely to be used together vastly easier for the use-
cases seen today. Having one DRM device per core would be possible, but
would make things a lot harder implementation-wise.

2. It will allow us to share resources such as the GPU page tables,
once we move to per-client address spaces, reducing the footprint of
memory that we need to allocate out of CMA.

3. It makes submit fencing look the same regardless of the core
configuration. There are configurations where a 2D and a 3D core are
sitting behind a shared frontend (Dove) and some where each engine has
its own frontend (i.MX6). Having a single DRM driver allows us to make
both configurations look the same to userspace from a fencing
perspective.

There are probably some more arguments that have escaped the top of my
head right now. Regardless of how the DT bindings end up, we won't move
away from the single DRM device design.
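
For concreteness, with the virtual node the master driver's probe would
collect the cores from the "cores" phandle list, roughly like this (a
non-buildable sketch assuming the kernel component framework;
compare_of and etnaviv_master_ops are illustrative names, not code from
this series):

	#include <linux/component.h>
	#include <linux/of.h>
	#include <linux/platform_device.h>

	static int compare_of(struct device *dev, void *data)
	{
		return dev->of_node == data;
	}

	static int etnaviv_pdev_probe(struct platform_device *pdev)
	{
		struct component_match *match = NULL;
		struct device_node *core;
		int i = 0;

		/* Walk the "cores" phandle list of the gpu-subsystem node. */
		while ((core = of_parse_phandle(pdev->dev.of_node, "cores", i++)))
			component_match_add(&pdev->dev, &match, compare_of, core);

		if (!match)
			return -ENODEV;

		return component_master_add_with_match(&pdev->dev,
						       &etnaviv_master_ops, match);
	}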

So the question is: given the above, are you opposed to having a
virtual node in DT to describe this master device?
I already sketched out the alternative of having the master driver scan
the DT for matching GPU nodes at probe time and binding them together
into a single device. But given that we end up with one master device
anyway, do you really prefer this over the virtual node, which is a
working and proven solution to this exact problem?
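
The scan-at-probe alternative would look something like this instead
(again only a sketch, assuming the component framework; "vivante,gc"
stands in for whatever compatible the GPU core nodes end up using):

	#include <linux/component.h>
	#include <linux/of.h>
	#include <linux/platform_device.h>

	static int compare_of(struct device *dev, void *data)
	{
		return dev->of_node == data;
	}

	static int etnaviv_pdev_probe(struct platform_device *pdev)
	{
		struct component_match *match = NULL;
		struct device_node *np;

		/* No virtual node: find every GPU core in the tree by compatible. */
		for_each_compatible_node(np, NULL, "vivante,gc")
			component_match_add(&pdev->dev, &match, compare_of, np);

		if (!match)
			return -ENODEV;

		return component_master_add_with_match(&pdev->dev,
						       &etnaviv_master_ops, match);
	}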

Regards,
Lucas

