[PATCH 02/12] drm/etnaviv: add devicetree bindings

Daniel Vetter daniel at ffwll.ch
Sat Dec 5 02:12:08 PST 2015


On Fri, Dec 04, 2015 at 05:43:33PM -0500, Ilia Mirkin wrote:
> On Fri, Dec 4, 2015 at 5:05 PM, Russell King - ARM Linux
> <linux at arm.linux.org.uk> wrote:
> > On Fri, Dec 04, 2015 at 03:42:47PM -0500, Ilia Mirkin wrote:
> >> On Fri, Dec 4, 2015 at 3:31 PM, Russell King - ARM Linux
> >> <linux at arm.linux.org.uk> wrote:
> >> > Moreover, DRI3 is not yet available for Gallium, so if we're talking
> >> > about Xorg, then functional DRI2 is a requirement, and that _needs_
> >> > to have a single device for the rendering instances.  Xorg has no way
> >> > to pass multiple render nodes to client over DRI2.
> >>
> >> Just to correct... DRI3 has been available on gallium [at least in the
> >> context of st/mesa] basically since DRI3 was introduced. Not sure what
> >> issue you're fighting with, but it's definitely not a gallium
> >> limitation... could be something related to platform devices.
> >
> > Well, my statement is based on the fact that there's nothing in
> > src/gallium/state_trackers/dri which hints at being DRI3.  Maybe it's
> > implemented differently, I don't know.  I never wanted to hack MESA -
> > I'm supposed to be the ARM kernel maintainer - and I'm still very new
> > to MESA.
> >
> > I think it's a DRI3 limitation.  The issue with the DRI3 design is that:
> >
> > * The client has access to the GPU render nodes only, but not the
> >   corresponding KMS node.
> > * Buffers in DRI3 are allocated from the GPU render nodes.
> > * The Xorg Present protocol is then used to manage the vblank
> >   synchronisation and page flips.
> >
> > Now, the KMS scanout hardware typically does not support any kind of
> > scatter-gather: the buffers it has must be contiguous.  These can be
> > allocated from the KMS DRM device.
> >
> > However, the DRI3 client has no access to the KMS DRM device to allocate
> > linear buffers from, and GPUs typically don't have dumb buffer support.
> > Hence, the client can't pass a suitable buffer to the present layer.

Oh right, buffer allocation won't work with DRI3 as-is if you have
special constraints. For that we'd probably need something like DRI2 for
buffer allocation plus Present (as DRI3 already uses) for flipping the
buffers.

> > Hence, I can see no way for resource_create() to be able to allocate
> > any kind of scanout-capable buffer.
> >
> > That's a separate issue though: you've pointed out that you can select
> > which render node to use: what if we want to use multiple render nodes
> > simultaneously - eg, because we want to use multiple 3D GPU cores
> > together?  How does that work in this model?
> 
> This is a bit like the SLI question -- let's say you have 2 pricey
> desktop GPUs with a fast interconnect between them that lets them
> know about each other: how do you make use of that? Solution: unsolved
> :)
> 
> In an ideal world, you'd have a single driver that knows how to
> interact with multiple devices and make them do cool things. However
> this is a completely untrodden path. (Not to mention the problem of *how*
> to break up a workload across 2 GPUs.)
> 
> >
> > I think the idea that individual GPU cores should be exposed as
> > separate DRM devices is fundamentally flawed, and adds a hell of a
> > lot of complexity.
> >
> > In any case, I've spent _way_ too much time on etnaviv during November -
> > quite literally almost every day (I worked out that I was producing 4.5
> > patches a day during November for Etnaviv MESA.)  I'm afraid that it's
> > now time that I switch my attention elsewhere, so if this means that
> > Etnaviv is rejected for merging, I'm afraid it'll be a few months before
> > I can get back to it.
> >
> > It would have been nice if these issues had been brought up much earlier,
> > during the RFC posting of the patches.  These are nothing new, and I now
> > feel that this whole thing has been a total waste of time.
> 
> The whole multi-gpu thing is a bit of an open question right now. It
> works in theory, but in practice nobody's done it. Maarten tried to
> get nouveau/gk20a + tegra/drm on Jetson TK1 to play nice with, e.g., X
> 2d accel, and it was an exercise in pain. Not sure that he ever
> succeeded.
> 
> I think it's unfair to make new hardware enablement the moment to heap
> extra work onto those authors unfortunate enough to have slightly
> unorthodox hardware that maps nicely onto some
> desired-but-not-quite-there-yet usage model -- they have enough
> problems.
>
> The way everything works is one drm node can do everything. PRIME is a
> barely-functional way to offload some things onto a (single) secondary
> GPU. Everything beyond that is purely theoretical.

One thing that doesn't yet work with PRIME is synchronization. But at
least for the render->display combo Alex Goins from nvidia fixed it up,
and it's pretty much trivial to do so (2 patches on i915 since we needed
to support both atomic and legacy page_flip, would have been just 1 tiny
patch otherwise). i915->nvidia would have been more work because of some
locking fun, but the infrastructure is now all there (including patches
for Xorg).

> None of what's being done now prevents some future where these things
> are broken up. No need to force it now.

See my other mail, but Intel implemented the model of two drm drivers
exposed as one logical device. Unfortunately it didn't go upstream, but
it's definitely possible.

But the real question is whether it makes sense to split drivers, and imo
for that question the only criteria should be:
- How different are the blocks? When there's a major architectural shift
  across everything from a vendor then splitting makes sense. Looking at
  etnaviv there seems to be an at least somewhat unified frontend on top
  of these different blocks, so it makes sense to keep them together in
  one driver.

- How likely is the IP block to be reused in a totally different SoC/GPU?
  That's why we've done the split in intel for libva, since that
  additional core was a licensed IP core also used in other SoCs. Since
  the proposed etnaviv driver can already work together with 2 different
  display drivers we seem to have sufficient flexibility here. So if
  someone were to combine e.g. the etnaviv 2d core with some other 3d
  core it should pretty much just work. Well, there's maybe some fun on
  the userspace side to glue things together (see the sketch below), but
  not in the kernel.

Given that, I think the current etnaviv design is a sound architecture.
And I'm not saying that because drm requires everything to be smashed
into one driver, since that's simply not the case.

Cheers, Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch