[PATCH RFC 102/111] staging: etnaviv: separate GPU pipes from execution state

Lucas Stach l.stach at pengutronix.de
Tue Apr 7 08:09:08 PDT 2015


Am Dienstag, den 07.04.2015, 16:52 +0200 schrieb Christian Gmeiner:
> 2015-04-07 16:38 GMT+02:00 Alex Deucher <alexdeucher at gmail.com>:
> > On Tue, Apr 7, 2015 at 3:46 AM, Lucas Stach <l.stach at pengutronix.de> wrote:
> >> Am Sonntag, den 05.04.2015, 21:41 +0200 schrieb Christian Gmeiner:
> >>> 2015-04-02 18:37 GMT+02:00 Russell King - ARM Linux <linux at arm.linux.org.uk>:
> >>> > On Thu, Apr 02, 2015 at 05:30:44PM +0200, Lucas Stach wrote:
> >>> >> While this isn't the case on i.MX6, a single GPU pipe can have
> >>> >> multiple rendering backend states, which can be selected by the
> >>> >> pipe switch command, so there is no strict mapping between the
> >>> >> user "pipes" and the PIPE_2D/PIPE_3D execution states.
> >>> >
> >>> > This is good, because on Dove we have a single Vivante core which
> >>> > supports both 2D and 3D together.  It's always bugged me that
> >>> > etnadrm has not treated cores separately from their capabilities.
> >>> >
> >>>
> >>> Today I finally got the idea how this multiple pipe stuff should be
> >>> done the right way - thanks Russell.
> >>> So maybe you/we need to rework how the driver is designed regarding
> >>> cores and pipes.
> >>>
> >>> On the imx6 we should get 3 device nodes, each supporting only one pipe
> >>> type. On the dove we should get only one device node supporting 2 pipe
> >>> types. What do you think?
> >>>
> >> Sorry, but I strongly object to the idea of having multiple DRM
> >> device nodes for the different pipes.
> >>
> >> If we need the GPU2D and GPU3D to work together (and I can already see
> >> use-cases where we need to use the GPU2D in MESA to do things the GPU3D
> >> is incapable of), we would then need a lot more DMA-BUFs to get buffers
> >> across the devices. This is a waste of resources and complicates things
> >> a lot as we would then have to deal with DMA-BUF fences just to get the
> >> synchronization right, which is a no-brainer if we are on the same DRM
> >> device.
> >>
> >> Also it does not allow us to make any simplifications to the userspace
> >> API, so I can't really see any benefit.
> >>
> >> Also on Dove I think one would expect to get a single pipe capable of
> >> executing in both 2D and 3D state. If userspace takes advantage of that,
> >> one could leave the sync between both engines to the FE, which is a good
> >> thing as this allows the kernel to do less work. I don't see why we
> >> should throw this away.
> >
> > Just about all modern GPUs support varying combinations of independent
> > pipelines and we currently support this just fine via a single device
> > node in other drm drivers.  E.g., modern radeons support one or more
> > gfx, compute, dma, video decode and video encode engines.  What
> > combination is present depends on the asic.
> >
> 
> So if you have multiple GPUs (IP cores with separate IRQs, register
> addresses, ..) with combinations of independent pipelines, that would
> mean that every GPU gets its own device node and supports a combination
> of independent pipelines.
> 
Whether to merge the available GPU cores on one SoC into a single DRM
device or to construct a separate DRM device for each core is purely an
implementation decision.
So far I haven't seen any compelling argument that having separate DRM
devices would provide any benefit.
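
To make the point concrete, here is a rough sketch of how a single DRM
device could expose the core/pipe selection and the execution state
separately in the submit ioctl. The struct and macro names below are
made up for illustration and are not the actual staging uapi:

/* Illustrative only -- not the real etnaviv uapi header. */
#include <linux/types.h>

/* Backend states a single FE pipe can switch between. */
#define ETNA_EXEC_STATE_2D	0x01
#define ETNA_EXEC_STATE_3D	0x02

struct drm_etnaviv_gem_submit {
	__u32 pipe;		/* which GPU core/pipe to submit to */
	__u32 exec_state;	/* requested backend state, e.g. ETNA_EXEC_STATE_3D */
	__u32 nr_bos;		/* number of referenced buffer objects */
	__u64 bos;		/* userptr to array of submit BOs */
	__u32 nr_cmds;		/* number of command buffers */
	__u64 cmds;		/* userptr to array of command buffers */
};

With something along these lines, userspace that wants to mix 2D and 3D
work only ever talks to one device node: the same GEM handles are valid
for submits to either pipe, so no dma-buf import/export or cross-device
fencing is needed to share buffers between the engines.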

Regards,
Lucas

-- 
Pengutronix e.K.             | Lucas Stach                 |
Industrial Linux Solutions   | http://www.pengutronix.de/  |


