[PATCH v2 12/26] drm/exynos: Split manager/display/subdrv

Thierry Reding thierry.reding at gmail.com
Mon Nov 4 02:36:03 PST 2013


On Wed, Oct 30, 2013 at 11:32:24AM -0400, Sean Paul wrote:
> On Tue, Oct 29, 2013 at 4:50 PM, Tomasz Figa <tomasz.figa at gmail.com> wrote:
> > Hi Sean,
> >
> > On Tuesday 29 of October 2013 16:36:47 Sean Paul wrote:
> >> >> On Mon, Oct 28, 2013 at 7:13 PM, Tomasz Figa <tomasz.figa at gmail.com> wrote:
> >> > Hi,
> >> >
> >> > On Wednesday 23 of October 2013 12:09:06 Sean Paul wrote:
> >> >> >> On Wed, Oct 23, 2013 at 11:53 AM, Dave Airlie <airlied at gmail.com> wrote:
> >> >> >>>>> I think we need to start considering a framework where
> >> >> >>>>> subdrivers just add drm objects themselves, then the toplevel
> >> >> >>>>> node is responsible for knowing that everything for the current
> >> >> >>>>> configuration is loaded.
> >> >> >>>>
> >> >> >>>> It would be nice to specify the various pieces in dt, then have
> >> >> >>>> some type of drm notifier to the toplevel node when everything
> >> >> >>>> has been probed. Doing it in the dt would allow standalone
> >> >> >>>> drm_bridge/drm_panel drivers to be transparent as far as the
> >> >> >>>> device's drm driver is concerned.
> >> >> >>>>
> >> >> >>>> Sean
> >> >> >>>>
> >> >> >>>>> I realise we may need to make changes to the core drm to allow
> >> >> >>>>> this but we should probably start to create a strategy for
> >> >> >>>>> fixing the API issues that this throws up.
> >> >> >>>>>
> >> >> >>>>> Note I'm not yet advocating for dynamic addition of nodes once
> >> >> >>>>> the device is in use, or removing them.
> >> >> >>>
> >> >> >>> I do wonder: if we had some sort of tag in the device tree for
> >> >> >>> any nodes involved in the display, the core drm layer could read
> >> >> >>> that list, tick things off as every driver registers, and when
> >> >> >>> the last one joins we'd get a callback and init the drm layer.
> >> >> >>> We'd of course have the basic drm layer set up prior to that so
> >> >> >>> we can add the objects as the drivers load. It might make
> >> >> >>> development a bit trickier as you'd need to make sure someone
> >> >> >>> claimed ownership of all the bits for init to proceed.
> >> >> >>
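A rough C sketch of the "tick things off" idea above; none of these names
exist in the kernel, they are made up purely to illustrate the flow:

#include <linux/of.h>

/*
 * Hypothetical tracker: the core reads the list of required display
 * nodes from DT, each sub-driver ticks its node off as it registers,
 * and a callback fires when the last one joins.
 */
struct display_tracker {
	struct device_node **needed;	/* nodes listed in DT */
	unsigned int count;		/* how many are listed */
	unsigned int ready;		/* how many registered so far */
	void (*complete)(void *data);	/* init the drm layer here */
	void *data;
};

/* Called by each sub-driver once it has probed successfully. */
static void display_tracker_tick(struct display_tracker *t,
				 struct device_node *np)
{
	unsigned int i;

	for (i = 0; i < t->count; i++) {
		if (t->needed[i] != np)
			continue;
		if (++t->ready == t->count)
			t->complete(t->data);
		break;
	}
}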
> >> >> >> Yeah, that's basically what the strawman looked like in my head.
> >> >> >>
> >> >> >> Instead of a property in each node, I was thinking of having
> >> >> >> separate gfx pipe nodes that would have dt pointers to the various
> >> >> >> pieces involved in that pipe. This would allow us to associate
> >> >> >> standalone entities like bridges and panels with encoders in dt
> >> >> >> w/o doing it in the drm code. I *think* this should be Ok with the
> >> >> >> dt guys since it is still describing the hardware, but I think
> >> >> >> we'd have to make sure it wasn't drm-specific.
> >> >> >
> >> >> > I suppose the question is how much dynamic pipeline construction
> >> >> > there is.
> >> >> >
> >> >> > Even on things like radeon and i915 we have dynamic clock generator
> >> >> > to crtc to encoder setups, so I worry about static lists per-pipe.
> >> >> > I still think we should just state that all these devices are
> >> >> > needed for display, along with a list of valid interconnections
> >> >> > between them; then we can have the generic code model drm
> >> >> > crtc/encoders/connectors on that list and construct the
> >> >> > possible_crtcs/possible_clones etc. at that stage.
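For illustration, a small sketch of how such a list of valid
interconnections could be folded into the possible_crtcs masks; the
pipe_link table is invented here, only the possible_crtcs field is real
DRM:

#include <drm/drm_crtc.h>

/* One valid interconnection, however the DT ends up describing it. */
struct pipe_link {
	struct drm_encoder *encoder;
	unsigned int crtc_index;	/* index of a crtc that can drive it */
};

/* Fold the list of valid links into the encoders' possible_crtcs masks. */
static void set_possible_crtcs(const struct pipe_link *links,
			       unsigned int count)
{
	unsigned int i;

	for (i = 0; i < count; i++)
		links[i].encoder->possible_crtcs |= 1 << links[i].crtc_index;
}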
> >> >>
> >> >> I'm, without excuse, hopeless at devicetree, so there are probably
> >> >> some violations, but something like:
> >> >>
> >> >> display-pipelines {
> >> >>   required-elements = <&bridge-a &panel-a &encoder-x &encoder-y
> >> >>                        &crtc-x &crtc-y>;
> >> >>
> >> >>   pipe1 {
> >> >>     bridge = <&bridge-a>;
> >> >>     encoder = <&encoder-x>;
> >> >>     crtc = <&crtc-y>;
> >> >>   };
> >> >>
> >> >>   pipe2 {
> >> >>     encoder = <&encoder-x>;
> >> >>     crtc = <&crtc-x>;
> >> >>   };
> >> >>
> >> >>   pipe3 {
> >> >>     panel = <&panel-a>;
> >> >>     encoder = <&encoder-y>;
> >> >>     crtc = <&crtc-y>;
> >> >>   };
> >> >> };
> >> >>
> >> >> I'm tempted to add connector to the pipe nodes as well, so it's
> >> >> obvious which connector should be used in cases where multiple
> >> >> entities in the pipe implement drm_connector. However, I'm not sure
> >> >> if that would be NACKed by the dt people.
> >> >>
> >> >> I'm also not sure whether i915 and radeon have too many combinations
> >> >> for this to be reasonable. I suppose those devices could just use
> >> >> required-elements and leave the pipe nodes out.
> >> >
> >> > Just to put my two cents in, as one of the people involved in "the
> >> > device tree movement", I'd say that instead of creating artificial
> >> > entities, such as display-pipelines and all of the pipeX'es, the
> >> > device tree should represent relations between nodes.
> >> >
> >> > According to the generic DT bindings we already have for video
> >> > interfaces [1], your example connection layout would look as follows:
> >> Hi Tomasz
> >> Thanks for sending this along.
> >>
> >> I think the general consensus is that each drm driver should be
> >> implemented as a singular driver. That is, an N:1 binding-to-driver
> >> mapping, where there are N IP blocks. Optional devices (such as
> >> bridges and panels) probably make sense to spin off as standalone
> >> drivers.
> >
> > I believe this is a huge step backwards from current kernel design
> > standards, which prefer modularity.
> >
> > With multiple IPs in a SoC being part of the DRM subsystem, it would be
> > nice to have the possibility to compile just a subset of the support
> > for them into the kernel and load the rest as modules (e.g. a basic LCD
> > controller on a mobile phone compiled in, and external connectors, like
> > HDMI, as modules).
> >
> > Not to mention that, from a development perspective, a huge single
> > driver would be much more difficult to test and debug than several
> > smaller drivers, which could be developed separately.
> >
> > Unless there is a misunderstanding here, I think this is broken.
> >
> 
> I'll defer to Stéphane's answer here. In theory it sounds good, but
> things get messy in practice.
> 
> >> An example: exynos_drm_drv would be a platform_driver which implements
> >> drm_driver. On drm_load, it would enumerate the various dt nodes for
> >> its IP blocks and initialize them with direct calls (like
> >> exynos_drm_fimd_initialize). If the board uses a bridge (say for
> >> eDP->LVDS), that bridge driver would be a real driver with its own
> >> probe.
> >>
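A hedged sketch of that shape, using the 3.12-era drm_driver ->load()
signature; exynos_drm_fimd_initialize() is the hypothetical direct call
named above, and the compatible string is only an example:

#include <linux/of.h>
#include <drm/drmP.h>

/* Hypothetical direct init call, as described in the mail above. */
extern int exynos_drm_fimd_initialize(struct drm_device *drm,
				      struct device_node *np);

static int exynos_drm_load(struct drm_device *drm, unsigned long flags)
{
	struct device_node *np;
	int ret;

	/* Enumerate the FIMD nodes and bring them up with direct calls. */
	for_each_compatible_node(np, NULL, "samsung,exynos4210-fimd") {
		ret = exynos_drm_fimd_initialize(drm, np);
		if (ret) {
			of_node_put(np);
			return ret;
		}
	}

	/* ... repeat for HDMI, mixer, DSI and friends ... */

	return 0;
}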
> >> I think the ideal situation would be for the drm layer to manage the
> >> standalone drivers in a way that is transparent to the main driver,
> >> such that it doesn't need to know which type of hardware can hang off
> >> it. It will need to know if one exists since it might need to forego
> >> creating a connector, but it need not know anything else about it.
> >>
> >> To accomplish this, I think we need:
> >>  (1) Some way for drm to enumerate the standalone drivers, so it can
> >>      know when all of them have been probed
> >>  (2) A drm registration function that's called by the standalone
> >>      drivers once they're probed, and a hook with a drm_device pointer
> >>      called during drm_load for them to register their drm_*
> >>      implementations
> >>  (3) Something that will allow for deferred probe if the main driver
> >>      kicks off before the standalones are in; it would need to be
> >>      called before drm_platform/pci_init
> >>
> >> I think we'll need to expand on the media bindings to achieve (1).
> >
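A sketch of what (2) and (3) could look like from a standalone bridge
driver's point of view; drm_register_standalone() is invented for
illustration and is not an existing DRM call:

#include <linux/of.h>
#include <linux/platform_device.h>

struct drm_device;

/* Hypothetical registration function provided by the drm core (2). */
extern int drm_register_standalone(struct device_node *np,
				   int (*init)(struct drm_device *, void *),
				   void *data);

/* Called back with the drm_device during drm_load so the bridge can
 * register its drm_bridge/drm_connector implementation. */
static int my_bridge_init(struct drm_device *drm, void *data)
{
	return 0;
}

static int my_bridge_probe(struct platform_device *pdev)
{
	/* hardware setup would go here, then tell drm that we exist */
	return drm_register_standalone(pdev->dev.of_node,
				       my_bridge_init, NULL);
}

The main drm driver would, in turn, return -EPROBE_DEFER from its own
probe until the core reports that all such registrations have arrived,
which is what (3) boils down to.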
> > Could you elaborate on why you think so?
> >
> > I believe the video interface bindings contain everything needed for this
> > case, except, of course, some device/bus specific parts, but those are to
> > be defined by separate device/bus specific bindings.
> >
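As a rough C-side illustration of how a driver could walk the
ports/endpoints graph those bindings describe to find its neighbours, a
sketch using the of_graph helpers from <linux/of_graph.h> (which, in
their current form, appeared after this discussion):

#include <linux/of.h>
#include <linux/of_graph.h>
#include <linux/printk.h>

static void list_remote_devices(struct device_node *np)
{
	struct device_node *ep = NULL, *remote;

	/* Walk every endpoint below this device's port nodes. */
	while ((ep = of_graph_get_next_endpoint(np, ep))) {
		/* Device node on the other end of this link. */
		remote = of_graph_get_remote_port_parent(ep);
		if (remote) {
			pr_info("%s -> %s\n", np->full_name,
				remote->full_name);
			of_node_put(remote);
		}
	}
}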
> 
> AFAICT, there is no way for drm to enumerate all of the pieces that
> need probing before it loads (i.e. how do you enumerate all device
> nodes with pipe {} subnode[s]?). I've given this more thought, and I
> think the following could work without forcing unified/split drivers
> (i.e. it can be left to the driver author to choose).
> 
> If there was some way for drm to know all of the pieces that need to
> be probed/initialized before calling drm_load, it could provide an API
> for various drivers to "claim" nodes. This API would accept the
> device_node being claimed as well as an initialize hook that will be
> called back to give the standalone driver a pointer to the drm_device.
> 
> The main drm driver, which is responsible for calling
> drm_platform/pci_init, would claim the nodes it plans on implementing
> in its probe. It would then check drm to see if all required nodes had
> been claimed. If they have not been claimed, that probe would defer
> and try again later.
> 
> Once all required nodes have been "claimed", the main driver's probe
> would call drm_platform/pci_init to kick off load(). After load() has
> finished, the drm layer would then call the various standalone driver
> hooks that were registered earlier when they claimed their nodes. These
> hooks would allow each driver to register its
> crtc/encoder/bridge/connector.
> 
> Multi-driver solutions could work within this framework, as could
> integrated ones. This would also allow things like bridge drivers to
> be completely transparent.
> 
> I hope that made sense ;)
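To make the proposal above concrete, here is a hedged sketch of the
core-side bookkeeping such a "claim" API might do; every symbol below is
made up for illustration and none of it exists in mainline DRM:

#include <linux/errno.h>
#include <linux/list.h>
#include <linux/of.h>
#include <linux/slab.h>
#include <linux/types.h>

struct drm_device;

struct drm_claimed_node {
	struct list_head head;
	struct device_node *np;
	/* called back after drm_load() so the claimant can register its
	 * crtc/encoder/bridge/connector against the new drm_device */
	int (*init)(struct drm_device *drm, void *data);
	void *data;
};

static LIST_HEAD(drm_claimed_nodes);

/* Called from a driver's probe() to claim a device_node it implements. */
int drm_claim_of_node(struct device_node *np,
		      int (*init)(struct drm_device *, void *), void *data)
{
	struct drm_claimed_node *c = kzalloc(sizeof(*c), GFP_KERNEL);

	if (!c)
		return -ENOMEM;

	c->np = np;
	c->init = init;
	c->data = data;
	list_add_tail(&c->head, &drm_claimed_nodes);
	return 0;
}

/* Used by the main driver to decide whether to defer its probe: have
 * all of the required nodes been claimed yet? */
bool drm_all_nodes_claimed(struct device_node **required, unsigned int count)
{
	struct drm_claimed_node *c;
	unsigned int i, found = 0;

	for (i = 0; i < count; i++) {
		list_for_each_entry(c, &drm_claimed_nodes, head) {
			if (c->np == required[i]) {
				found++;
				break;
			}
		}
	}

	return found == count;
}

/* Run after drm_load() has finished: hand each claimant the drm_device. */
int drm_init_claimed_nodes(struct drm_device *drm)
{
	struct drm_claimed_node *c;
	int ret;

	list_for_each_entry(c, &drm_claimed_nodes, head) {
		ret = c->init(drm, c->data);
		if (ret)
			return ret;
	}

	return 0;
}

The main driver's probe would claim its own nodes with
drm_claim_of_node(), return -EPROBE_DEFER while drm_all_nodes_claimed()
comes back false, and only then call drm_platform_init(); once load()
has finished, the core would run drm_init_claimed_nodes() so each
claimant can register its crtc/encoder/bridge/connector.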

I'll just go and repeat myself in the hope of increasing the chances of
someone reading it: I recommend looking at the Tegra DRM driver, which
solves a lot of these issues already (in much the same way that you
suggest here).

The version in the tree that I've submitted for 3.13 (I think Dave
hasn't merged it yet) is improved in many ways. Unfortunately it isn't
quite as generic as I would've liked it to be and is rather tied to how
the Tegra SoC is architected, but I've volunteered elsewhere to look
into further abstracting things away in order to turn it into something
that could even be used outside of DRM. I haven't received much
feedback, though, so I have close to no idea what the requirements of
others are, and hence it's difficult to know where to start.

In case anyone's interested, there's some code here:

	http://cgit.freedesktop.org/tegra/linux/log/?h=drm/for-next

More specifically:

	http://cgit.freedesktop.org/tegra/linux/tree/drivers/gpu/host1x/bus.c?h=drm/for-next
	http://cgit.freedesktop.org/tegra/linux/tree/drivers/gpu/drm/tegra/bus.c?h=drm/for-next
	http://cgit.freedesktop.org/tegra/linux/tree/drivers/gpu/drm/tegra/drm.c?h=drm/for-next

Thierry