[PATCH v2 12/26] drm/exynos: Split manager/display/subdrv

Tomasz Figa tomasz.figa at gmail.com
Tue Oct 29 21:50:55 CET 2013


Hi Sean,

On Tuesday 29 of October 2013 16:36:47 Sean Paul wrote:
> > On Mon, Oct 28, 2013 at 7:13 PM, Tomasz Figa <tomasz.figa at gmail.com> wrote:
> > Hi,
> > 
> > On Wednesday 23 of October 2013 12:09:06 Sean Paul wrote:
> >> On Wed, Oct 23, 2013 at 11:53 AM, Dave Airlie <airlied at gmail.com> wrote:
> >> >>>>> I think we need to start considering a framework where
> >> >>>>> subdrivers just add drm objects themselves, then the toplevel
> >> >>>>> node is responsible for knowing that everything for the
> >> >>>>> current configuration is loaded.
> >> >>>> 
> >> >>>> It would be nice to specify the various pieces in dt, then have
> >> >>>> some type of drm notifier to the toplevel node when everything
> >> >>>> has been probed. Doing it in the dt would allow standalone
> >> >>>> drm_bridge/drm_panel drivers to be transparent as far as the
> >> >>>> device's drm driver is concerned.
> >> >>>> 
> >> >>>> Sean
> >> >>>> 
> >> >>>>> I realise we may need to make changes to the core drm to allow
> >> >>>>> this but we should probably start to create a strategy for
> >> >>>>> fixing the API issues that this throws up.
> >> >>>>> 
> >> >>>>> Note I'm not yet advocating for dynamic addition of nodes once
> >> >>>>> the device is in use, or removing them.
> >> >>> 
> >> >>> I do wonder if we had some sort of tag in the device tree for
> >> >>> any nodes involved in the display, and the core drm layer would
> >> >>> read that list, and when every driver registers tick things off,
> >> >>> and when the last one joins we get a callback and init the drm
> >> >>> layer. We'd of course have the basic drm layer setup prior to
> >> >>> that so we can add the objects as the drivers load. It might
> >> >>> make development a bit trickier as you'd need to make sure
> >> >>> someone claimed ownership of all the bits for init to proceed.
> >> >> 
> >> >> Yeah, that's basically what the strawman looked like in my head.
> >> >> 
> >> >> Instead of a property in each node, I was thinking of having
> >> >> separate gfx pipe nodes that would have dt pointers to the
> >> >> various pieces involved in that pipe. This would allow us to
> >> >> associate standalone entities like bridges and panels with
> >> >> encoders in dt w/o doing it in the drm code. I *think* this
> >> >> should be Ok with the dt guys since it is still describing the
> >> >> hardware, but I think we'd have to make sure it wasn't
> >> >> drm-specific.
> >> > 
> >> > I suppose the question is how much dynamic pipeline construction
> >> > there is.
> >> > 
> >> > Even on things like radeon and i915 we have dynamic clock
> >> > generator to crtc to encoder setups, so I worry about static
> >> > lists per-pipe, so I still think just stating all these devices
> >> > are needed for display and a list of valid interconnections
> >> > between them, then we can have the generic code model drm
> >> > crtc/encoders/connectors on that list, and construct the
> >> > possible_crtcs / possible_clones etc at that stage.
> >> 
> >> I'm, without excuse, hopeless at devicetree, so there are probably
> >> some violations, but something like:
> >> 
> >> display-pipelines {
> >>   required-elements = <&bridge-a &panel-a &encoder-x &encoder-y
> >>                        &crtc-x &crtc-y>;
> >> 
> >>   pipe1 {
> >>     bridge = <&bridge-a>;
> >>     encoder = <&encoder-x>;
> >>     crtc = <&crtc-y>;
> >>   };
> >> 
> >>   pipe2 {
> >>     encoder = <&encoder-x>;
> >>     crtc = <&crtc-x>;
> >>   };
> >> 
> >>   pipe3 {
> >>     panel = <&panel-a>;
> >>     encoder = <&encoder-y>;
> >>     crtc = <&crtc-y>;
> >>   };
> >> };
> >> 
> >> I'm tempted to add connector to the pipe nodes as well, so it's
> >> obvious which connector should be used in cases where multiple
> >> entities in the pipe implement drm_connector. However, I'm not sure
> >> if that would be NACKed by dt people.
> >> 
> >> I'm also not sure if there are too many combinations for i915 and
> >> radeon to make this unreasonable. I suppose those devices could
> >> just use required-elements and leave the pipe nodes out.
> > 
> > Just to put my two cents in, as one of the people involved in "the
> > device tree movement", I'd say that instead of creating artificial
> > entities, such as display-pipelines and all of the pipeX'es, the
> > device tree should represent relations between nodes.
> > 
> > According to the generic DT bindings we already have for
> > video-interfaces [1], your example connection layout would look as
> > follows:
> 
> Hi Tomasz,
> 
> Thanks for sending this along.
> 
> I think the general consensus is that each drm driver should be
> implemented as a singular driver, i.e. an N:1 mapping of IP blocks to
> driver, where there are N IP blocks. Optional devices (such as
> bridges and panels) probably make sense to spin off as standalone
> drivers.

I believe this is a huge step backwards from current kernel design
standards, which prefer modularity.

With multiple IP blocks on a SoC being part of the DRM subsystem, it 
would be nice to have the possibility of compiling support for just a 
subset of them into the kernel and loading the rest as modules (e.g. the 
basic LCD controller on a mobile phone compiled in, and external 
connectors, like HDMI, as modules).

Not to mention that, from a development perspective, a huge single 
driver would be much more difficult to test and debug than several 
smaller drivers, which could be developed separately.

Unless there is a misunderstanding here, I think this is broken.

> An example: exynos_drm_drv would be a platform_driver which implements
> drm_driver. On drm_load, it would enumerate the various dt nodes for
> its IP blocks and initialize them with direct calls (like
> exynos_drm_fimd_initialize). If the board uses a bridge (say for
> eDP->LVDS), that bridge driver would be a real driver with its own
> probe.
> 
> I think the ideal situation would be for the drm layer to manage the
> standalone drivers in a way that is transparent to the main driver,
> such that it doesn't need to know which type of hardware can hang off
> it. It will need to know if one exists since it might need to forego
> creating a connector, but it need not know anything else about it.
> 
> To accomplish this, I think we need:
>  (1) Some way for drm to enumerate the standalone drivers, so it can
> know when all of them have been probed
>  (2) A drm registration function that's called by the standalone
> drivers once they're probed, and a hook with drm_device pointer called
> during drm_load for them to register their drm_* implementations
>  (3) Something that will allow for deferred probe if the main driver
> kicks off before the standalones are in, it would need to be called
> before drm_platform/pci_init
> 
> I think we'll need to expand on the media bindings to achieve (1).

Could you elaborate on why you think so?

I believe the video interface bindings contain everything needed for this 
case, except, of course, some device/bus specific parts, but those are to 
be defined by separate device/bus specific bindings.
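
To make this concrete, here is a rough sketch of how one link from your 
example, say an LCD controller feeding a bridge, could be described 
with those bindings. All node and label names below are made up for 
illustration; only the ports/endpoint/remote-endpoint structure comes 
from the video-interfaces bindings:

	/* LCD controller; names are examples only */
	fimd: fimd@11c00000 {
		/* ... device specific properties ... */

		ports {
			#address-cells = <1>;
			#size-cells = <0>;

			/* RGB output, wired to the bridge input */
			port@0 {
				reg = <0>;
				fimd_ep: endpoint {
					remote-endpoint = <&bridge_in_ep>;
				};
			};
		};
	};

	bridge: lvds-bridge@20 {
		/* ... device specific properties ... */

		/* single input port, wired back to the LCD controller */
		port {
			bridge_in_ep: endpoint {
				remote-endpoint = <&fimd_ep>;
			};
		};
	};

Each device describes only its own ports, and the remote-endpoint 
phandles link them both ways, so walking the resulting graph gives the 
core enough information to enumerate every device in a pipeline and to 
tell when all of them have been probed, without inventing any 
DRM-specific nodes.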

Best regards,
Tomasz


