[PATCH v2 12/26] drm/exynos: Split manager/display/subdrv

Sean Paul seanpaul at chromium.org
Mon Nov 4 07:44:00 PST 2013


On Mon, Nov 4, 2013 at 6:30 AM, Inki Dae <inki.dae at samsung.com> wrote:
> 2013/11/4 Thierry Reding <thierry.reding at gmail.com>:
>> On Tue, Oct 29, 2013 at 08:46:03PM -0700, Stéphane Marchesin wrote:
>>> On Tue, Oct 29, 2013 at 1:50 PM, Tomasz Figa <tomasz.figa at gmail.com> wrote:
>>>
>>> > Hi Sean,
>>> >
>>> > On Tuesday 29 of October 2013 16:36:47 Sean Paul wrote:
>>> > > On Mon, Oct 28, 2013 at 7:13 PM, Tomasz Figa <tomasz.figa at gmail.com>
>>> > wrote:
>>> > > > Hi,
>>> > > >
>>> > > > On Wednesday 23 of October 2013 12:09:06 Sean Paul wrote:
>>> > > >> On Wed, Oct 23, 2013 at 11:53 AM, Dave Airlie <airlied at gmail.com>
>>> > wrote:
>>> > > >> >>>>> I think we need to start considering a framework where
>>> > > >> >>>>> subdrivers just add drm objects themselves, then the toplevel
>>> > > >> >>>>> node is responsible for knowing that everything for the
>>> > > >> >>>>> current configuration is loaded.
>>> > > >> >>>>
>>> > > >> >>>> It would be nice to specify the various pieces in dt, then
>>> > > >> >>>> have some type of drm notifier to the toplevel node when
>>> > > >> >>>> everything has been probed. Doing it in the dt would allow
>>> > > >> >>>> standalone drm_bridge/drm_panel drivers to be transparent as
>>> > > >> >>>> far as the device's drm driver is concerned.
>>> > > >> >>>>
>>> > > >> >>>> Sean
>>> > > >> >>>>
>>> > > >> >>>>> I realise we may need to make changes to the core drm to
>>> > > >> >>>>> allow this, but we should probably start to create a strategy
>>> > > >> >>>>> for fixing the API issues that this throws up.
>>> > > >> >>>>>
>>> > > >> >>>>> Note I'm not yet advocating for dynamic addition of nodes
>>> > > >> >>>>> once the device is in use, or removing them.
>>> > > >> >>>
>>> > > >> >>> I do wonder if we had some sort of tag in the device tree for
>>> > > >> >>> any nodes involved in the display, and the core drm layer would
>>> > > >> >>> read that list, and when every driver registers tick things
>>> > > >> >>> off, and when the last one joins we get a callback and init the
>>> > > >> >>> drm layer. We'd of course have the basic drm layer setup prior
>>> > > >> >>> to that so we can add the objects as the drivers load. It might
>>> > > >> >>> make development a bit trickier as you'd need to make sure
>>> > > >> >>> someone claimed ownership of all the bits for init to proceed.
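For concreteness, here's a rough sketch of the tick-off scheme described
above. Everything in it is hypothetical (no such framework exists in drm
today); it just tracks the required device nodes and fires a callback
once the last driver has registered:

#include <linux/of.h>

struct display_component {
	struct device_node *np;	/* dt node for this component */
	bool bound;		/* has its driver registered yet? */
};

static struct display_component required[16];
static unsigned int num_required;

static void display_init_complete(void);	/* hypothetical: finish drm init */

/* Each subdriver calls this as it registers its drm objects. */
void display_component_bound(struct device_node *np)
{
	unsigned int i, remaining = 0;

	for (i = 0; i < num_required; i++) {
		if (required[i].np == np)
			required[i].bound = true;
		if (!required[i].bound)
			remaining++;
	}

	/* Last one in: the basic drm layer can now be fully brought up. */
	if (!remaining)
		display_init_complete();
}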
>>> > > >> >>
>>> > > >> >> Yeah, that's basically what the strawman looked like in my head.
>>> > > >> >>
>>> > > >> >> Instead of a property in each node, I was thinking of having
>>> > > >> >> separate gfx pipe nodes that would have dt pointers to the
>>> > > >> >> various pieces involved in that pipe. This would allow us to
>>> > > >> >> associate standalone entities like bridges and panels with
>>> > > >> >> encoders in dt w/o doing it in the drm code. I *think* this
>>> > > >> >> should be Ok with the dt guys since it is still describing the
>>> > > >> >> hardware, but I think we'd have to make sure it wasn't
>>> > > >> >> drm-specific.
>>> > > >> >
>>> > > >> > I suppose the question is how much dynamic pipeline construction
>>> > > >> > there is,
>>> > > >> >
>>> > > >> > even on things like radeon and i915 we have dynamic clock
>>> > > >> > generator to crtc to encoder setups, so I worry about static
>>> > > >> > lists per-pipe. So I still think just stating all these devices
>>> > > >> > are needed for display, and a list of valid interconnections
>>> > > >> > between them; then we can have the generic code model drm
>>> > > >> > crtcs/encoders/connectors on that list, and construct the
>>> > > >> > possible_crtcs/possible_clones etc at that stage.
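As a sketch of that last point, deriving possible_crtcs from a list of
valid interconnections could be as simple as the following (the link
table and where it comes from are hypothetical):

#include <drm/drmP.h>

struct display_link {
	unsigned int crtc_index;	/* position of the crtc in mode_config */
	struct drm_encoder *encoder;
};

/* possible_crtcs is a bitmask of crtc indices; set one bit per valid link. */
static void display_build_possible_crtcs(const struct display_link *links,
					 unsigned int num_links)
{
	unsigned int i;

	for (i = 0; i < num_links; i++)
		links[i].encoder->possible_crtcs |= 1u << links[i].crtc_index;
}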
>>> > > >>
>>> > > >> I'm, without excuse, hopeless at devicetree, so there are probably
>>> > > >> some violations, but something like:
>>> > > >>
>>> > > >> display-pipelines {
>>> > > >>   required-elements = <&bridge-a &panel-a &encoder-x &encoder-y
>>> > > >>                        &crtc-x &crtc-y>;
>>> > > >>
>>> > > >>   pipe1 {
>>> > > >>     bridge = <&bridge-a>;
>>> > > >>     encoder = <&encoder-x>;
>>> > > >>     crtc = <&crtc-y>;
>>> > > >>   };
>>> > > >>
>>> > > >>   pipe2 {
>>> > > >>     encoder = <&encoder-x>;
>>> > > >>     crtc = <&crtc-x>;
>>> > > >>   };
>>> > > >>
>>> > > >>   pipe3 {
>>> > > >>     panel = <&panel-a>;
>>> > > >>     encoder = <&encoder-y>;
>>> > > >>     crtc = <&crtc-y>;
>>> > > >>   };
>>> > > >> };
>>> > > >>
>>> > > >> I'm tempted to add connector to the pipe nodes as well, so it's
>>> > > >> obvious which connector should be used in cases where multiple
>>> > > >> entities in the pipe implement drm_connector. However, I'm not
>>> > > >> sure if that would be NACKed by dt people.
>>> > > >>
>>> > > >> I'm also not sure if there are too many combinations for i915 and
>>> > > >> radeon to make this unreasonable. I suppose those devices could
>>> > > >> just use required-elements and leave the pipe nodes out.
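For what it's worth, walking those pipe nodes only needs the standard OF
helpers. A minimal sketch, following the strawman names above (error
handling trimmed):

#include <linux/of.h>

static int display_parse_pipes(struct device_node *disp)
{
	struct device_node *pipe;

	for_each_child_of_node(disp, pipe) {
		struct device_node *enc, *crtc;

		enc = of_parse_phandle(pipe, "encoder", 0);
		crtc = of_parse_phandle(pipe, "crtc", 0);
		if (!enc || !crtc) {
			of_node_put(pipe);	/* drop the iterator's reference */
			return -EINVAL;
		}

		/* register/associate the drm objects for this pipe here */

		of_node_put(enc);
		of_node_put(crtc);
	}

	return 0;
}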
>>> > > >
>>> > > > Just to put my two cents in, as one of the people involved in "the
>>> > > > device tree movement", I'd say that instead of creating artificial
>>> > > > entities, such as display-pipelines and all of the pipeX'es, the
>>> > > > device tree should represent relations between nodes.
>>> > > >
>>> > > > According to the generic DT bindings we already have for
>>> > > > video-interfaces [1], your example connection layout would look as
>>> > > > follows:
>>> > > Hi Tomasz
>>> > > Thanks for sending this along.
>>> > >
>>> > > I think the general consensus is that each drm driver should be
>>> > > implemented as a single driver. That is, an N:1 block-to-driver
>>> > > mapping, where there are N IP blocks. Optional devices (such as
>>> > > bridges and panels) probably make sense to spin off as standalone
>>> > > drivers.
>>> >
>>> > I believe this is a huge step backwards from current kernel design
>>> > standards, which prefer modularity.
>>> >
>>> > With multiple IPs being part of the DRM subsystem in a SoC, it would
>>> > be nice to have the option of compiling just a subset of the support
>>> > for them into the kernel and loading the rest as modules (e.g. the
>>> > basic LCD controller on a mobile phone compiled in, and external
>>> > connectors, like HDMI, as modules).
>>> >
>>> > Not to mention that, from a development perspective, a huge single
>>> > driver would be much more difficult to test and debug than several
>>> > smaller drivers, which could be developed separately.
>>> >
>>>
>>> This is the opposite of our experience, though. A series of small
>>> drivers like what's in drm/exynos can become really tedious and
>>> difficult to coordinate. If you have separate drivers, everything
>>> needs to be synchronized, and it also has to account for a potentially
>>> different loading order.
>>>
>>> It seems you're only thinking about the basic case, where you only
>>> support a single resolution, no dpms, no suspend to ram... But when
>>> you want full-fledged functionality, the issues I described become
>>> much more prevalent.
>>
>> I fail to see how this is relevant here. It's fairly clear that even if
>> a DRM driver is composed of more than a single platform driver, there's
>> still a single point of coordination (the DRM driver). How does that
>> have any impact on what features the driver can support? All of the
>> features will be exposed via DRM, whether you use multiple drivers or a
>> single monolithic one underneath is completely irrelevant.
>>
>
> +1
>
> I think a single drm driver - with all sub-drivers controlled by the
> dpms of the top level - is definitely what we should move to, but it's
> not yet clear that a single drm driver necessarily has to be one huge
> driver. Even if we use one or more sub-drivers based on the driver
> model, we can avoid the loading and power ordering issues.
>
> So I'd like to ask the Google people a question. Are there really any
> cases where the loading and power ordering issues can occur when a drm
> driver uses one or more sub-drivers based on the driver model?

I think it makes the code simpler, and more explicit. If you don't
support dynamic module loading, what's the point in creating modules?

To avoid the init and suspend/resume ordering issues, you must register
the subdrivers from the main drm driver, and you must not use pm_ops.
So why incur the extra complexity of modules when we only use the probe,
and that could easily be converted to:

#ifdef CONFIG_SOME_GFX_BLOCK
	init_some_gfx_block();
#endif
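To illustrate (function and config names hypothetical), the top-level
driver's load path would then pin down the init order explicitly:

#include <drm/drmP.h>

/* Hypothetical per-block init hooks, each compiled in via Kconfig. */
int init_some_gfx_block(struct drm_device *dev);
int init_some_other_gfx_block(struct drm_device *dev);

static int gfx_drm_load(struct drm_device *dev, unsigned long flags)
{
	int ret;

	/* Init order is fixed here, so there is no module load
	 * ordering to worry about. */
#ifdef CONFIG_SOME_GFX_BLOCK
	ret = init_some_gfx_block(dev);
	if (ret)
		return ret;
#endif
#ifdef CONFIG_SOME_OTHER_GFX_BLOCK
	ret = init_some_other_gfx_block(dev);
	if (ret)
		return ret;
#endif

	return 0;
}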

Sean


> With the re-factoring patch from Sean, I think the Exynos drm driver
> no longer has these issues either, even though the Exynos drm driver
> uses one or more sub-drivers based on the driver model. That is why
> it's not yet clear to me.
>
> Thanks,
> Inki Dae
>
>> Thierry
>>

