[RFC 0/5] Generic panel framework
Tomi Valkeinen
tomi.valkeinen@ti.com
Wed Oct 31 07:20:42 PDT 2012
On 2012-10-31 15:13, Laurent Pinchart wrote:
>> OMAP SoC
>> ========
>>
>> So here are, first, the SoC-specific display nodes. OMAP has a DSS (display
>> subsystem) block, which contains the following elements:
>>
>> - DISPC (display controller) reads the pixels from memory and outputs
>> them using the specified video timings. DISPC has three outputs: LCD0, LCD1
>> and TV. These are SoC-internal outputs; they do not go outside the SoC.
>>
>> - DPI gets its data from DISPC's LCD0, and outputs MIPI DPI (parallel
>> RGB)
>>
>> - Two independent DSI modules, which get their data from LCD0 or LCD1,
>> and output MIPI DSI (a serial two-way video bus)
>>
>> - HDMI gets its data from DISPC's TV output and outputs HDMI
>>
>> / {
>>     ocp {
>>         dss {
>>             dispc {
>>                 dss-lcd0: output@0 {
>>                 };
>>
>>                 dss-lcd1: output@1 {
>>                 };
>>
>>                 dss-tv: output@2 {
>>                 };
>>             };
>>
>>             dpi: dpi {
>>                 video-source = <&dss-lcd0>;
>>             };
>>
>>             dsi0: dsi@0 {
>>                 video-source = <&dss-lcd0>;
>>             };
>>
>>             dsi1: dsi@1 {
>>                 video-source = <&dss-lcd1>;
>>             };
>>
>>             hdmi: hdmi {
>>                 video-source = <&dss-tv>;
>>             };
>>         };
>>     };
>> };
>>
>> I have defined all the relevant nodes, and the video-source property is used
>> to point to the source of the video data. I also define labels for the SoC
>> outputs so that panels can refer to them.
>>
>> One thing to note is that the video sources for some of the blocks, like
>> DSI, are not hardcoded in the HW, so dsi0 could get its data from LCD0
>> or LCD1.
>
> What about the sources that are hardwired in hardware? Shouldn't those be
> hardcoded in the driver instead?
Even though the DSI and the DISPC are both parts of the OMAP DSS, and part of
the SoC, they are separate IPs. We should look at them the same way we'd
look at chips that are outside the SoC.
So things that are internal to a device can (and I think should) be
hardcoded in the driver, but integration details, the connections
between the IPs, etc, should be described in the DT data.
Then again, we do have (and need) a driver for the "dss" node in the
above DT data. This dss represents dss_core, a "glue" IP that contains
the rest of the DSS blocks and muxes and such. It could be argued that
this dss_core driver does indeed know the integration details, and thus
there's no need to represent them in the DT data.
However, I do think that we should represent the DISPC outputs with
generic display entities inside CPF, just like the DSI blocks and the panels.
And we do need to set up the connections between these entities. So the
question is: do we get those connections from the DT data, or are they
hardcoded in the dss_core driver?
I don't currently have a strong opinion either way. Both make sense to me.
But I think this is specific to the SoC driver implementation, and the
common panel framework doesn't need to force this in either direction. Both
should be possible from CPF's point of view.
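Just to illustrate the difference (a rough sketch only, with made-up node
names): if the internal connections were hardcoded in the dss_core driver,
the SoC .dtsi could drop the DISPC output nodes and the internal
video-source properties altogether, something like:

dss {
    dispc {
    };

    dsi0: dsi@0 {
        /* the DISPC output feeding this block is known only to the
         * dss_core driver, so nothing is described here */
    };
};

whereas with the DT-described approach the links stay visible in the data,
as in the example above.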
>> However, I don't think they are usually changed at runtime, and the dss
>> driver certainly cannot change them independently (meaning that some upper
>> layer should tell it how to change the config). Thus I specify sane defaults
>> here, but the board dts files can of course override the video sources.
>
> I'm not sure whether default settings like those really belong to the DT. I'm
> no expert on that topic though.
I agree. But I also don't have a good solution for how the driver would
pick good defaults for these settings...
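For reference, the kind of board-level override I had in mind would look
roughly like this (just a sketch, presuming the SoC .dtsi sets dsi1's
default source to LCD1 as in the example above):

&dsi1 {
    /* this board routes DISPC LCD0 to DSI1 instead of the default LCD1 */
    video-source = <&dss-lcd0>;
};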
>> Another thing to note is that we have more outputs from OMAP than we have
>> outputs from DISPC. This means that the same video source is used by
>> multiple sinks (LCD0 used by DPI and DSI0). DPI and DSI0 cannot be used at
>> the same time, obviously.
>
> It might not be really obvious, as I don't see what prevents DPI and DSI0 from
> being used at the same time :-) Do they share physical pins?
I think they do, but even if they didn't, there's just one source for
two outputs. So if the SoC design is such that the only video source for
DPI is LCD0, and the only video source for DSI0 is LCD0, and presuming
you can't send the video data to both destinations, then only one of DPI
and DSI0 can be enabled at a time.
Even if LCD0 could send the pixel stream to both DPI and DSI0, it'd be
"interesting": the video timings and pixel clock are programmed in the LCD0
output, so both DPI and DSI0 would get the same timings. If DPI wanted to
use some other mode, DSI would most likely go nuts.
So my opinion is that we should only allow 1:1 connections between sources
and sinks. If a component has multiple outputs, it should define multiple
sources, even if those outputs carry the exact same data.
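A hypothetical component with two outputs carrying the same data would thus
still describe them as two separate output nodes, roughly like this (all the
node names here are made up, just to show the shape of the data):

splitter {
    splitter-out0: output@0 {
    };

    splitter-out1: output@1 {
    };
};

panel1 {
    video-source = <&splitter-out0>;
};

panel2 {
    video-source = <&splitter-out1>;
};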
>> And a third thing to note: the DISPC node defines its outputs explicitly,
>> as it has multiple outputs, whereas the external blocks do not, as they
>> have only one output each. Thus the node's output is implicitly the node
>> itself. So, instead of having:
>>
>> dsi0: dsi@0 {
>>     video-source = <&dss-lcd0>;
>>
>>     dsi0-out0: output@0 {
>>     };
>> };
>>
>> I have:
>>
>> dsi0: dsi@0 {
>>     video-source = <&dss-lcd0>;
>> };
>
> What about defining the data sinks instead of the data sources? I find it
> more logical for the DSS to get the panel it's connected to than the other way
> around.
Good question. We look at this from different angles. Is the DSS
connected to the panel, or is the panel connected to the DSS? ;)
I see this the same way as, say, regulators. We have a device and a
driver for it, and the driver wants to use a hw resource. If the
resource is a regulator, the DT data for the device will contain a
reference to the regulator. The driver will then get the regulator, and
use it to operate the device.
Similarly for the display devices. We have a driver for, say, an i2c
device. The DT data for that device will contain a reference to the
source of the video data. The driver for the device will then use this
reference to enable/disable/etc. the video stream (i.e. operate the device).
The regulators don't even really know anything about the users of the
regulators. Somebody just "gets" a regulator and enables it. The same
should work for display entities also. There's no need for the video
source to know anything about the sink.
So looking at a chain like:
DISPC -> DPI -> Panel
I see the component on the right side using the component on its left.
And thus it makes sense to me to have references from right to left,
i.e. the panel has a reference to DPI, etc.
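In DT terms, the panel node would then look much like any other resource
user, something along these lines (only a sketch; the supply name and the
regulator label are made up):

panel {
    vcc-supply = <&panel_vcc_reg>;  /* regulator, referenced by its user */
    video-source = <&dpi>;          /* video source, referenced the same way */
};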
Also, it makes sense with DT data files. For example, for omap4 we'll
have omap4.dtsi, which describes common omap4 details. This file would
contain the omap dss nodes. Then we have omap4-panda.dts, which includes
omap4.dtsi. omap4-panda.dts contains data for the displays on this
board. Thus we have something like:
omap4.dtsi:

dss {
    dsi0: dsi@0 {
        ...
    };
};

omap4-panda.dts:

/ {
    panel {
        video-source = <&dsi0>;
    };
};
So it comes out quite nicely. If we had the references the other way
around, omap4-panda.dts would need to have the panel definition, and
also override the dsi0 node from omap4.dtsi, and set the sink to the panel.
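With reversed links, omap4-panda.dts would need to look something like this
instead (a sketch only; "video-sink" is a made-up property name):

/ {
    panel: panel {
    };
};

&dsi0 {
    video-sink = <&panel>;
};

i.e. the board file both defines the panel and reaches back into the SoC's
dsi0 node.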
I have to say we've had this mindset with omapdss for a long time, so I
may be a bit blind to the alternatives. Do you have any practical examples
of how linking the other way around would be better?
>> I'm a bit unsure about this. I believe in most cases there's only one
>> output, so it'd be nice to have a shorter representation, but I'm not sure
>> if it's good to handle the single-output and multiple-output cases
>> differently. Or if it's good to mix control and data buses, as, for
>> example, dsi0 can be used as both a control and a data bus. Having the
>> output defined explicitly would separate the control and data bus nodes.
>>
>>
>> Simple DPI panel
>> ================
>>
>> Here a board has a DPI panel, which is controlled via i2c. Panel nodes
>> are children of the control bus, so in this case we define the panel
>> under i2c2.
>>
>> &i2c2 {
>>     dpi-lcd-panel {
>>         video-source = <&dpi>;
>>     };
>> };
>>
>>
>> HDMI
>> ====
>>
>> OMAP has an HDMI output, but it cannot be connected directly to an HDMI
>> cable. TI uses the tpd12s015 chip on its boards, which provides ESD
>> protection, level shifting and whatnot (I'm an SW guy, google the chip to
>> read the details =). The tpd12s015 has a few GPIOs and power supplies that
>> need to be controlled, so we need a driver for it.
>>
>> There's no control bus for the tpd12s015, so it's a platform device. Then
>> there's the device for the HDMI monitor, and its DDC lines are connected
>> to OMAP's i2c4, so the HDMI monitor device is a child of i2c4.
>>
>> / {
>>     hdmi-connector: tpd12s015 {
>>         video-source = <&hdmi>;
>>     };
>> };
>>
>> &i2c4 {
>>     hdmi-monitor {
>>         video-source = <&hdmi-connector>;
>>     };
>> };
>
> So this implies we would have the following chain?
>
> DISPC (on SoC) -> HDMI (on SoC) -> TPD12S015 (on board) -> HDMI monitor (off
> board)
Yes.
Although to be honest, I'm not sure if the TPD12S015 should be part of the
chain or somehow separate. It's such a simple, pass-through kind of chip
that it would be nicer to have the HDMI monitor connected directly to the
OMAP HDMI output. But this is an OMAP-specific question, not really a CPF
thing.
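In DT that alternative would look roughly like this (a sketch only), with
the tpd12s015 handled outside the video chain:

&i2c4 {
    hdmi-monitor {
        video-source = <&hdmi>;
    };
};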
> Should we then have a driver for the HDMI monitor?
Yes, that is my opinion:
- It would be strange if for some video chains we had a panel device at the
end of the chain, and for some we didn't. I guess we'd need some trickery to
make both kinds of chains work the same way if there were no panel devices
in some cases.
- While in 99% of the cases we can use a common, simple HDMI monitor
driver, there could be HDMI monitors with special features. We could detect
such a monitor by reading the EDID and, if it's a special case, use a
specific driver for it instead of the common HDMI driver. This is perhaps
not very likely with HDMI, but I could imagine eDP panels with special
features.
So I imagine that we could use hotplug here. The HDMI monitor device would
not exist until an HDMI cable is connected. The SoC's HDMI driver (or
whatever is before the HDMI monitor in the chain) gets a hotplug interrupt,
would then add the device, and that would in turn trigger the probe of the
corresponding HDMI monitor driver.
Actually, thinking about this, what I said in the above paragraph wouldn't
work. The SoC's HDMI driver can't know what kind of device to create,
unless we have an HDMI bus and HDMI devices. Which, I think, we shouldn't
have, as HDMI monitors are usually controlled via i2c, and thus they should
be i2c devices.
So I don't really know how this hotplug would work =). It's just an
idea, not a scenario I have at hand.
Tomi