[PATCH/RFC v3 00/19] Common Display Framework

Tomi Valkeinen tomi.valkeinen at ti.com
Mon Sep 2 04:06:04 PDT 2013


On 09/08/13 20:14, Laurent Pinchart wrote:
> Hi everybody,
> 
> Here's the third RFC of the Common Display Framework.
> 
> I won't repeat all the background information from versions one and two
> here, you can read it at http://lwn.net/Articles/512363/ and
> http://lwn.net/Articles/526965/.
> 
> This RFC isn't final. Given the high interest in CDF and the urgent tasks that
> kept delaying the next version of the patch set, I've decided to release v3
> before completing all parts of the implementation. Known missing items are
> 
> - documentation: kerneldoc and this cover letter should provide basic
>   information, more extensive documentation will likely make it to v4.
> 
> - pipeline configuration and control: generic code to configure and control
>   display pipelines (in a nutshell, translating high-level mode setting and
>   DPMS calls to low-level entity operations) is missing. Video and stream
>   control operations have been carried over from v2, but will need to be
>   revised for v4.
> 
> - DSI support: I still have no DSI hardware I can easily test the code on.
> 
> Special thanks go to
> 
> - Renesas for inviting me to LinuxCon Japan 2013 where I had the opportunity
>   to validate the CDF v3 concepts with Alexandre Courbot (NVidia) and Tomasz
>   Figa (Samsung).
> 
> - Tomi Valkeinen (TI) for taking the time to deeply brainstorm v3 with me.
> 
> - Linaro for inviting me to Linaro Connect Europe 2013; the discussions we had
>   there greatly helped move CDF forward.
> 
> - And of course all the developers who showed interest in CDF and spent time
>   sharing ideas, reviewing patches and testing code.
> 
> I have to confess I was a bit lost and discouraged after all the CDF-related
> meetings during which we discussed how to move from v2 to v3. With every
> meeting I was hoping to run the implementation through the use cases of
> various interested parties and narrow down the scope of the huge fuzzy beast
> that CDF was. With every meeting the scope actually broadened, with no clear
> path in sight anywhere.
> 
> Earlier this year I was about to drop one of the requirements on which I had
> based CDF v2: sharing drivers between DRM/KMS and V4L2. With only two HDMI
> transmitters as use cases for that feature (with only out-of-tree drivers so
> far), I just thought the complexity involved wasn't worth it and that I should
> implement CDF v3 as a DRM/KMS-only helper framework. However, a seemingly
> unrelated discussion with Xilinx developers showed me that hybrid SoC-FPGA
> platforms such as the Xilinx Zynq 7000 have a large library of IP cores that
> can be used both in camera capture pipelines and in display pipelines. The two
> use cases suddenly became tens or even hundreds of use cases that I couldn't
> ignore anymore.

Should this be Common Video Framework then? ;)

> CDF v3 is thus userspace API agnostic. It isn't tied to DRM/KMS or V4L2 and
> can be used by any kernel subsystem, potentially including FBDEV (although I
> won't personally write FBDEV support code, as I've already advocated for FBDEV
> to be deprecated).
> 
> The code you are about to read is based on the concept of display entities
> introduced in v2. Diagrams related to the explanations below are available at
> http://ideasonboard.org/media/cdf/20130709-lce-cdf.pdf.
> 
> 
> Display Entities
> ----------------
> 
> A display entity abstracts any hardware block that sources, processes or sinks
> display-related video streams. It offers an abstract API, implemented by display
> entity drivers, that is used by master drivers (such as the main display driver)
> to query, configure and control display pipelines.
> 
> Display entities are connected to at least one video data bus, and optionally
> to a control bus. The video data busses carry display-related video data out
> of sources (such as a CRTC in a display controller) to sinks (such as a panel
> or a monitor), optionally going through transmitters, encoders, decoders,
> bridges or other similar devices. A CRTC or a panel will usually be connected
> to a single data bus, while an encoder or a transmitter will be connected to
> two data busses.
> 
> The simple linear display pipelines we find in most embedded platforms at the
> moment are expected to grow more complex with time. CDF needs to accommodate
> those needs from the start to be, if not future-proof, at least present-proof
> at the time it gets merged into mainline. For this reason display
> entities have data ports through which video streams flow in or out, with link
> objects representing the connections between those ports. A typical entity in
> a linear display pipeline will have one port (for video source and video sink
> entities such as CRTCs or panels) or two ports (for video processing entities
> such as encoders), but more ports are allowed, and entities can be linked in
> complex non-linear pipelines.
> 
> Readers might think that this model is extremely similar to the media
> controller graph model. They would be right, and given my background this is
> most probably not a coincidence. The CDF v3 implementation uses the in-kernel
> media controller framework to model the graph of display entities, with the
> display entity data structure inheriting from the media entity structure. The
> display pipeline graph topology will be automatically exposed to userspace
> through the media controller API as an added bonus. However, usage of the
> media controller userspace API in applications is *not* mandatory, and the
> current CDF implementation doesn't use the media controller link setup
> userspace API to configure the display pipelines.

I have yet to look at the code. I'm just wondering, do you see any
downsides to using the media controller here, instead of a CDF-specific entity?

> While some display entities don't require any configuration (DPI panels are a
> good example), many of them are connected to a control bus accessible to the
> CPU. Control requests can be sent on a dedicated control bus (such as I2C or
> SPI) or multiplexed on a mixed control and data bus (such as DBI or DSI). To
> support both options the CDF display entity model separates the control and
> data busses into different APIs.
> 
> Display entities are abstract objects that must be implemented by a real
> device. The device sits on its control bus and is registered with the Linux
> device core and matched with its driver using the control bus specific API.
> The CDF doesn't create a display entity class or bus; display entity drivers
> are thus standard Linux kernel drivers using existing busses. A DBI bus is added

I have no idea what the above means =). I guess the point is that CDF
doesn't create the display entities, devices or busses; it's the
standard Linux drivers that will create the CDF display entities?

> as part of this patch set, but strictly speaking this isn't part of CDF.
> 
> When a display entity driver probes a device it must create an instance of the
> display_entity structure, initialize it and add it to the CDF core entities
> pool. The display entity exposes abstract operations through function
> pointers, and the entity driver must implement those operations. Those
> operations can act on either the whole entity or on a given port, depending on
> the operation. They are divided into two groups: control operations and video
> operations.
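> 
> As a rough illustration, an entity driver's probe could look like the
> sketch below. The structure layout and the function names are only my
> shorthand for what the patches do; treat everything here as illustrative.
> 
> #include <linux/module.h>
> #include <linux/slab.h>
> #include <linux/spi/spi.h>
> 
> /* Hypothetical panel private data, embedding the display entity. */
> struct foo_panel {
> 	struct display_entity entity;
> };
> 
> /* Control operations implemented by this driver (see next section). */
> static const struct display_entity_control_ops foo_panel_control_ops;
> 
> static int foo_panel_probe(struct spi_device *spi)
> {
> 	struct foo_panel *panel;
> 
> 	panel = devm_kzalloc(&spi->dev, sizeof(*panel), GFP_KERNEL);
> 	if (panel == NULL)
> 		return -ENOMEM;
> 
> 	/* Initialize the entity and hook up the abstract operations. */
> 	panel->entity.dev = &spi->dev;
> 	panel->entity.ops.ctrl = &foo_panel_control_ops;
> 
> 	spi_set_drvdata(spi, panel);
> 
> 	/* Add the entity to the CDF core entities pool. */
> 	return display_entity_register(&panel->entity);
> }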
> 
> 
> Control Operations
> ------------------
> 
> Control operations are called by upper-level drivers, usually in response to a
> request originating from userspace. They query or control the display entity
> state and operation. Currently defined control operations are
> 
> - get_size(), to retrieve the entity physical size (applicable to panels only)
> - get_modes(), to retrieve the video modes supported at an entity port
> - get_params(), to retrieve the data bus parameters at an entity port
> 
> - set_state(), to control the state of the entity (off, standby or on)
> - update(), to trigger a display update (for entities that implement manual
>   update, such as manual-update panels that store frames in their internal
>   frame buffer)
> 
> The last two operations have been carried over from v2 and will be reworked.
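> 
> Roughly, and with signatures that are only indicative, these operations
> map to a structure of function pointers along these lines:
> 
> /* Indicative sketch only, not the exact signatures from the patches. */
> struct display_entity_control_ops {
> 	/* Applicable to panels only: physical size in millimetres. */
> 	int (*get_size)(struct display_entity *ent,
> 			unsigned int *width, unsigned int *height);
> 	/* Per-port queries. */
> 	int (*get_modes)(struct display_entity *ent, unsigned int port,
> 			 const struct videomode **modes);
> 	int (*get_params)(struct display_entity *ent, unsigned int port,
> 			  struct display_entity_interface_params *params);
> 	/* Carried over from v2, to be reworked. */
> 	int (*set_state)(struct display_entity *ent,
> 			 enum display_entity_state state);
> 	int (*update)(struct display_entity *ent);
> };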
> 
> 
> Pipeline Control
> ----------------
> 
> The figure on page 4 shows the control model for a linear pipeline. This
> differs significantly from CDF v2, where calls were forwarded from entity to
> entity using a Russian doll model. v3 removes the need for neighbour awareness
> from entity drivers, simplifying them. The complexity of pipeline
> configuration is moved to a central location called a pipeline controller
> instead of being spread out to all drivers.
> 
> Pipeline controllers provide library functions that display drivers can use to
> control a pipeline. Several controllers can be implemented to accommodate the
> needs of various pipeline topologies and complexities, and display drivers can
> even implement their own pipeline control algorithm if needed. I'm working on a
> linear pipeline controller for the next version of the patch set.
> 
> While pipeline controllers are responsible for propagating a pipeline
> configuration on all entity ports in the pipeline, entity drivers are responsible for
> propagating the configuration inside entities, from sink (input) to source
> (output) ports as illustrated on page 5. The rationale behind this is that
> knowledge of the entity internals is located in the entity driver, while
> knowledge of the pipeline belongs to the pipeline controller. The controller
> will thus configure the pipeline by performing the following steps:
> 
> - apply a configuration on the sink ports of an entity
> - read the configuration that the entity driver has propagated to its
>   source ports
> - optionally, modify the source port configuration (to configure custom
>   timings, scaling or other parameters, if supported by the entity)
> - propagate the source port configuration to the sink ports of the next
>   entities in the pipeline and start over
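> 
> In pseudo-C, a linear pipeline controller would thus run a loop along
> these lines (all helper names below are illustrative, none of them exist
> yet):
> 
> /* Illustrative only: walk the pipeline from source to sink, letting
>  * each entity driver propagate the configuration internally.
>  */
> static int linear_pipeline_configure(struct display_pipeline *pipe,
> 				     const struct videomode *mode)
> {
> 	struct videomode cfg = *mode;
> 	struct display_entity *ent;
> 	int ret;
> 
> 	for_each_pipeline_entity(ent, pipe) {
> 		/* Apply the configuration on the entity's sink ports;
> 		 * the entity driver propagates it to its source ports.
> 		 */
> 		ret = display_entity_set_sink_config(ent, &cfg);
> 		if (ret < 0)
> 			return ret;
> 
> 		/* Read back the propagated source port configuration
> 		 * (optionally adjusting it) and carry it over to the
> 		 * next entity's sink ports on the next iteration.
> 		 */
> 		display_entity_get_source_config(ent, &cfg);
> 	}
> 
> 	return 0;
> }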

First, I find "sink" and "source" somewhat confusing here. Maybe it's
just me... If I understand correctly, "sink port of an entity" means a
port on an entity which is receiving data, so, say, the port on a panel.
And "source port of an entity" is a port to which an entity writes data.

Wouldn't "input port" and "output port" be more clear?

Another thing. We have discussed this a few times, and also discussed it
the last time you were in Helsinki. But it's still a bit unclear to me
whether the configuration should go "downstream", as you describe, or
"upstream", as omapdss does.

If we look at a single entity in the pipeline, I think we can describe
the two different approaches like this:

Downstream model: "Hey entity, here's the video format you will be
getting. What kind of output video format do you give for that input?"

Upstream model: "Hey entity, we need this video output from you. What
kind of input do you need to produce the output?"

I think both models have complexities/issues, but if I forget the
issues, I think the upstream model is more powerful, and maybe even more
"correct":

At the end of the pipeline, we have a monitor or a panel. We want the
monitor/panel to show a picture with some particular video mode. So the
job for the pipeline controller is to find out settings for each display
entity to produce that video mode in the end. With the downstream model
you'll start from the SoC side with some video mode, and hope that the
end result will be what's needed by the monitor/panel.

As an example, DSI video mode on OMAP (although I think the same applies
to any other SoC with DSI). The pipeline we have is like this, with ->
showing the video data flow:

DISPC -> DSI -> Panel

The DISPC-DSI link is raw parallel RGB.

With the DSI transfer we can have either burst or non-burst mode. When
in non-burst mode, the DSI transfer looks pretty much like normal
parallel RGB, except that it's in serial format.

With burst mode, however, the DSI clock and horizontal timings can be
changed. The idea with burst mode is to increase the time spent in
horizontal blank period, thus reducing the time the DISPC and DSI blocks
need to be active. So we could, say, double the DSI clock, and increase
the horizontal blank accordingly.

What this means in the context of pipeline configuration is that the
DISPC needs to be configured properly to produce pixels for the DSI. If
the DSI clock and horizontal blank are increased, the DISPC pixel clock
and blank need to be increased also.

Even in non-burst mode the video mode programmed into DISPC is not always
quite the same as the resulting video mode received by the panel, because
DISPC uses a pixel clock and its transfer unit is a pixel, while DSI uses
the DSI bus clock and its transfer unit is a byte, and the two are not
always in a 1-to-1 match. So if one wants exactly certain DSI video mode
timings, this discrepancy needs to be taken into consideration when
programming the DISPC timings.
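
As a concrete example of that mismatch: with 24 bpp RGB over four DSI
lanes, one pixel is three bytes, so the per-lane byte clock runs at
24 / (8 * 4) = 3/4 of the pixel clock. A horizontal timing of, say, 10
pixels then corresponds to 7.5 byte clock cycles, which has to be rounded
one way or the other.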

Then again, as I said, the upstream model is not without its issues either.
Say we want a particular output from DSI, with burst mode. The DSI driver
should somehow know how high a pixel clock and what horizontal timings
DISPC can produce before it can calculate the video mode.

I guess the only way to avoid the issues is to add all this logic into
the pipeline controller? And if so, then it doesn't really matter if the
configuration is done with downstream or upstream model.

I fear a bit that adding this kind of logic into the controller means we
will add display entity specific things into the controllers. So if I
create a generic OMAP pipeline controller, which works for all the
current boards, and then somebody creates a new OMAP board with some
funny encoder, he'll have to create a new controller, almost like the
generic OMAP one, but with support for the funny encoder.

> Besides modifying the active configuration, the entities API will allow trying
> configurations without applying them to the hardware. As the configuration of a
> port possibly depends on the configuration of the other ports, trying a
> configuration must be done at the entity level instead of the port level. The implementation
> will be based on the concept of configuration store objects that will store the
> configuration of all ports for a given entity. Each entity will have a single
> active configuration store, and test configuration stores will be created
> dynamically to try a configuration on an entity. The get and set operations
> implemented by the entity will receive a configuration store pointer, and active
> and test code paths in entity drivers will be identical, except for applying the
> configuration to the hardware for the active code path.
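> 
> In other words, trying a configuration is planned to look roughly like
> the sketch below. None of these functions exist yet and all names are
> made up.
> 
> /* Sketch of the planned configuration store usage. */
> static int foo_try_config(struct display_entity *entity,
> 			  unsigned int sink_port, unsigned int source_port,
> 			  const struct display_entity_params *params,
> 			  struct display_entity_params *result)
> {
> 	struct display_entity_config_store *test;
> 	int ret;
> 
> 	/* A dynamically created test store records the configuration
> 	 * without applying it to the hardware.
> 	 */
> 	test = display_entity_config_store_create(entity);
> 	if (IS_ERR(test))
> 		return PTR_ERR(test);
> 
> 	/* The get and set operations receive the store pointer; the
> 	 * test and active code paths are identical, except that only
> 	 * the active store hits the hardware.
> 	 */
> 	ret = display_entity_set_params(entity, test, sink_port, params);
> 	if (ret == 0)
> 		ret = display_entity_get_params(entity, test, source_port,
> 						result);
> 
> 	display_entity_config_store_destroy(test);
> 	return ret;
> }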
> 
> 
> Video Operations
> ----------------
> 
> Video operations control the video stream state on entity ports. The only
> currently defined video operation is
> 
> - set_stream(), to start (in continuous or single-shot mode) the video stream
>   on an entity port
> 
> The call model for video operations differs from the control operations model
> described above. The set_stream() operation is called directly by downstream
> entities on upstream entities (from a video data bus point of view).
> Terminating entities in a pipeline (such as panels) will usually call the
> set_stream() operation in their set_state() handler, and intermediate entities
> will forward the set_stream() call upstream.
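> 
> A panel driver's set_state() handler could thus look roughly as follows;
> the source lookup helper, the port argument and the enum values are made
> up for the sake of the example.
> 
> /* Sketch only: a terminating entity controlling the stream on its
>  * upstream video source.
>  */
> static int foo_panel_set_state(struct display_entity *ent,
> 			       enum display_entity_state state)
> {
> 	/* The panel's single port (0) is a sink; retrieve the source
> 	 * entity it is linked to.
> 	 */
> 	struct display_entity *src = display_entity_get_source(ent, 0);
> 
> 	switch (state) {
> 	case DISPLAY_ENTITY_STATE_ON:
> 		foo_panel_enable(ent);
> 		return display_entity_set_stream(src, 0,
> 				DISPLAY_ENTITY_STREAM_CONTINUOUS);
> 	case DISPLAY_ENTITY_STATE_OFF:
> 		display_entity_set_stream(src, 0,
> 				DISPLAY_ENTITY_STREAM_STOPPED);
> 		foo_panel_disable(ent);
> 		return 0;
> 	default:
> 		return -EINVAL;
> 	}
> }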
> 
> 
> Integration
> -----------
> 
> The figure on page 8 describes how a panel driver, implemented using CDF as a
> display entity, interacts with the other components in the system. The use case
> is a simple pipeline made of a display controller and a panel.
> 
> The display controller driver receives control requests from userspace through
> DRM (or FBDEV) API calls. It processes the request and calls the panel driver
> through the CDF control operations API. The panel driver will then issue
> requests on its control bus (several possible control busses are shown on the
> figure, panel drivers typically use one of them only) and call video operations
> of the display controller on its left side to control the video stream.
> 
> 
> Registration and Notification
> -----------------------------
> 
> Due to possibly complex dependencies between entities we can't guarantee that
> all entities part of the display pipeline will have been successfully probed
> when the master display controller driver is probed. For instance a panel can
> be a child of the DBI or DSI bus controlled by the display device, or use a
> clock provided by that device. We can't defer the display device probe until
> the panel is probed and also defer the panel device probe until the display
> device is probed. For this reason we need a notification system that allows
> entities to register themselves with the CDF core, and display controller
> drivers to get notified when entities they need are available.

I don't understand this one. Do you have an example setup that shows the
problem?

I think we can just use EPROBE_DEFER here. A display entity requires
some resources, like GPIOs and regulators, and with CDF, video sources.
If those resources are not yet available, the driver can just return
EPROBE_DEFER.

Or is there some kind of two-way dependency in your model, when using
DBI? We don't have such in omapdss, so I may not quite understand the
issue or the need for the two-way dependency.

The only case where I can see a dependency problem is when two display
entities produce a resource, used by the other. So, say, we have an
encoder and a panel, and the panel produces a clock used by the encoder,
and the encoder produces a video signal used by the panel. I haven't
seen such setups in real hardware, though.

> The notification system has been completely redesigned in v3. This version is
> based on the V4L2 asynchronous probing notification code, with large parts of
> the code shamelessly copied. This is an interim solution to let me play with
> the notification code as needed by CDF. I'm not a fan of code duplication, and
> will work on merging the CDF and V4L2 implementations at a later stage, when
> CDF reaches a mature enough state.
> 
> CDF manages a pool of entities and a list of notifiers. Notifiers are
> registered by master display drivers with an array of entity match
> descriptors. When an entity is added to the CDF entities pool, all notifiers
> are searched for a match. If a match is found, the corresponding notifier is
> called to notify the master display driver.
> 
> The two currently supported match methods are platform match, which uses
> device names, and DT match, which uses DT node pointers. More match methods
> might be added later if needed. Two helper functions exist to build a notifier
> from a list of platform device names (in the non-DT case) or a DT
> representation of the display pipeline topology.
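> 
> For the non-DT case, a master display driver would thus do something like
> the following (all CDF names are again illustrative):
> 
> #include <linux/module.h>
> #include <linux/platform_device.h>
> #include <linux/slab.h>
> 
> struct foo_display {
> 	struct display_entity_notifier notifier;
> };
> 
> /* Called once every listed entity has been added to the CDF pool; this
>  * is where the driver would create the media controller links.
>  */
> static int foo_display_complete(struct display_entity_notifier *notifier)
> {
> 	return 0;
> }
> 
> /* Platform device names of the entities the pipeline needs. */
> static const char * const foo_entity_names[] = {
> 	"foo-encoder.0",
> 	"foo-panel.0",
> };
> 
> static int foo_display_probe(struct platform_device *pdev)
> {
> 	struct foo_display *disp;
> 
> 	disp = devm_kzalloc(&pdev->dev, sizeof(*disp), GFP_KERNEL);
> 	if (disp == NULL)
> 		return -ENOMEM;
> 
> 	/* Build match descriptors from the device names and register
> 	 * the notifier with the CDF core.
> 	 */
> 	display_entity_notifier_init_platform(&disp->notifier,
> 					      foo_entity_names,
> 					      ARRAY_SIZE(foo_entity_names));
> 	disp->notifier.complete = foo_display_complete;
> 
> 	return display_entity_notifier_register(&disp->notifier);
> }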
> 
> Once all required entities have been successfully found, the master display
> driver is responsible for creating media controller links between all entities
> in the pipeline. Two helper functions are also available to automate that
> process, one for the non-DT case and one for the DT case. Once again some
> DT-related code has been copied from the V4L2 DT code, I will work on merging
> both in a future version.
> 
> Note that notification brings a different issue after registration, as display
> controller and display entity drivers would take a reference to each other.
> Those circular references would make driver unloading impossible. One possible
> solution to this problem would be to simulate an unplug event for the display
> entity, to force the display driver to release the display entities it uses. We
> would need a userspace API for that though. Better solutions would of course
> be welcome.
> 
> 
> Device Tree Bindings
> --------------------
> 
> CDF entities device tree bindings are not documented yet. They describe both
> the graph topology and entity-specific information. The graph description uses
> the V4L2 DT bindings (which are actually not V4L2-specific) specified at
> Documentation/devicetree/bindings/media/video-interfaces.txt. Entity-specific
> information will be described in individual DT bindings documentation. The DPI
> panel driver uses the display timing bindings documented in
> Documentation/devicetree/bindings/video/display-timing.txt.
> 
> 
> Please note that most of the display entities on devices I own are just dumb
> panels with no control bus, and are thus not the best candidates to design a
> framework that needs to take complex panels' needs into account. This is why I
> hope to see you using CDF with your display devices and telling me what
> needs to be modified/improved/redesigned.
> 
> This patch set is split as follows:
> 
> - The first patch fixes a Kconfig namespace issue with the OMAP DSS panels. It
>   could be applied already independently of this series.
> - Patches 02/19 to 07/19 add the CDF core, including the notification system
>   and the graph and OF helpers.
> - Patch 08/19 adds a MIPI DBI bus. This isn't part of CDF strictly speaking,
>   but is needed for the DBI panel drivers.
> - Patches 09/19 to 13/19 add panel drivers, a VGA DAC driver and a VGA
>   connector driver.
> - Patches 14/19 to 18/19 add CDF-compliant reference board code and DT for the
>   Renesas Marzen and Lager boards.
> - Patch 19/19 ports the Renesas R-Car Display Unit driver to CDF.
> 
> The patches are available in my git tree at
> 
>     git://linuxtv.org/pinchartl/fbdev.git cdf/v3
>     http://git.linuxtv.org/pinchartl/fbdev.git/shortlog/refs/heads/cdf/v3
> 
> For convenience I've included modifications to the Renesas R-Car Display Unit
> driver to use the CDF. You can read the code to see how the driver uses CDF to
> interface with panels. Please note that the rcar-du-drm implementation is still
> work in progress, its set_stream operation implementation doesn't enable and
> disable the video stream yet as it should.
> 
> As already mentioned in v2, I will appreciate all reviews, comments,
> criticisms, ideas, remarks, ... If you can find a clever way to solve the
> cyclic references issue described above I'll buy you a beer at the next
> conference we will both attend. If you think the proposed solution is too
> complex, or too simple, I'm all ears, but I'll have more arguments this time
> than I had with v2 :-)

I'll tweak this to work with omapdss, like I did for the v2. Although
I'll probably remove at least the DBI bus and the notification system as
the first thing I do, unless you can convince me otherwise =).

 Tomi

