[PATCH v5 1/5] drm: xlnx: Xilinx DRM KMS module

Hyun Kwon hyun.kwon at xilinx.com
Fri Feb 23 02:23:18 UTC 2018


Hi Laurent,

On Thu, 2018-02-22 at 05:40:50 -0800, Laurent Pinchart wrote:
> Hi Hyun,
> 
> On Thursday, 22 February 2018 04:50:42 EET Hyun Kwon wrote:
> > On Wed, 2018-02-21 at 15:17:25 -0800, Laurent Pinchart wrote:
> > > On Wednesday, 7 February 2018 03:36:36 EET Hyun Kwon wrote:
> > >> Xilinx has various platforms for display, where users can create
> > >> using multiple IPs in the programmable FPGA fabric, or where
> > >> some hardened piepline is available on the chip. Furthermore,
> > > 
> > > s/piepline/pipeline/
> > > 
> > >> hardened pipeline can also interact with soft logics in FPGA.
> > >> 
> > >> The Xilinx DRM KMS module is to integrate multiple subdevices and
> > >> to represent the entire pipeline as a single DRM device. The module
> > >> includes helper (ex, framebuffer and gem helpers) and
> > >> glue logic (ex, crtc interface) functions.
> > >> 
> > >> Signed-off-by: Hyun Kwon <hyun.kwon at xilinx.com>
> > >> Acked-by: Daniel Vetter <daniel.vetter at ffwll.ch>
> > >> ---
> > >> v5
> > >> - Redefine xlnx_pipeline_init()
> > >> v4
> > >> - Fix a bug in of graph binding handling
> > >> - Remove vblank callbacks from xlnx_crtc
> > >> - Remove the dt binding. This module becomes more like a library.
> > >> - Rephrase the commit message
> > >> v3
> > >> - Add Laurent as a maintainer
> > >> - Fix multiple-reference on gem objects
> > >> v2
> > >> - Change the SPDX identifier format
> > >> - Merge patches(crtc, gem, fb) into single one
> > >> v2 of xlnx_drv
> > >> - Rename kms to display in xlnx_drv
> > >> - Replace some xlnx specific fb helper with common helpers in xlnx_drv
> > >> - Don't set the commit tail callback in xlnx_drv
> > >> - Support 'ports' graph binding in xlnx_drv
> > >> v2 of xlnx_fb
> > >> - Remove wrappers in xlnx_fb
> > >> - Replace some functions with drm core helpers in xlnx_fb
> > >> ---
> > >> ---
> > >> 
> > >>  MAINTAINERS                      |   9 +
> > >>  drivers/gpu/drm/Kconfig          |   2 +
> > >>  drivers/gpu/drm/Makefile         |   1 +
> > >>  drivers/gpu/drm/xlnx/Kconfig     |  12 +
> > >>  drivers/gpu/drm/xlnx/Makefile    |   2 +
> > >>  drivers/gpu/drm/xlnx/xlnx_crtc.c | 177 ++++++++++++++
> > >>  drivers/gpu/drm/xlnx/xlnx_crtc.h |  70 ++++++
> > >>  drivers/gpu/drm/xlnx/xlnx_drv.c  | 501 +++++++++++++++++++++++++++++++++
> > >>  drivers/gpu/drm/xlnx/xlnx_drv.h  |  33 +++
> > >>  drivers/gpu/drm/xlnx/xlnx_fb.c   | 298 +++++++++++++++++++++++
> > >>  drivers/gpu/drm/xlnx/xlnx_fb.h   |  33 +++
> > >>  drivers/gpu/drm/xlnx/xlnx_gem.c  |  47 ++++
> > >>  drivers/gpu/drm/xlnx/xlnx_gem.h  |  26 ++
> > >>  13 files changed, 1211 insertions(+)
> > >>  create mode 100644 drivers/gpu/drm/xlnx/Kconfig
> > >>  create mode 100644 drivers/gpu/drm/xlnx/Makefile
> > >>  create mode 100644 drivers/gpu/drm/xlnx/xlnx_crtc.c
> > >>  create mode 100644 drivers/gpu/drm/xlnx/xlnx_crtc.h
> > >>  create mode 100644 drivers/gpu/drm/xlnx/xlnx_drv.c
> > >>  create mode 100644 drivers/gpu/drm/xlnx/xlnx_drv.h
> > >>  create mode 100644 drivers/gpu/drm/xlnx/xlnx_fb.c
> > >>  create mode 100644 drivers/gpu/drm/xlnx/xlnx_fb.h
> > >>  create mode 100644 drivers/gpu/drm/xlnx/xlnx_gem.c
> > >>  create mode 100644 drivers/gpu/drm/xlnx/xlnx_gem.h
> 
> [snip]
> 
> > >> diff --git a/drivers/gpu/drm/xlnx/xlnx_crtc.c
> > >> b/drivers/gpu/drm/xlnx/xlnx_crtc.c new file mode 100644
> > >> index 0000000..de83905
> > >> --- /dev/null
> > >> +++ b/drivers/gpu/drm/xlnx/xlnx_crtc.c
> 
> [snip]
> 
> > >> +uint32_t xlnx_crtc_helper_get_format(struct xlnx_crtc_helper *helper)
> > > 
> > > You can use the u32 type within the kernel.
> > > 
> > >> +{
> > >> +	struct xlnx_crtc *crtc;
> > >> +	u32 format = 0, tmp;
> > >> +
> > >> +	list_for_each_entry(crtc, &helper->xlnx_crtcs, list) {
> > >> +		if (crtc->get_format) {
> > >> +			tmp = crtc->get_format(crtc);
> > >> +			if (format && format != tmp)
> > >> +				return 0;
> > >> +			format = tmp;
> > > 
> > > Same comments regarding the tmp variable and the list locking issue.
> > > 
> > >> +		}
> > >> +	}
> > >> +
> > >> +	return format;
> > > 
> > > Does this mean that your CRTCs support a single format each only ?
> 
> Does it ? :-)
>  
> > >> +}
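For reference, the "single format" behavior Laurent is asking about can be illustrated in plain C. This is a userspace sketch only; `fake_crtc` and `common_format` are stand-ins for the kernel's `struct xlnx_crtc` and `xlnx_crtc_helper_get_format()`, not real driver code:

```c
#include <stdint.h>
#include <stddef.h>

/* Userspace stand-in for struct xlnx_crtc: each CRTC reports one format. */
struct fake_crtc {
	uint32_t (*get_format)(const struct fake_crtc *crtc);
	uint32_t format;
};

static uint32_t fake_get_format(const struct fake_crtc *crtc)
{
	return crtc->format;
}

/*
 * Mirrors the helper's reduction: walk every CRTC and return the common
 * format, or 0 as soon as any two CRTCs disagree. This is why each CRTC
 * effectively contributes a single format to the pipeline.
 */
uint32_t common_format(const struct fake_crtc *crtcs, size_t n)
{
	uint32_t format = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		uint32_t tmp;

		if (!crtcs[i].get_format)
			continue;
		tmp = crtcs[i].get_format(&crtcs[i]);
		if (format && format != tmp)
			return 0;	/* formats conflict */
		format = tmp;
	}
	return format;
}
```

So yes, a pipeline-wide format only exists when every CRTC reports the same one.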
> 
> [snip]
> 
> > >> diff --git a/drivers/gpu/drm/xlnx/xlnx_drv.c
> > >> b/drivers/gpu/drm/xlnx/xlnx_drv.c new file mode 100644
> > >> index 0000000..8f0e357
> > >> --- /dev/null
> > >> +++ b/drivers/gpu/drm/xlnx/xlnx_drv.c
> 
> [snip]
> 
> > >> +static int xlnx_of_component_probe(struct device *master_dev,
> > >> +				   int (*compare_of)(struct device *, void *),
> > >> +				   const struct component_master_ops *m_ops)
> > > 
> > > As there's a single user of this function, do you need to pass compare_of
> > > as a parameter ? Couldn't you move xlnx_compare_of() above and use it
> > > directly ?
> > > 
> > >> +{
> > >> +	struct device *dev = master_dev->parent;
> > >> +	struct device_node *ep, *port, *remote, *parent;
> > >> +	struct component_match *match = NULL;
> > >> +	int i;
> > > 
> > > i is never negative, you can make it unsigned.
> > > 
> > >> +	if (!dev->of_node)
> > >> +		return -EINVAL;
> > >> +
> > >> +	component_match_add(master_dev, &match, compare_of, dev->of_node);
> > >> +
> > >> +	for (i = 0; ; i++) {
> > >> +		port = of_parse_phandle(dev->of_node, "ports", i);
> > > 
> > > I don't see the ports property documented in the DT bindings in patch 2/5.
> > > I assume that's because it has been removed in v4 as stated in the
> > > changelog :-) You shouldn't make use of it then. I've skipped reviewing
> > > the logic for the rest of this function as I need to see the
> > > corresponding DT bindings.
> > 
> > Good catch! I needed some discussion on this. :-) This logic is actually
> > intended for future extension and used for pipelines with multiple
> > components (ex, this driver with soft IPs, or multiple soft IPs), where the
> > dt describes its own topology of the pipeline. Some part of this logic is
> > actively used with downstream drivers, but I can simplify this logic
> > (remove logic not needed by this set) and revert back when it's actually
> > needed.
> > 
> > Then as you commented, the dt binding for this module is gone, so the
> > behavior of this logic is documented in the function description, treating
> > the function as a helper. So my expectation is that the corresponding
> > driver that calls this function should document the dt binding in its own.
> > And the DP driver in this set doesn't support such binding without any soft
> > IP drivers, thus it doesn't document it as of now.
> 
> I suspected so :-) I think the topology should indeed be described in DT in 
> that case. However, looking at the code (as that's my only source of 
> information given the lack of DT bindings documentation), it seems that you 
> plan to use a ports property that contains a list of phandles to components. I 
> believe we should instead use the OF graph bindings to model the pipeline in 
> DT, which would require rewriting this function. If you want to merge this 
> series with DT bindings for more complex pipelines I would thus drop this 
> code.

Sure, I'm fine with dropping this for now. For the record, this is based on
drm_of_component_probe(), whose bindings are structured so that the master
has phandles to the crtc nodes, while the graph bindings are used between
crtc nodes and encoder nodes. I believe those bindings make sense, so this
code follows them.
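For completeness, the OF graph form Laurent suggests would look roughly like the fragment below. Node names, labels, and addresses here are made up purely for illustration; the real bindings for mixed pipelines are still to be defined:

```dts
mixer: mixer@a0000000 {
	/* hypothetical soft-IP CRTC in the FPGA fabric */
	ports {
		#address-cells = <1>;
		#size-cells = <0>;

		port@0 {
			reg = <0>;
			mixer_out: endpoint {
				remote-endpoint = <&dp_in>;
			};
		};
	};
};

dp: display@fd4a0000 {
	/* hardened DP subsystem consuming the mixer output */
	port {
		dp_in: endpoint {
			remote-endpoint = <&mixer_out>;
		};
	};
};
```

With this form the master can walk endpoints via for_each_endpoint_of_node() and of_graph_get_remote_port_parent() instead of a flat "ports" phandle list.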

> 
> I have related comments on the ZynqMP DP subsystem bindings, I'll reply to 
> patch 2/5.
> 
> > >> +		if (!port)
> > >> +			break;
> > >> +
> > >> +		parent = port->parent;
> > >> +		if (!of_node_cmp(parent->name, "ports"))
> > >> +			parent = parent->parent;
> > >> +		parent = of_node_get(parent);
> > > 
> > > I've recently found out that dereferencing a device_node parent directly
> > > is prone to race conditions. You should use of_get_parent() instead (or
> > > of_get_next_parent() as appropriate to make device_node reference handling
> > > easier).
> > > 
> > >> +
> > >> +		if (!of_device_is_available(parent)) {
> > >> +			of_node_put(parent);
> > >> +			of_node_put(port);
> > >> +			continue;
> > >> +		}
> > >> +
> > >> +		component_match_add(master_dev, &match, compare_of, parent);
> > >> +		of_node_put(parent);
> > >> +		of_node_put(port);
> > >> +	}
> > >> +
> > >> +	parent = dev->of_node;
> > >> +	for (i = 0; ; i++) {
> > >> +		parent = of_node_get(parent);
> > >> +		if (!of_device_is_available(parent)) {
> > >> +			of_node_put(parent);
> > >> +			continue;
> > >> +		}
> > >> +
> > >> +		for_each_endpoint_of_node(parent, ep) {
> > >> +			remote = of_graph_get_remote_port_parent(ep);
> > >> +			if (!remote || !of_device_is_available(remote) ||
> > >> +			    remote == dev->of_node) {
> > >> +				of_node_put(remote);
> > >> +				continue;
> > >> +			} else if (!of_device_is_available(remote->parent)) {
> > >> +				dev_warn(dev, "parent dev of %s unavailable\n",
> > >> +					 remote->full_name);
> > >> +				of_node_put(remote);
> > >> +				continue;
> > >> +			}
> > >> +			component_match_add(master_dev, &match, compare_of,
> > >> +					    remote);
> > >> +			of_node_put(remote);
> > >> +		}
> > >> +		of_node_put(parent);
> > >> +
> > >> +		port = of_parse_phandle(dev->of_node, "ports", i);
> > >> +		if (!port)
> > >> +			break;
> > >> +
> > >> +		parent = port->parent;
> > >> +		if (!of_node_cmp(parent->name, "ports"))
> > >> +			parent = parent->parent;
> > >> +		of_node_put(port);
> > >> +	}
> > >> +
> > >> +	return component_master_add_with_match(master_dev, m_ops, match);
> > >> +}
> 
> [snip]
> 
> > >> +/* bitmap for master id */
> > >> +static u32 xlnx_master_ids = GENMASK(31, 0);
> > >> +
> > >> +/**
> > >> + * xilinx_drm_pipeline_init - Initialize the drm pipeline for the
> > >> device
> > >> + * @pdev: The platform device to initialize the drm pipeline device
> > >> + *
> > >> + * This function initializes the drm pipeline device, struct
> > >> drm_device,
> > >> + * on @pdev by creating a logical master platform device. The logical
> > >> platform
> > >> + * device acts as a master device to bind slave devices and represents
> > >> + * the entire pipeline.
> > > 
> > > I'm sorry to ask this question so late, but couldn't you do without that
> > > platform device ? I don't really see what it brings you.
> > 
> > It is fine without this in current set where there is only one component,
> > but it doesn't scale with complex pipelines where soft and hardened IP
> > drivers can be mixed. There can be cases where multiple master devices
> > exist in a single pipeline, and this is intended to unify and simplify the
> > model by creating a logical master and allowing all drivers to be bound as
> > regular components.
> 
> In order to understand how you intend this to work with more complex 
> pipelines, could you give me a real life example that can serve as a basis for 
> discussions ?

The DP driver in this set is a single-component pipeline. The device node
doesn't reference any external nodes:

	single ZynqMP DP component

Here the DP driver would create a logical master, and the master binds with
DP.

There can be soft IP pipelines with multiple components (each one below is
an independent pipeline):

	Soft IP Mixer component (crtc) <-> SDI tx component (encoder)

	Soft IP Mixer component (crtc) <-> DSI tx component (encoder)

	Soft IP simple DMA component (crtc) <-> DSI tx component (encoder)

	...

Here, any CRTC driver would create a logical master, and the master binds with
both crtc and encoder components accordingly.

Those two can be mixed, where output of the mixer is connected to DP input:

					ZynqMP DP component
					     |
	Soft IP Mixer component (crtc) -------

In this case, only one master needs to be created, binding with the mixer
component as well as the DP component.

So, my intention is to have a single logical master for one entire pipeline
and to make all other devices in the pipeline regular components that bind
to the master.
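On Laurent's earlier race question about the xlnx_master_ids bitmap: the ffs()-based allocation in xlnx_drm_pipeline_init() does need the test-and-clear done under a lock if concurrent callers are possible. A userspace sketch of the idea, with a pthread mutex standing in for a kernel mutex (function names are illustrative, not the actual driver API):

```c
#include <stdint.h>
#include <strings.h>	/* ffs() */
#include <pthread.h>

/* Stand-in for xlnx_master_ids: one bit set per free master id. */
static uint32_t master_ids = 0xffffffff;
static pthread_mutex_t master_lock = PTHREAD_MUTEX_INITIALIZER;

/* Allocate the lowest free id, or -1 if none; test-and-clear under the lock. */
int master_id_alloc(void)
{
	int id;

	pthread_mutex_lock(&master_lock);
	id = ffs(master_ids);		/* 1-based index of lowest set bit */
	if (id)
		master_ids &= ~(1u << (id - 1));
	pthread_mutex_unlock(&master_lock);

	return id - 1;			/* -1 when the bitmap is exhausted */
}

/* Return an id to the bitmap, as xlnx_drm_pipeline_exit() does. */
void master_id_free(int id)
{
	pthread_mutex_lock(&master_lock);
	master_ids |= 1u << id;
	pthread_mutex_unlock(&master_lock);
}
```

In the kernel an ida could serve the same purpose and would avoid open-coding the bitmap.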

Thanks,
-hyun

> 
> > >> + * The logical master uses the port bindings of the calling device to
> > >> + * figure out the pipeline topology.
> > >> + *
> > >> + * Return: the logical master platform device if the drm device is
> > >> initialized
> > >> + * on @pdev. Error code otherwise.
> > >> + */
> > >> +struct platform_device *xlnx_drm_pipeline_init(struct platform_device
> > >> *pdev)
> > >> +{
> > >> +	struct platform_device *master;
> > >> +	int id, ret;
> > >> +
> > > 
> > > Is there any risk of a race between concurrent calls to this function, or
> > > to this function and xlnx_drm_pipeline_exit() ?
> > > 
> > >> +	id = ffs(xlnx_master_ids);
> > >> +	if (!id)
> > >> +		return ERR_PTR(-ENOSPC);
> > >> +
> > >> +	master = platform_device_alloc("xlnx-drm", id - 1);
> > >> +	if (!master)
> > >> +		return ERR_PTR(-ENOMEM);
> > >> +
> > >> +	master->dev.parent = &pdev->dev;
> > >> +	ret = platform_device_add(master);
> > >> +	if (ret)
> > >> +		goto err_out;
> > > 
> > > As this is the only location where you jump to the error path I would
> > > inline it.
> > > 
> > >> +
> > >> +	WARN_ON(master->id != id - 1);
> > >> +	xlnx_master_ids &= ~BIT(master->id);
> > >> +	return master;
> > >> +
> > >> +err_out:
> > >> +	platform_device_unregister(master);
> > >> +	return ERR_PTR(ret);
> > >> +}
> > >> +EXPORT_SYMBOL_GPL(xlnx_drm_pipeline_init);
> > >> +
> > >> +/**
> > >> + * xilinx_drm_pipeline_exit - Release the drm pipeline for the device
> > >> + * @master: The master pipeline device to release
> > >> + *
> > >> + * Release the logical pipeline device returned by
> > >> xlnx_drm_pipeline_init().
> > >> + */
> > >> +void xlnx_drm_pipeline_exit(struct platform_device *master)
> > >> +{
> > >> +	xlnx_master_ids |= BIT(master->id);
> > >> +	platform_device_unregister(master);
> > >> +}
> > >> +EXPORT_SYMBOL_GPL(xlnx_drm_pipeline_exit);
> 
> [snip]
> 
> -- 
> Regards,
> 
> Laurent Pinchart
> 

