Tegra DRM device tree bindings

Lucas Stach dev at lynxeye.de
Thu Jun 28 01:11:15 PDT 2012


Am Donnerstag, den 28.06.2012, 09:06 +0300 schrieb Hiroshi Doyu:
> Hi Lucas,
> 
> On Wed, 27 Jun 2012 17:59:55 +0200
> Lucas Stach <dev at lynxeye.de> wrote:
> 
> > > > > > Rather than introducing a new property, how about using
> > > > > > "coherent_pool=??M" in the kernel command line if necessary? I think
> > > > > > that this carveout size depends on the system usage/load.
> > > > > 
> > > > > I was hoping that we could get away with using the CMA and perhaps
> > > > > initialize it based on device tree content. I agree that the carveout
> > > > > size depends on the use-case, but I still think it makes sense to
> > > > > specify it on a per-board basis.
> > > > 
> > > > The DRM driver doesn't know whether it uses CMA or not, because
> > > > DRM only uses the DMA API.
> > > 
> > > So how is the DRM supposed to allocate buffers? Does it call the
> > > dma_alloc_from_contiguous() function to do that? I can see how it is
> > > used by arm_dma_ops but how does it end up in the driver?
> > > 
> > As I said before, the DMA API is not a good fit for graphics drivers.
> > Most of the DMA buffers used by graphics cores are long lived and big,
> > so we need a special pool to allocate from to avoid eating all
> > contiguous address space, as the DMA API does not provide shrinker
> > callbacks for clients using large amounts of memory.
> 
> For the contiguous address space shortage issue in the DMA API, I
> think the DMABUF framework can handle that?
> 
No, DMABUF is only about sharing DMA buffers between different hardware
blocks. It has no address space management at all.

All DRM drivers manage their address space on their own, either through
GEM or TTM. On the desktop the main pool for contiguous memory is either
card VRAM or an area of system memory set aside for use by the DRM
driver. Having a carveout area only for DRM use would be the same thing,
but it makes the split between graphics and system memory rather
inflexible. Currently we have no sane way of integrating DRM memory
managers with the normal DMA API. There was some discussion about DMA
pools and/or shrinkers for DMA clients, but it has not led to any
written code.
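To make the contrast concrete, here is a rough sketch of a driver
managing a carveout on its own with DRM's drm_mm range allocator,
bypassing the DMA API entirely. The base address, size, and helper
names are illustrative assumptions, not actual Tegra code, and the
exact drm_mm allocation calls have varied between kernel versions:

```c
/*
 * Illustrative sketch only: a driver-private carveout managed with
 * DRM's drm_mm range allocator instead of the generic DMA API.
 * Base, size and the missing error handling are placeholders.
 */
#include <drm/drm_mm.h>

static struct drm_mm carveout_mm;

/* Set up the allocator over a memory range set aside for the GPU. */
static void carveout_init(unsigned long base, unsigned long size)
{
	drm_mm_init(&carveout_mm, base, size);
}

/*
 * Each GEM/TTM buffer object then gets a contiguous chunk from this
 * range; the allocation entry point (drm_mm_get_block() and friends)
 * depends on the kernel version, so it is omitted here.
 */
```

The point of the sketch is that the allocator state lives entirely in
the driver, which is exactly why a generic kernel-wide pool cannot
reclaim from it.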

DMABUF only enables a limited integration between, for example, V4L and
DRM, so that they can share buffers. But we still have the situation
that DRM drivers allocate from their own pools, for the reasons
explained above, and do not use the standard DMA API.
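For reference, the importing side of such a buffer share looks roughly
like this. This is a sketch against the 2012-era in-kernel DMABUF API;
the function names are real, but the surrounding driver context and
error paths are simplified assumptions:

```c
/*
 * Sketch: importing a buffer another driver (e.g. V4L) exported via
 * DMABUF.  Note DMABUF only wraps an already-allocated buffer; the
 * allocation itself still came from the exporter's own pool.
 */
#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>

static int import_shared_buffer(struct device *dev, int fd)
{
	struct dma_buf *buf;
	struct dma_buf_attachment *attach;
	struct sg_table *sgt;

	buf = dma_buf_get(fd);			/* take a ref from the fd */
	if (IS_ERR(buf))
		return PTR_ERR(buf);

	attach = dma_buf_attach(buf, dev);	/* attach our device */
	if (IS_ERR(attach)) {
		dma_buf_put(buf);
		return PTR_ERR(attach);
	}

	/* get the buffer's backing pages as a scatterlist for DMA */
	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
	if (IS_ERR(sgt)) {
		dma_buf_detach(buf, attach);
		dma_buf_put(buf);
		return PTR_ERR(sgt);
	}

	/* ... program the hardware from the sg_table ... */
	return 0;
}
```

Nothing in this flow touches address space management; it is purely a
hand-off of an existing allocation, which is the limitation described
above.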



More information about the dri-devel mailing list