Tegra DRM device tree bindings

Hiroshi Doyu hdoyu at nvidia.com
Wed Jun 27 23:06:50 PDT 2012


Hi Lucas,

On Wed, 27 Jun 2012 17:59:55 +0200
Lucas Stach <dev at lynxeye.de> wrote:

> > > > > Rather than introducing a new property, how about using
> > > > > "coherent_pool=??M" in the kernel command line if necessary? I think
> > > > > that this carveout size depends on the system usage/load.
> > > > 
> > > > I was hoping that we could get away with using the CMA and perhaps
> > > > initialize it based on device tree content. I agree that the carveout
> > > > size depends on the use-case, but I still think it makes sense to
> > > > specify it on a per-board basis.
> > > 
> > > The DRM driver doesn't know whether it uses CMA or not, because DRM
> > > only uses the DMA API.
> > 
> > So how is DRM supposed to allocate buffers? Does it call the
> > dma_alloc_from_contiguous() function to do that? I can see how it is
> > used by arm_dma_ops, but how does it end up in the driver?
> > 
> As I said before, the DMA API is not a good fit for graphics drivers.
> Most of the DMA buffers used by graphics cores are long-lived and big,
> so we need a special pool to allocate from to avoid eating all of the
> contiguous address space, as the DMA API does not provide shrinker
> callbacks for clients using large amounts of memory.
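
For reference, my understanding of the current allocation path is
roughly the sketch below. The buffer object structure and the function
names are made up for illustration; only the DMA API calls are real.
On ARM with CMA enabled, arm_dma_ops routes these allocations to
dma_alloc_from_contiguous() internally, so the driver itself never
sees CMA directly:

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/mm.h>

struct tegra_bo {                       /* hypothetical buffer object */
        void            *vaddr;
        dma_addr_t      paddr;
        size_t          size;
};

static int tegra_bo_alloc(struct device *dev, struct tegra_bo *bo,
                          size_t size)
{
        bo->size = PAGE_ALIGN(size);

        /*
         * The driver only talks to the DMA API; whether the backing
         * memory comes from CMA is decided by the dma_map_ops behind it.
         */
        bo->vaddr = dma_alloc_writecombine(dev, bo->size, &bo->paddr,
                                           GFP_KERNEL | __GFP_NOWARN);
        if (!bo->vaddr)
                return -ENOMEM;

        return 0;
}

static void tegra_bo_free(struct device *dev, struct tegra_bo *bo)
{
        dma_free_writecombine(dev, bo->size, bo->vaddr, bo->paddr);
}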

As for the contiguous address space shortage in the DMA API, couldn't
the DMABUF framework handle that?

If so, is there a good example of DMABUF being used in DRM?
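
To make the question concrete, what I have in mind is roughly the
importer side sketched below. The surrounding function is hypothetical;
only the dma_buf_*() calls are the actual API from <linux/dma-buf.h>:

#include <linux/dma-buf.h>
#include <linux/dma-direction.h>
#include <linux/err.h>

/*
 * Import a buffer exported by another driver; the fd is the one
 * handed over from the exporter via userspace.
 */
static struct sg_table *import_buffer(struct device *dev, int fd,
                                      struct dma_buf **dmabuf_out,
                                      struct dma_buf_attachment **attach_out)
{
        struct dma_buf *dmabuf;
        struct dma_buf_attachment *attach;
        struct sg_table *sgt;

        dmabuf = dma_buf_get(fd);
        if (IS_ERR(dmabuf))
                return ERR_CAST(dmabuf);

        attach = dma_buf_attach(dmabuf, dev);
        if (IS_ERR(attach)) {
                dma_buf_put(dmabuf);
                return ERR_CAST(attach);
        }

        /* The exporter hands back a scatterlist mapped for this device. */
        sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
        if (IS_ERR(sgt)) {
                dma_buf_detach(dmabuf, attach);
                dma_buf_put(dmabuf);
                return ERR_CAST(sgt);
        }

        *dmabuf_out = dmabuf;
        *attach_out = attach;
        return sgt;
}

Since dma_buf_map_attachment() returns an sg_table, whether the backing
memory is physically contiguous is left to the exporter, which is why I
think it could help with the address space shortage mentioned above.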

