Tegra DRM device tree bindings

Mark Zhang markz at nvidia.com
Thu Jun 28 18:17:42 PDT 2012


> On Thursday, June 28, 2012 at 10:51 -0600, Stephen Warren wrote:
> > On 06/28/2012 05:12 AM, Thierry Reding wrote:
> > > On Wed, Jun 27, 2012 at 05:59:55PM +0200, Lucas Stach wrote:
> > >> On Wednesday, June 27, 2012 at 16:44 +0200, Thierry Reding wrote:
> > ...
> > >>> In the ideal case I would want to not have a carveout size at all.
> > >>> However there may be situations where you need to make sure some
> > >>> driver can allocate a given amount of memory. Having to specify
> > >>> this using a kernel command-line parameter is cumbersome because
> > >>> it may require changes to the bootloader or whatever. So if you
> > >>> know that a particular board always needs
> > >>> 128 MiB of carveout, then it makes sense to specify it on a
> > >>> per-board basis.
> > >>
> > >> If we go with CMA, this is a non-issue, as CMA allows the contig
> > >> area to be used for normal allocations and only purges them when it
> > >> really needs the space for contig allocs.
> > >
> > > CMA certainly sounds like the most simple approach. While it may not
> > > be suited for 3D graphics or multimedia processing later on, I think
> > > we could use it as a starting point to get basic framebuffer and X
> > > support up and running. We can always move to something more
> > > advanced like TTM later.
> >
> > I thought the whole purpose of CMA was to act as the infrastructure
> > to provide buffers to 3D, camera, etc. in particular allowing sharing
> > of buffers between them. In other words, isn't CMA the memory manager?
> > If there's some deficiency with CMA for 3D graphics, it seems like
> > that should be raised with those designing CMA. Or, am I way off base
> > with my expectations of CMA?
> >
> CMA is just a way of providing large physically contiguous memory blocks in a
> dynamic fashion. The problem CMA solves is: we have a system with a relatively
> low amount of sysmem (like 512 MB), and to ensure we can always get large
> contiguous buffers for use by the GPU or VIDEO blocks, we have to set aside a
> relatively large contiguous pool (like 128 MB). So we are stealing 128 MB of
> memory from the system whether or not we ever use it, which is bad.
> CMA allows us to say: I may need 128 MB of contiguous space, but the system is
> free to use it as normal memory as long as I don't really need it. If the
> space is really needed, CMA purges pages from the area and may even swap them
> out. So yes, CMA is a memory allocator for contiguous memory.
> 
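If I follow that correctly, a driver never talks to CMA directly; it just goes
through the standard DMA API and the CMA region backs the allocation
transparently. A minimal sketch of what I mean (the function name and context
are made up, not taken from any real driver):

#include <linux/device.h>
#include <linux/dma-mapping.h>

/*
 * Sketch only: request a large physically contiguous buffer through
 * the generic DMA API.  With CMA configured, the request can be
 * satisfied from the CMA region, whose pages serve as normal movable
 * memory until a contiguous allocation like this one needs them.
 */
static void *grab_contig_buffer(struct device *dev, size_t size,
                                dma_addr_t *dma_handle)
{
        return dma_alloc_coherent(dev, size, dma_handle, GFP_KERNEL);
}
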
> TTM, though, solves more advanced matters, like buffer synchronisation between
> the 3D and 2D blocks of the hardware, or syncing buffer access between GPU and
> CPU. One of the most interesting things about TTM is its ability to purge GPU
> DMA buffers to scattered sysmem, or even swap them out, while they are not
> currently used by the GPU. It then makes sure to move them back into
> contiguous space when the GPU really needs them, and fixes up the GPU command
> stream with the new buffer addresses.
> 
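So on the driver side this boils down to revalidating every buffer before the
GPU touches it, and TTM migrates it back into a reachable placement if it had
been evicted. Roughly like this (just a sketch; the exact ttm_bo_validate
argument list has changed across kernel versions):

#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_placement.h>

/*
 * Driver-side sketch: validating a buffer object into a GPU-visible
 * placement may evict other buffers to make room, and brings this
 * buffer back if it was previously purged to scattered sysmem or
 * swapped out.
 */
static int validate_for_gpu(struct ttm_buffer_object *bo,
                            struct ttm_placement *placement)
{
        int ret = ttm_bo_validate(bo, placement, false, false, false);

        if (ret)
                return ret;

        /* The buffer's offset is now stable until the next eviction;
         * this is where the driver would patch the new address into
         * its command stream. */
        return 0;
}
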
> IMHO the best solution would be to use CMA as a flexible replacement for the
> static carveout area and put TTM on top of it to solve the needs of graphics
> drivers. We certainly don't want to reinvent the wheel inside CMA. We already
> have solutions for all of those things in the kernel; we just have to glue
> them together in a sane way.
> 

That is a great explanation. So could you explain the relation between the
IOMMU API and TTM (or GEM)?
Terje said the DMABUF API sits on top of the IOMMU API, so normal device
drivers (such as DRM) can forget about the IOMMU APIs and just use the DMABUF
API. If so, I want to know: how are TTM/GEM and the IOMMU related? Or do
TTM/GEM use the DMABUF APIs, which in turn call the IOMMU API to do memory
allocation and mapping?
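
To make the question concrete, my current mental model of the importer side
looks like the sketch below: the importing driver never calls the IOMMU API
itself, and any IOMMU involvement happens underneath the DMA API when the
exporter builds the sg_table. Is that understanding correct?

#include <linux/dma-buf.h>
#include <linux/err.h>

/*
 * Sketch of my understanding: attach to an exported dma_buf and map it
 * for this device.  The returned sg_table is produced by the exporter
 * through the DMA API, and that is the layer where an IOMMU, if
 * present, comes into play -- not in TTM/GEM and not in the importer.
 */
static struct sg_table *import_for_device(struct dma_buf *buf,
                                          struct device *dev,
                                          struct dma_buf_attachment **attach)
{
        *attach = dma_buf_attach(buf, dev);
        if (IS_ERR(*attach))
                return ERR_CAST(*attach);

        return dma_buf_map_attachment(*attach, DMA_BIDIRECTIONAL);
}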


> Thanks,
> Lucas
> 

