[PATCH v3 1/1] drm: msm: Replace dma_map_sg with dma_sync_sg*

Christoph Hellwig hch at lst.de
Thu Nov 29 18:33:34 UTC 2018


On Thu, Nov 29, 2018 at 06:09:05PM +0100, Daniel Vetter wrote:
> What kind of abuse do you expect? It could very well be that gpu folks
> call that "standard use case" ... At least on x86 with the i915 driver
> we pretty much rely on architectural guarantees for how cache flushes
> work very much. Down to userspace doing the cache flushing for
> mappings the kernel has set up.

Mostly the usual bypasses of the DMA API because people know better
(and with that I don't mean low-level IOMMU API users, but "creative"
direct mappings).

> > As for the buffer sharing: at least for the DMA API side I want to
> > move the current buffer sharing users away from dma_alloc_coherent
> > (and coherent dma_alloc_attrs users) and the remapping done in there
> > required for non-coherent architectures.  Instead I'd like to allocate
> > plain old pages, and then just dma map them for each device separately,
> > with DMA_ATTR_SKIP_CPU_SYNC passed for all but the first user to map
> > or last user to unmap.  On the iommu side it could probably work
> > similar.
> 
> I think this is what's often done. Except then there's also the issue
> of how to get at the cma allocator if your device needs something
> contiguous. There's a lot of that still going on in graphics/media.

Being able to dip into CMA and maybe iommu coalescing if we want to
get fancy is indeed the only reason for this API.  If we just wanted
to map pages we could already do that now with just a little bit
of boilerplate code (and quite a few drivers already carry that
boilerplate - so just adding this new API will remove tons of code).
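
Purely as a sketch of what I mean by that boilerplate (struct
shared_buf and the helpers below are made-up names for illustration,
not an existing API; only dma_map_sg_attrs()/dma_unmap_sg_attrs(),
sg_alloc_table_from_pages() and DMA_ATTR_SKIP_CPU_SYNC are the real
kernel interfaces, and locking/refcounting is omitted):

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

struct shared_buf {
	struct page	**pages;	/* plain old allocated pages */
	unsigned int	nr_pages;
	size_t		size;
	int		nr_mappers;	/* devices with a live mapping */
};

/* Each importing device builds its own sg_table from the same pages. */
static int shared_buf_map(struct shared_buf *buf, struct device *dev,
			  struct sg_table *sgt)
{
	/* only the first mapper pays for the CPU cache maintenance */
	unsigned long attrs = buf->nr_mappers ? DMA_ATTR_SKIP_CPU_SYNC : 0;
	int ret;

	ret = sg_alloc_table_from_pages(sgt, buf->pages, buf->nr_pages,
					0, buf->size, GFP_KERNEL);
	if (ret)
		return ret;

	if (!dma_map_sg_attrs(dev, sgt->sgl, sgt->orig_nents,
			      DMA_BIDIRECTIONAL, attrs)) {
		sg_free_table(sgt);
		return -ENOMEM;
	}

	buf->nr_mappers++;
	return 0;
}

static void shared_buf_unmap(struct shared_buf *buf, struct device *dev,
			     struct sg_table *sgt)
{
	/* only the last unmap syncs the data back for the CPU */
	unsigned long attrs = buf->nr_mappers > 1 ? DMA_ATTR_SKIP_CPU_SYNC : 0;

	dma_unmap_sg_attrs(dev, sgt->sgl, sgt->orig_nents,
			   DMA_BIDIRECTIONAL, attrs);
	sg_free_table(sgt);
	buf->nr_mappers--;
}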

