[PATCH v6 4/5] dma-buf: heaps: Add CMA heap to dmabuf heaps

Andrew F. Davis afd at ti.com
Thu Jul 25 14:25:50 UTC 2019


On 7/25/19 10:11 AM, Christoph Hellwig wrote:
> On Thu, Jul 25, 2019 at 10:10:08AM -0400, Andrew F. Davis wrote:
>> Pages yes, but not "normal" pages from the kernel managed area.
>> page_to_pfn() will return bad values on the pages returned by this
>> allocator, and so will any of the kernel sync/map functions. Therefore
>> those operations cannot be common and need special per-heap handling.
> 
> Well, that means this thing is buggy and abuses the scatterlist API
> and we can't merge it anyway, so it is irrelevant.
> 

Since when do scatterlists need to contain only pages backed by kernel
virtual addresses? Device memory is stored in scatterlists all the time,
and dma_sync_sg_for_*() would fail just the same when cache ops were
attempted on those entries.
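
To make the point concrete, here is a rough sketch (not code from this
patch set; the names carveout_buffer and carveout_heap_* are made up for
illustration) of a heap backed by a device carveout. The scatterlist
itself is well formed, but the struct pages behind it are not ones the
generic dma_sync_sg_for_*() path can be assumed to handle, which is why
the heap would supply its own begin_cpu_access:

#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

struct carveout_buffer {
	struct sg_table table;	/* single contiguous entry */
	phys_addr_t paddr;	/* carveout physical base */
	size_t size;		/* assumed to fit in unsigned int here */
};

/* Build a scatterlist covering the carveout region. */
static int carveout_heap_init_sgt(struct carveout_buffer *buf)
{
	int ret;

	ret = sg_alloc_table(&buf->table, 1, GFP_KERNEL);
	if (ret)
		return ret;

	/*
	 * pfn_to_page() here may not give a "normal" kernel-managed page,
	 * which is why the common dma_sync_sg_for_*() helpers cannot be
	 * relied on for cache maintenance on this table.
	 */
	sg_set_page(buf->table.sgl, pfn_to_page(PHYS_PFN(buf->paddr)),
		    buf->size, 0);
	return 0;
}

/* Per-heap CPU access hook doing its own (possibly no) maintenance. */
static int carveout_heap_begin_cpu_access(struct dma_buf *dmabuf,
					  enum dma_data_direction dir)
{
	struct carveout_buffer *buf = dmabuf->priv;

	/*
	 * Instead of the shared dma_sync_sg_for_cpu() path, do whatever
	 * this heap's backing memory actually needs, e.g. nothing for an
	 * uncached carveout.
	 */
	(void)buf;
	(void)dir;
	return 0;
}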
