[PATCH 1/2] drm: add cache support for arm64

Will Deacon will at kernel.org
Thu Aug 8 10:32:57 UTC 2019


On Thu, Aug 08, 2019 at 11:20:53AM +0100, Mark Rutland wrote:
> On Thu, Aug 08, 2019 at 09:58:27AM +0200, Christoph Hellwig wrote:
> > On Wed, Aug 07, 2019 at 05:49:59PM +0100, Mark Rutland wrote:
> > > For arm64, we can tear down portions of the linear map, but that has to
> > > be done explicitly, and this is only possible when using rodata_full. If
> > > not using rodata_full, it is not possible to dynamically tear down the
> > > cacheable alias.
> > 
> > Interesting.  For this or next merge window I plan to add support to the
> > generic DMA code to remap pages as uncachable in place based on the
> > openrisc code.  As far as I can tell the requirement for that is
> > basically just that the kernel direct mapping doesn't use PMD or bigger
> > mapping so that it supports changing protection bits on a per-PTE basis.
> > Is that the case with arm64 + rodata_full?
> 
> Yes, with the added case that on arm64 we can also have contiguous
> entries at the PTE level, which we also have to disable.
> 
> Our kernel page table creation code does that for rodata_full or
> DEBUG_PAGEALLOC. See arch/arm64/mm/mmu.c, in map_mem(), where we pass
> NO_{BLOCK,CONT}_MAPPINGS down to our pagetable creation code.

FWIW, we made rodata_full the default a couple of releases ago, so if
solving the cacheable alias for non-cacheable DMA buffers requires this
to be present, then we could probably just refuse to probe non-coherent
DMA-capable devices on systems where rodata_full has been disabled.

Will
