[RFC PATCH v2] android: ion: How to properly clean caches for uncached allocations

Brian Starkey Brian.Starkey at arm.com
Thu Nov 29 14:14:05 UTC 2018


On Wed, Nov 28, 2018 at 11:03:37PM -0800, Liam Mark wrote:
> On Wed, 28 Nov 2018, Brian Starkey wrote:
> > On Tue, Nov 27, 2018 at 10:46:07PM -0800, Liam Mark wrote:
> > > On Tue, 27 Nov 2018, Brian Starkey wrote:
> > > > On Mon, Nov 26, 2018 at 08:59:44PM -0800, Liam Mark wrote:
> > > > > On Tue, 20 Nov 2018, Brian Starkey wrote:

[snip]

> > > > 
> > > 
> > > Sounds like you are suggesting using carveouts to support uncached?
> > > 
> > 
> > No, I'm just saying that ion can't give out uncached _CPU_ mappings
> > for pages which are already mapped on the CPU as cached.

Probably this should have been: s/can't/shouldn't/

> > 
> 
> Okay then I guess I am not clear on where you would get this memory 
> which doesn't have a cached kernel mapping.
> It sounded like you wanted to define sections of memory in the DT as not 
> mapped in the kernel and then hand this memory to 
> dma_declare_coherent_memory (so that it can be managed) and then use an 
> ION heap as the allocator.  If the memory was defined this way it sounded 
> a lot like a carveout. But I guess you have some thoughts on how this 
> memory which doesn't have a kernel mapping can be made available for general
> use (for example available in buddy)?
> 
> Perhaps you were thinking of dynamically removing the kernel mappings 
> before handing the memory out as uncached, but this would have a general 
> system performance impact, as this memory could come from anywhere, so we 
> would quickly lose our 1GB block mappings (and likely many of our 2MB 
> block mappings as well).
> 

All I'm saying, with respect to non-cached memory and mappings, is
this:

I don't think ion should create non-cached CPU mappings of memory
which is mapped in the kernel as cached.

By extension, that means that in my opinion, the only way userspace
should be able to get a non-cached mapping is by allocating from a
carveout.

However, I don't think this should be what we do in our complicated
media-heavy systems - carveouts are clearly impractical, as is
removing memory from the kernel map. What I think we should do is
always do CPU access via cached mappings, for memory which is mapped
in the kernel as cached.

[snip]

> > 
> 
> I am not sure I am properly understanding, as this is what my V2 patch 
> does; then, when it gets an opportunity, it allows the memory to be 
> re-mapped as uncached.

It's the remapping as uncached part which I'm not so keen on. It just
seems rather fragile to have mappings for the same memory with
different attributes around.

> 
> Or are you perhaps suggesting that if the memory is allocated from a 
> cached region then it always remains as cached, so only provide uncached 
> if it was allocated from an uncached region? If so, I view all memory 
> available to the ION system heap for uncached allocations as having a 
> cached mapping (since it is all part of the kernel logical mappings), so I 
> can't see how this would ever be able to support uncached allocations.

Yeah, that's exactly what I'm saying. The system heap should not
allow uncached allocations, and memory allocated from the system heap
should always be mapped as cached for CPU accesses.

Non-cached allocations would only be allowed from carveouts (but as
discussed, I don't think carveouts are a practical solution for the
general case).

The summary of my proposal is that instead of focussing on getting
non-cached allocations, we should make cached allocations work better,
so that non-cached aliases of cached memory aren't required.

[snip]

> 
> Unfortunately, if we are only using cached mappings, it isn't only the 
> first time you dma map the buffer that you need to do cache maintenance; 
> you almost always need to do it, because you don't know what CPU access 
> happened (or will happen) without a device.

I think you can always know if the CPU _has_ accessed the buffer - in
begin_cpu_access, ion can set a flag, which it checks in map_dma_buf.
If that flag says it's been touched, then a cache clean is needed.

Of course you can't predict the future - there's no way to know if the
CPU _will_ access the buffer - which I think is what you're getting at
below.
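
Just to illustrate the idea, here's a rough, untested sketch against
the current ion code - "cpu_touched" is a hypothetical new ion_buffer
field, protected by buffer->lock, not something that exists today:

static int ion_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
					enum dma_data_direction direction)
{
	struct ion_buffer *buffer = dmabuf->priv;

	mutex_lock(&buffer->lock);
	/* Record that the CPU is about to touch the buffer */
	buffer->cpu_touched = true;
	mutex_unlock(&buffer->lock);

	/* ... existing sync-for-cpu of attached devices goes here ... */

	return 0;
}

static struct sg_table *ion_map_dma_buf(struct dma_buf_attachment *attachment,
					enum dma_data_direction direction)
{
	struct ion_dma_buf_attachment *a = attachment->priv;
	struct ion_buffer *buffer = attachment->dmabuf->priv;
	struct sg_table *table = a->table;
	unsigned long attrs = 0;

	/* If the CPU hasn't touched the buffer, the clean is unnecessary */
	if (!buffer->cpu_touched)
		attrs |= DMA_ATTR_SKIP_CPU_SYNC;

	if (!dma_map_sg_attrs(attachment->dev, table->sgl, table->nents,
			      direction, attrs))
		return ERR_PTR(-ENOMEM);

	return table;
}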

> Explained more below.
> 
> > > But with this cached memory you get poor performance, because you are 
> > > frequently doing cache maintenance unnecessarily, since there *could* be 
> > > CPU access.
> > > 
> > > The reason we want to use uncached allocations, with uncached mappings, is 
> > > to avoid all this unnecessary cache maintenance.
> > > 
> > 
> > OK I think this is the key - you don't actually care whether the
> > mappings are non-cached, you just don't want to pay a sync penalty if
> > the CPU never touched the buffer.
> > 
> > In that case, then to me the right thing to do is make ion use
> > dma_map_sg_attrs(..., DMA_ATTR_SKIP_CPU_SYNC) in ion_map_dma_buf(), if
> > it knows that the CPU hasn't touched the buffer (which it does - from
> > {begin,end}_cpu_access).
> > 
> 
> Unfortunately that isn't the case we are trying to optimize for. We 
> aren't trying to optimize for the case where the CPU *never* touches the 
> buffer; we are trying to optimize for the case where the CPU may *rarely* 
> touch the buffer.
> 
> If a client allocates cached memory, the driver calling dma map and dma 
> unmap has no way of knowing if, at some point further down the pipeline, 
> there will be some userspace module which will attempt to do some kind 
> of CPU access (for example, image library post-processing). This userspace 
> module will call the required DMA_BUF_IOCTL_SYNC IOCTLs; however, there 
> may no longer be a device attached, therefore these calls won't 
> necessarily do the appropriate cache maintenance.
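
(For reference, the userspace sequence being described there is the
standard dma-buf sync ioctl bracketing - something like the below,
where dmabuf_fd is assumed to be the fd of the exported buffer:

#include <sys/ioctl.h>
#include <linux/dma-buf.h>

struct dma_buf_sync sync = {
	.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW,
};
ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
/* ... CPU access through the mmap()ed buffer ... */
sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW;
ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);

and it's exactly these calls which can't do the right cache
maintenance when no device is attached.)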

(as a slight aside: Is cache maintenance really slower than the CPU
running image processing algorithms on a non-cached mapping?
Intuitively it seems that doing processing on a cached mapping with
cache maintenance should far outperform direct non-cached access. I
understand that this isn't the real issue, and what you really care
about is being able to do device<->device operation without paying a
cache maintenance penalty. I'm just surprised that CPU processing on
non-cached mappings isn't *so bad* that it makes non-cached CPU access
totally untenable)

> 
> So what this means is that if a cached buffer is used, you at least 
> always have to do a cache invalidation when dma unmapping (from a 
> non-io-coherent device which did a write); otherwise the CPU could 
> attempt to read that data using a cached mapping and end up reading a 
> stale cache line (for example, one acquired through speculative 
> access).

OK now I'm with you. As you say, before CPU access you would need to
invalidate, and the only way that's currently possible (at least on
arm64) is in unmap_dma_buf - so you're paying an invalidate penalty on
every unmap. That is the only penalty though; there's no need to do a
clean on map_dma_buf unless the CPU really did touch it.

With your patch I think you are still doing that invalidation, right?
Is the performance of this patch OK, or will you follow up with
skipping the invalidation?

It does seem a bit strange to me that begin_cpu_access() with no
device won't even invalidate the CPU cache. I had a bit of a poke
around, and that seems to be relatively specific to the arm64 dummy
ops. I'd have thought defaulting to dma_noncoherent_ops would work
better, but I don't know the specifics of those decisions.

If it were possible to skip the invalidation in unmap_dma_buf, would
cached mappings work for you? I think it should even be faster (in all
cases) than any non-cached approach.
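
For illustration, on the ion side that might look something like the
below (untested - "cpu_access_expected" is a stand-in for whatever
mechanism would tell us it's safe to skip, which is of course the
part we don't have today):

static void ion_unmap_dma_buf(struct dma_buf_attachment *attachment,
			      struct sg_table *table,
			      enum dma_data_direction direction)
{
	unsigned long attrs = 0;

	/*
	 * Skipping the invalidate is only safe if we know the CPU
	 * won't touch the buffer before the next begin_cpu_access -
	 * exactly the thing the exporter can't currently predict.
	 */
	if (!cpu_access_expected)
		attrs |= DMA_ATTR_SKIP_CPU_SYNC;

	dma_unmap_sg_attrs(attachment->dev, table->sgl, table->nents,
			   direction, attrs);
}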

Cheers,
-Brian

> 
> This frequent unnecessary cache maintenance adds a significant performance 
> impact, and that is why we use uncached memory: it allows us to skip all 
> this cache maintenance.
> Basically, your driver can't predict the future, so it has to play it safe 
> when cached ION buffers are involved.
> 
> Liam
> 
> Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
> a Linux Foundation Collaborative Project

