[PATCH v2 07/10] drm: omapdrm: Fix incorrect usage of the term 'cache coherency'

Laurent Pinchart laurent.pinchart at ideasonboard.com
Mon Apr 24 14:25:53 UTC 2017


Hi Tomi,

On Monday 24 Apr 2017 13:44:23 Tomi Valkeinen wrote:
> On 21/04/17 00:33, Laurent Pinchart wrote:
> > The is_cache_coherent() function currently returns true when the mapping
> > is not cache-coherent. This isn't a bug as such, as the callers interpret
> > cache-coherent as meaning that the driver has to handle the coherency
> > manually, but it is nonetheless very confusing. Fix it and add a bit
> > more documentation to explain how cached buffers are handled.
> > 
> > Signed-off-by: Laurent Pinchart <laurent.pinchart at ideasonboard.com>
> > ---
> > Changes since v1:
> > 
> > - Fixed one mistakenly inverted cache coherency check
> > ---
> > 
> >  drivers/gpu/drm/omapdrm/omap_gem.c | 22 +++++++++++++++-------
> >  1 file changed, 15 insertions(+), 7 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c
> > index d6d38e569fff..43b60a146edf 100644
> > --- a/drivers/gpu/drm/omapdrm/omap_gem.c
> > +++ b/drivers/gpu/drm/omapdrm/omap_gem.c
> > @@ -719,16 +719,21 @@ int omap_gem_roll(struct drm_gem_object *obj, uint32_t roll)
> >   * Memory Management & DMA Sync
> >   */
> > 
> > -/**
> > - * shmem buffers that are mapped cached can simulate coherency via using
> > - * page faulting to keep track of dirty pages
> > +/*
> > + * shmem buffers that are mapped cached are not coherent.
> 
> We use shmem only with TILER. Not all SoCs have TILER (and you can
> disable TILER on the SoCs that have it).
> 
> As this patch is more or less a cleanup, I'm not sure if the above
> should be part of this patch. But it makes me wonder: if we don't use
> shmem, we use dma_alloc_writecombine.

Only for OMAP_BO_SCANOUT buffers. Other buffers will still be allocated 
through shmem.
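
To illustrate, here is roughly how I read the allocation-time decision (a
simplified sketch, not the exact code from omap_gem_new(), flag and field
names from my reading of the driver):

	/* Simplified sketch of omap_gem_new(), for illustration only. */
	if ((flags & OMAP_BO_SCANOUT) && !priv->has_dmm) {
		/*
		 * Without DMM/TILER a scanout buffer must be physically
		 * contiguous, so it is allocated through the DMA API
		 * (dma_alloc_writecombine) and mapped write-combined.
		 */
		omap_obj->flags |= OMAP_BO_MEM_DMA_API;
	} else {
		/*
		 * All other buffers are backed by shmem, and go through
		 * DMM/TILER when the device needs a contiguous view.
		 */
		omap_obj->flags |= OMAP_BO_MEM_SHMEM;
	}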

> Do we still end up mapping it to the userspace as cached?

Well, in that case, we actually end up not mapping it at all if the user
requests a cached mapping :-) omap_gem_mmap_obj() will return with a
WARN_ON() and omap_gem_pin() (which used to be omap_gem_get_paddr()) will
return -EINVAL.
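
For reference, the guard I'm referring to looks roughly like this
(paraphrased from omap_gem_mmap_obj(), so don't take the exact shape
literally):

	/*
	 * Paraphrased: cached mappings need shmem backing (obj->filp),
	 * because the fault handler is what implements the dirty page
	 * tracking.
	 */
	if ((omap_obj->flags & OMAP_BO_CACHE_MASK) == OMAP_BO_CACHED) {
		if (WARN_ON(!obj->filp))
			return -EINVAL;
	}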

I believe we should disallow non-contiguous buffers at allocation time on 
SoCs without a DMM. As for cached mappings of contiguous buffers, the dirty 
page tracking implementation requires shmem at the moment. This could be 
fixed, but isn't trivial. Do you see value in doing so, or should cached 
mappings of contiguous buffers be disallowed for the time being?
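
Concretely, I mean something like this at allocation time (hypothetical,
just to make the proposal concrete):

	/*
	 * Without DMM/TILER the device can't remap scattered pages, so
	 * reject shmem-backed (non-contiguous) buffers up front instead
	 * of failing later at pin time.
	 */
	if (!priv->has_dmm && !(flags & OMAP_BO_SCANOUT))
		return NULL;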

> I think having two different caching types for the same memory is illegal.

It used to be documented as leading to unspecified behaviour in the ARM ARM 
(the ARM Architecture Reference Manual), but I think that has since been 
changed. If I'm not mistaken it doesn't cause any problem in practice, at 
least on the TI platforms I tested back when dealing with the cache on 
OMAP3 :-)

> > + *
> > + * We keep track of dirty pages using page faulting to perform cache management.
> > + * When a page is mapped to the CPU in read/write mode the device can't access
> > + * it and omap_obj->dma_addrs[i] is NULL. When a page is mapped to the device
> > + * the omap_obj->dma_addrs[i] is set to the DMA address, and the page is
> > + * unmapped from the CPU.
> >   */
> >  
> >  static inline bool is_cached_coherent(struct drm_gem_object *obj)
> >  {
> >  	struct omap_gem_object *omap_obj = to_omap_bo(obj);
> > 
> > -	return (omap_obj->flags & OMAP_BO_MEM_SHMEM) &&
> > -		((omap_obj->flags & OMAP_BO_CACHE_MASK) == OMAP_BO_CACHED);
> > +	return !((omap_obj->flags & OMAP_BO_MEM_SHMEM) &&
> > +		((omap_obj->flags & OMAP_BO_CACHE_MASK) == OMAP_BO_CACHED));
> 
> Regardless of whether we support non-shmem cached buffers, isn't the
> above overly complex? Why can't we just check for OMAP_BO_CACHED? Isn't
> that the only case where the buffer is not coherent? At the moment this
> function sounds more like "is_shmem_cached_coherent".

You're right. I propose fixing that in a separate follow-up patch to avoid 
any potential behaviour change in this one.
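
Something along these lines (untested, just to illustrate the intent of the
follow-up patch):

	static inline bool is_cached_coherent(struct drm_gem_object *obj)
	{
		struct omap_gem_object *omap_obj = to_omap_bo(obj);

		/*
		 * Only buffers mapped cached need manual cache management,
		 * however their backing memory was allocated.
		 */
		return (omap_obj->flags & OMAP_BO_CACHE_MASK) != OMAP_BO_CACHED;
	}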

-- 
Regards,

Laurent Pinchart


