[Nouveau] [PATCH 3/6] drm/nouveau: hook up cache sync functions
Lucas Stach
dev at lynxeye.de
Wed Aug 28 09:58:37 PDT 2013
On Wednesday, 2013-08-28 at 12:43 -0400, Konrad Rzeszutek Wilk wrote:
> On Wed, Aug 28, 2013 at 02:00:47AM +0200, Lucas Stach wrote:
> > Signed-off-by: Lucas Stach <dev at lynxeye.de>
> > ---
> > drivers/gpu/drm/nouveau/nouveau_bo.c | 4 ++++
> > drivers/gpu/drm/nouveau/nouveau_gem.c | 5 +++++
> > 2 files changed, 9 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
> > index af20fba..f4a2eb9 100644
> > --- a/drivers/gpu/drm/nouveau/nouveau_bo.c
> > +++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
> > @@ -411,6 +411,10 @@ nouveau_bo_validate(struct nouveau_bo *nvbo, bool interruptible,
> > {
> > int ret;
> >
> > + if (nvbo->bo.ttm && nvbo->bo.ttm->caching_state == tt_cached)
>
> You don't want to do it also for tt_wc?
>
No, the point of using write-combined memory for BOs is precisely to
avoid the need for this cache sync. An uncached MMIO read from the
device should already flush out all write-combining buffers, and such a
read always happens when submitting a pushbuf.
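
For reference, the helper used here comes from an earlier patch in this
series; conceptually it just performs a per-page DMA sync over the TT's
pages, along the lines of the following sketch (details of the real
implementation may differ):

    #include <linux/dma-mapping.h>
    #include <drm/ttm/ttm_bo_driver.h>

    /*
     * Sketch only: make CPU writes to the TT pages visible to the
     * device before it starts reading them.
     */
    void ttm_dma_tt_cache_sync_for_device(struct ttm_dma_tt *ttm_dma,
                                          struct device *dev)
    {
            unsigned long i;

            for (i = 0; i < ttm_dma->ttm.num_pages; i++)
                    dma_sync_single_for_device(dev, ttm_dma->dma_address[i],
                                               PAGE_SIZE, DMA_TO_DEVICE);
    }
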
> > + ttm_dma_tt_cache_sync_for_device((struct ttm_dma_tt *)nvbo->bo.ttm,
> > + &nouveau_bdev(nvbo->bo.ttm->bdev)->dev->pdev->dev);
> > +
> > ret = ttm_bo_validate(&nvbo->bo, &nvbo->placement,
> > interruptible, no_wait_gpu);
> > if (ret)
> > diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
> > index 830cb7b..f632b92 100644
> > --- a/drivers/gpu/drm/nouveau/nouveau_gem.c
> > +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
> > @@ -901,6 +901,11 @@ nouveau_gem_ioctl_cpu_prep(struct drm_device *dev, void *data,
> > ret = ttm_bo_wait(&nvbo->bo, true, true, no_wait);
> > spin_unlock(&nvbo->bo.bdev->fence_lock);
> > drm_gem_object_unreference_unlocked(gem);
> > +
> > + if (!ret && nvbo->bo.ttm && nvbo->bo.ttm->caching_state == tt_cached)
>
> Ditto?
cpu_prep is used to make the kernel aware of a subsequent userspace
read. Write-combined mappings are essentially uncached from the read
perspective, so no extra cache maintenance is needed for them here.
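
To be clear about the expected usage: userspace calls the cpu_prep
ioctl on the BO before reading it through its CPU mapping, roughly like
this (fd/handle/mapping names are just placeholders, error handling
elided):

    #include <stdint.h>
    #include <string.h>
    #include <xf86drm.h>            /* drmIoctl() */
    #include <nouveau_drm.h>        /* struct drm_nouveau_gem_cpu_prep */

    /* Read back a BO's contents after the GPU is done writing it. */
    static void read_back_bo(int fd, uint32_t bo_handle,
                             const void *bo_cpu_map, void *dst, size_t size)
    {
            struct drm_nouveau_gem_cpu_prep req = {
                    .handle = bo_handle,
                    .flags  = 0,    /* read access, wait for the GPU */
            };

            /* Waits on the BO's fence and, with this patch, also does
             * the cache sync for cached TTM pages. */
            drmIoctl(fd, DRM_IOCTL_NOUVEAU_GEM_CPU_PREP, &req);

            memcpy(dst, bo_cpu_map, size);
    }
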
>
> > + ttm_dma_tt_cache_sync_for_cpu((struct ttm_dma_tt *)nvbo->bo.ttm,
> > + &dev->pdev->dev);
> > +
> > return ret;
> > }
> >
> > --
> > 1.8.3.1
> >