[Intel-gfx] [PATCH 3/5] drm/i915: fall through pwrite_gtt_slow to the shmem slow path
Daniel Vetter
daniel at ffwll.ch
Tue Sep 20 13:35:56 CEST 2011
On Sat, Sep 17, 2011 at 09:57:05PM +0100, Chris Wilson wrote:
> On Sat, 17 Sep 2011 20:55:47 +0200, Daniel Vetter <daniel.vetter at ffwll.ch> wrote:
> > The gtt_pwrite slowpath grabs the userspace memory with
> > get_user_pages. This will not work for non-page backed memory, like a
> > gtt mmapped gem object. Hence fall through to the shmem paths if we hit
> > -EFAULT in the gtt paths.
> >
> > Now the shmem paths have exactly the same problem, but this way we
> > only need to rearrange the code in one write path.
> >
> > Signed-off-by: Daniel Vetter <daniel.vetter at ffwll.ch>
> > ---
> > drivers/gpu/drm/i915/i915_gem.c | 24 +++++++++++++++---------
> > 1 files changed, 15 insertions(+), 9 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > index e0475ca..9c28d48 100644
> > --- a/drivers/gpu/drm/i915/i915_gem.c
> > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > @@ -1018,18 +1018,24 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
> >
> > out_unpin:
> > i915_gem_object_unpin(obj);
> > - } else {
> > - ret = i915_gem_object_set_to_cpu_domain(obj, 1);
> > - if (ret)
> > - goto out;
> >
> > - ret = -EFAULT;
> > - if (!i915_gem_object_needs_bit17_swizzle(obj))
> > - ret = i915_gem_shmem_pwrite_fast(dev, obj, args, file);
> > - if (ret == -EFAULT)
> > - ret = i915_gem_shmem_pwrite_slow(dev, obj, args, file);
> > + if (ret != -EFAULT)
> > + goto out;
> > + /* Fall through to the shmfs paths because the gtt paths might
> > + * fail with non-page-backed user pointers (e.g. gtt mappings
> > + * when moving data between textures). */
> > }
>
> I'd actually prefer to have an
> if (ret == -EFAULT) {
> and comments here in order to mark the shmem paths as a logical unit to the
> reader. Better yet would be to move the two paths (GTT/CPU) into their own
> routines:
>
> ret = -EFAULT;
> if (can_gtt)
> ret = i915_gem_gtt_pwrite();
> /* Fall back to the slow shmem paths if we need to handle
> * GTT mapped user pointers, or otherwise cannot do the upload in
> * place.
> */
> if (ret == -EFAULT)
> ret = i915_gem_cpu_pwrite();
With my full set of pwrite/pread patches this is essentially what the
code will look like, with the minor addition of correct phys_object
handling (which I've fumbled in this patch here).
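
Roughly this shape (just a sketch, with the pin/domain handling folded
into the gtt helper for brevity; i915_gem_shmem_pwrite stands for the
unified shmem fast/slow helper from my series):

	if (obj->phys_obj) {
		/* phys objects have their own copy routine and must not
		 * fall through to the shmem paths */
		ret = i915_gem_phys_pwrite(dev, obj, args, file);
		goto out;
	}

	ret = -EFAULT;
	if (obj->gtt_space &&
	    obj->base.write_domain != I915_GEM_DOMAIN_CPU)
		ret = i915_gem_gtt_pwrite_fast(dev, obj, args, file);

	/* the gtt path can hit -EFAULT on non-page-backed user pointers
	 * (e.g. gtt mmaps), so fall back to the shmem path in that case */
	if (ret == -EFAULT)
		ret = i915_gem_shmem_pwrite(dev, obj, args, file);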
> And whilst you are here, can you incorporate this patch?
> else if (obj->gtt_space &&
> + obj->map_and_fenceable &&
> obj->base.write_domain != I915_GEM_DOMAIN_CPU) {
> ret = i915_gem_object_pin(obj, 0, true);
I'm not sure this is a good idea. On !llc machines gtt_pwrite is much
faster, so we want to move objects into the mappable part of the gtt to
benefit from that. If we skip the gtt path for objects that aren't
currently mappable, they'll just stay wherever they are.
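
Concretely it's this pin call (current signature) that pulls a
non-mappable object into the mappable gtt before the copy:

	/* map_and_fenceable = true: rebind into the mappable part of the
	 * gtt if the object isn't there yet, so that the fast gtt pwrite
	 * path can be used - a real win on !llc machines */
	ret = i915_gem_object_pin(obj, 0, true);

Your obj->map_and_fenceable check guards the branch containing that pin,
so the migration would no longer happen.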
Otoh our current allocator doesn't even try to separate mappable and
!mappable allocations, so there's a decent chance they unnecessarily
fight with each other. I'll fix up the quick hack I've sent you to test
this further.
-Daniel
--
Daniel Vetter
Mail: daniel at ffwll.ch
Mobile: +41 (0)79 365 57 48