[PATCH] drm/prime: Ensure mmap offset is initialized
Daniel Vetter
daniel at ffwll.ch
Mon May 30 14:48:56 UTC 2022
On Mon, 30 May 2022 at 15:54, Rob Clark <robdclark at gmail.com> wrote:
>
> On Mon, May 30, 2022 at 12:26 AM Thomas Zimmermann <tzimmermann at suse.de> wrote:
> >
> > Hi
> >
> > Am 29.05.22 um 18:29 schrieb Rob Clark:
> > > From: Rob Clark <robdclark at chromium.org>
> > >
> > > If a GEM object is allocated, and then exported as a dma-buf fd which is
> > > mmap'd before or without the GEM buffer being directly mmap'd, the
> > > vma_node could be uninitialized. This leads to a situation where the CPU
> > > mapping is not correctly torn down in drm_vma_node_unmap().
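
For context, a minimal userspace sketch of the sequence described above
(the helper below and its error handling are illustrative only, not
taken from any real client):

#include <stdint.h>
#include <stddef.h>
#include <sys/mman.h>
#include <xf86drm.h>

/* Illustrative only: export a GEM handle as a dma-buf and mmap that fd
 * directly, without ever mmap'ing the GEM handle itself.  This is the
 * path on which drm_gem_prime_mmap() can see a vma_node for which no
 * mmap offset was ever allocated. */
static void *map_via_dmabuf(int drm_fd, uint32_t gem_handle, size_t size)
{
	int prime_fd;

	if (drmPrimeHandleToFD(drm_fd, gem_handle, DRM_CLOEXEC | DRM_RDWR,
			       &prime_fd))
		return MAP_FAILED;

	/* mmap the dma-buf fd; the driver's own mmap-offset ioctl never runs. */
	return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    prime_fd, 0);
}
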
> >
> > Which drivers are affected by this problem?
> >
> > I checked several drivers and most appear to be initializing the offset
> > during object construction, such as GEM SHMEM. [1] TTM-based drivers
> > also seem unaffected. [2]
> >
> > From a quick grep, only etnaviv, msm and omapdrm appear to be affected?
> > They only seem to run drm_gem_create_mmap_offset() from their
> > ioctl-handling code.
> >
> > If so, I'd say it's preferable to fix these drivers and put a
> > drm_WARN_ONCE() into drm_gem_prime_mmap().
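
A rough sketch of what such a check could look like (using
drm_vma_node_size() == 0 as the "offset never allocated" test is an
assumption for illustration, not necessarily the check an actual patch
would use):

#include <drm/drm_gem.h>
#include <drm/drm_print.h>
#include <drm/drm_vma_manager.h>

/* Sketch only: warn once if the dma-buf is mmap'd before the driver
 * ever allocated a fake mmap offset for the object. */
static void warn_if_mmap_offset_missing(struct drm_gem_object *obj)
{
	drm_WARN_ONCE(obj->dev, drm_vma_node_size(&obj->vma_node) == 0,
		      "dma-buf mmap'd before GEM mmap offset was allocated\n");
}
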
>
> That is good if fewer drivers are affected; however, I disagree with
> your proposal. At least for freedreno userspace, a lot of bo's never
> get mmap'd (either directly or via dmabuf), so we should not be
> allocating a mmap offset unnecessarily.
Does this actually matter in the grand scheme of things? We originally
allocated the mmap offset only on demand because userspace only had
32bit loff_t support and so simply couldn't mmap anything if the
offset ended up above 32bit (even if there was still va space
available). But those days are long gone (about 10 years or so) and
the allocation overhead for an mmap offset is tiny. So I think, unless
you can benchmark an impact, allocating it at bo alloc seems like the
simplest design overall, and hence what we should be doing. And if the
vma offset allocation ever gets too slow due to fragmentation, we can
lift the hole tree from i915 into drm_mm and the job should be done.
At that point we could also allocate the offset unconditionally in the
gem_init function and be done with it.
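
For illustration, a minimal sketch of that approach using the generic
GEM helpers (foo_gem_create() and its object struct are made up, not
from any particular driver):

#include <linux/err.h>
#include <linux/slab.h>
#include <drm/drm_device.h>
#include <drm/drm_gem.h>

struct foo_gem_object {
	struct drm_gem_object base;
};

static struct foo_gem_object *foo_gem_create(struct drm_device *dev,
					     size_t size)
{
	struct foo_gem_object *obj;
	int ret;

	obj = kzalloc(sizeof(*obj), GFP_KERNEL);
	if (!obj)
		return ERR_PTR(-ENOMEM);

	ret = drm_gem_object_init(dev, &obj->base, size);
	if (ret)
		goto err_free;

	/* Allocate the fake mmap offset up front so it is valid regardless
	 * of whether the buffer is later mmap'd directly or via dma-buf. */
	ret = drm_gem_create_mmap_offset(&obj->base);
	if (ret)
		goto err_release;

	return obj;

err_release:
	drm_gem_object_release(&obj->base);
err_free:
	kfree(obj);
	return ERR_PTR(ret);
}
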
Iow I concur with Thomas here: unless there's hard data to the
contrary, simplicity imo trumps here.
-Daniel
>
> BR,
> -R
>
> > Best regards
> > Thomas
> >
> > [1]
> > https://elixir.bootlin.com/linux/v5.18/source/drivers/gpu/drm/drm_gem_shmem_helper.c#L85
> > [2]
> > https://elixir.bootlin.com/linux/v5.18/source/drivers/gpu/drm/ttm/ttm_bo.c#L1002
> >
> > >
> > > Fixes: e5516553999f ("drm: call drm_gem_object_funcs.mmap with fake offset")
> > > Signed-off-by: Rob Clark <robdclark at chromium.org>
> > > ---
> > > Note, it's possible the issue existed in some related form prior to the
> > > commit tagged with Fixes.
> > >
> > > drivers/gpu/drm/drm_prime.c | 5 +++++
> > > 1 file changed, 5 insertions(+)
> > >
> > > diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> > > index e3f09f18110c..849eea154dfc 100644
> > > --- a/drivers/gpu/drm/drm_prime.c
> > > +++ b/drivers/gpu/drm/drm_prime.c
> > > @@ -716,6 +716,11 @@ int drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> > > struct file *fil;
> > > int ret;
> > >
> > > + /* Ensure that the vma_node is initialized: */
> > > + ret = drm_gem_create_mmap_offset(obj);
> > > + if (ret)
> > > + return ret;
> > > +
> > > /* Add the fake offset */
> > > vma->vm_pgoff += drm_vma_node_start(&obj->vma_node);
> > >
> >
> > --
> > Thomas Zimmermann
> > Graphics Driver Developer
> > SUSE Software Solutions Germany GmbH
> > Maxfeldstr. 5, 90409 Nürnberg, Germany
> > (HRB 36809, AG Nürnberg)
> > Geschäftsführer: Ivo Totev
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch