[Intel-gfx] [PATCH v5 4/5] drm: Add library for shmem backed GEM objects

Rob Herring robh at kernel.org
Mon Jan 28 22:01:40 UTC 2019


On Mon, Jan 28, 2019 at 3:26 PM Noralf Trønnes <noralf at tronnes.org> wrote:
>
>
>
> Den 28.01.2019 21.57, skrev Rob Herring:
> > On Sun, Dec 2, 2018 at 9:59 AM Noralf Trønnes <noralf at tronnes.org> wrote:
> >>
> >>
> >> Den 30.11.2018 00.58, skrev Eric Anholt:
> >>> Daniel Vetter <daniel at ffwll.ch> writes:
> >>>
> >>>> On Wed, Nov 28, 2018 at 01:52:56PM -0800, Eric Anholt wrote:
> >>>>> Daniel Vetter <daniel at ffwll.ch> writes:
> >>>>>
> >>>>>> On Tue, Nov 27, 2018 at 12:38:44PM -0800, Eric Anholt wrote:
> >>>>>>> Daniel Vetter <daniel at ffwll.ch> writes:
> >>>>>>>
> >>>>>>>> On Mon, Nov 26, 2018 at 04:36:21PM -0800, Eric Anholt wrote:
> >>>>>>>>> Noralf Trønnes <noralf at tronnes.org> writes:
> >>>>>>>>>> +static void drm_gem_shmem_vm_close(struct vm_area_struct *vma)
> >>>>>>>>>> +{
> >>>>>>>>>> +      struct drm_gem_object *obj = vma->vm_private_data;
> >>>>>>>>>> +      struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> >>>>>>>>>> +
> >>>>>>>>>> +      drm_gem_shmem_put_pages(shmem);
> >>>>>>>>>> +      drm_gem_vm_close(vma);
> >>>>>>>>>> +}
> >>>>>>>>>> +
> >>>>>>>>>> +const struct vm_operations_struct drm_gem_shmem_vm_ops = {
> >>>>>>>>>> +      .fault = drm_gem_shmem_fault,
> >>>>>>>>>> +      .open = drm_gem_vm_open,
> >>>>>>>>>> +      .close = drm_gem_shmem_vm_close,
> >>>>>>>>>> +};
> >>>>>>>>>> +EXPORT_SYMBOL_GPL(drm_gem_shmem_vm_ops);
> >>>>>>>>> I just saw a warning from drm_gem_shmem_put_pages() for
> >>>>>>>>> !shmem->pages_use_count -- I think drm_gem_vm_open() needs to
> >>>>>>>>> drm_gem_shmem_get_pages().
> >>>>>>>> Yeah we need a drm_gem_shmem_vm_open here.
> >>>>>>> Adding one of those fixed my refcounting issues, so I've sent out a v6
> >>>>>>> with it.
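
FWIW, the fix amounts to a mirror-image open handler that takes the
pages reference which vm_close drops. Roughly this (a sketch based on
the helpers quoted above, not necessarily the exact v6 code):

static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
{
        struct drm_gem_object *obj = vma->vm_private_data;
        struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
        int ret;

        /* Each VMA holds its own pages reference; vm_close drops it,
         * so a duplicated VMA (e.g. on fork) must take one here.
         */
        ret = drm_gem_shmem_get_pages(shmem);
        WARN_ON_ONCE(ret);

        drm_gem_vm_open(vma);
}

with .open pointing at it in drm_gem_shmem_vm_ops instead of the plain
drm_gem_vm_open.
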
> >>>>>> Just realized that I've reviewed this patch already and spotted
> >>>>>> that vma management issue there too, plus a pile of other things.
> >>>>>> Rereading that thread, the discussion with Noralf concluded with
> >>>>>> "not yet ready for prime time", unfortunately :-/
> >>>>> I saw stuff about how it wasn't usable for SPI because SPI does weird
> >>>>> things with DMA mapping.  Was there something else?
> >>>> Looking through that mail, it was mostly comments to improve the
> >>>> kerneldoc, plus a note that buffer sharing/mmap is going to be all
> >>>> incoherent and horrible (but I guess for vkms we don't care that
> >>>> much). I'm just kinda wary of generic buffer handling that turns
> >>>> out to not be actually all that useful. We have lots of dead ends
> >>>> and kinda-midlayers in this area already (but this one here
> >>>> definitely smells plenty better than lots of the older ones).
> >>> FWIW, I really want shmem helpers for v3d.  The fault handling in
> >>> particular has magic I don't understand, and this is not my first fault
> >>> handler. :/
> >>
> >>
> >> If you can use it for a "real" hw driver like v3d, I think it makes a
> >> lot of sense to have it as a helper. I believe udl and a future
> >> simpledrm can also make use of it.
> >
> > FWIW, I think etnaviv at least could use this too.
> >
> > I'm starting to look at the panfrost and lima drivers and was trying
> > to figure out where to start with the GEM code, so I've been comparing
> > the etnaviv, freedreno, and vgem implementations. They are all pretty
> > similar from what I can see; the per-driver GEM obj structs certainly
> > are. I can't bring myself to just copy the etnaviv code over and do a
> > s/etnaviv/panfrost/, so after searching around a bit I ended up on
> > this thread. This seems to be what I need for panfrost (based on my
> > brief study).
> >
>
> I gave up on this due to problems with SPI DMA.
> Eric tried to use it with vkms, but it failed. On his blog he speculates
> that it might be due to cached CPU mappings:
> https://anholt.github.io/twivc4/2018/12/03/twiv/
>
> For tinydrm I wanted cached mappings, but it might not work that well
> with shmem. Maybe that's why I had problems with SPI DMA.

I think for most ARM systems a cached mapping is not coherent, so there
will need to be cache flushes using the DMA API. I see there's been some
discussion around that too.
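
Roughly the kind of maintenance I'd expect at the DMA boundaries (just a
sketch; "dev" and "shmem->sgt" are assumptions about how the helper
would track the device and the object's scatter/gather table):

        /* Clean CPU caches before the device reads from the pages. */
        dma_sync_sg_for_device(dev, shmem->sgt->sgl, shmem->sgt->nents,
                               DMA_TO_DEVICE);

        /* ... device access happens here (SPI transfer, scanout, ...) */

        /* Invalidate before the CPU reads back what the device wrote. */
        dma_sync_sg_for_cpu(dev, shmem->sgt->sgl, shmem->sgt->nents,
                            DMA_FROM_DEVICE);
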

> Maybe this change to drm_gem_shmem_mmap() is all it takes:
>
> -       vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
> +       vma->vm_page_prot = pgprot_writecombine(vm_get_page_prot(vma->vm_flags));

Seems like at least this part needs some flexibility. etnaviv,
freedreno, and omap all appear to support cached, uncached, and
write-combined mappings based on flags in the GEM object.
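
Something along these lines in the mmap path would probably cover it
(the DRM_GEM_SHMEM_* flags here are hypothetical, modelled on etnaviv's
ETNA_BO_CACHED/WC/UNCACHED):

        switch (shmem->map_flags) {
        case DRM_GEM_SHMEM_WC:
                vma->vm_page_prot =
                        pgprot_writecombine(vm_get_page_prot(vma->vm_flags));
                break;
        case DRM_GEM_SHMEM_UNCACHED:
                vma->vm_page_prot =
                        pgprot_noncached(vm_get_page_prot(vma->vm_flags));
                break;
        case DRM_GEM_SHMEM_CACHED:
        default:
                /* Cached mapping; needs explicit cache maintenance. */
                vma->vm_page_prot = vm_get_page_prot(vma->vm_flags);
                break;
        }
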

> The memory subsystem is really complicated and I have kind of given up
> on trying to decipher it.

Yes. All the more reason to not let each driver figure out what to do.

Rob

