[PATCH] drm/fb-helper: Mark screen buffers in system memory with FB_VIRTFB

Daniel Vetter daniel at ffwll.ch
Fri Jan 28 15:58:37 UTC 2022


On Fri, Jan 28, 2022 at 12:36 PM Thomas Zimmermann <tzimmermann at suse.de> wrote:
>
> Hi
>
> Am 28.01.22 um 12:00 schrieb Daniel Vetter:
> > On Thu, Jan 27, 2022 at 4:18 PM Thomas Zimmermann <tzimmermann at suse.de> wrote:
> >>
> >> Hi
> >>
> >> Am 27.01.22 um 16:03 schrieb Daniel Vetter:
> >>> On Thu, Jan 27, 2022 at 12:58:30PM +0100, Thomas Zimmermann wrote:
> >>>> Hi
> >>>>
> >>>> Am 27.01.22 um 12:42 schrieb Daniel Vetter:
> >>>>> On Thu, Jan 27, 2022 at 11:26:21AM +0100, Thomas Zimmermann wrote:
> >>>>>> Mark screen buffers in system memory with FBINFO_VIRTFB. Otherwise, the
> >>>>>> buffers are mmap'ed as I/O memory (i.e., with VM_IO set). For shadow
> >>>>>> buffers, also set the FBINFO_READS_FAST hint.
> >>>>>
> >>>>> Maybe clarify that this only holds for the defio case, and since we have
> >>>>> our own shadow copy for that anyway it shouldn't matter. I'm also not sure
> >>>>> how much the memcpy gains us compared to just redrawing ...
> >>>>>
> >>>>> What's the motivation here, or just something you spotted?
> >>>>
> >>>> Correctness, mostly. fbdev's fbdefio code tests for the absence of this
> >>>> flag and sets VM_IO accordingly.
> >>>>
> >>>> It's also potentially for userspace. Maybe userspace tests these flags
> >>>> as well and can optimize its memcpy patterns for different types of
> >>>> caching. But I wouldn't expect it TBH.
> >>>
> >>> Hm I thought so too, but the #define is in the internal header, not the
> >>> uapi header. And I don't see any ioctl code in fbmem.c that would shove
> >>> fb_info->flags to userspace. That's why I wondered why you care about
> >>> this? Or did I miss something somewhere?
> >>
> >> You didn't. I just grepped it myself and the only user of VIRTFB is the
> >> mmap code in fb_defio.c, which sets VM_IO only when the flag is absent.
> >> READS_FAST is unused. I'd then set the former, but not the latter. Ok?
> >
> > Well READS_FAST might become used again, if/when the accel code is
>
> Ok.
>
> > back. So I'd rather keep that part, and leave the VIRTFB one alone,
> > since you never set that for the defio case. I'm also not sure how
> > that even works, since defio relies on struct page being present
> > underneath, and you definitely don't have struct page for VM_IO
> > cases. So it all looks rather fishy. Or am I still massively
> > misunderstanding it all?
>
> We only set the VIRTFB flag if we use our internal shadow buffer, which
> is allocated via vzalloc() in drm_fb_helper_generic_probe(). Of course,
> the shadow buffer is regular memory and not an I/O range.
>
> The fbdefio on this buffer is completely implemented by the fbdev
> subsystem, which uses pages (i.e., no VM_PFNMAP flag). See
> https://elixir.bootlin.com/linux/latest/source/drivers/video/fbdev/core/fb_defio.c#L165
> for the respective mmap code. Our GEM code never even knows that an
> mmap call has taken place. It just sees the occasional damage updates
> that fbdefio generates. Setting VIRTFB on the shadow buffer's memory is
> the right thing to do IMHO.

Oh dear, I read that test inverted and thought that if we do nothing,
we wouldn't get VM_IO. Imo if you explain this in the commit message
(and maybe in a comment like "make sure defio mmap does not set VM_IO"),
then this has my Reviewed-by: Daniel Vetter <daniel.vetter at ffwll.ch>
Also I guess this should have a cc: stable, since without it it'll
probably go boom on a bunch of the more obscure architectures ...
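
For reference, the test I misread is in fb_defio.c's mmap path; from
memory it looks roughly like this (see Thomas' elixir link above for
the real thing):

    static int fb_deferred_io_mmap(struct fb_info *info,
                                   struct vm_area_struct *vma)
    {
            vma->vm_ops = &fb_deferred_io_vm_ops;
            vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
            /* VM_IO unless the driver marked the buffer as FBINFO_VIRTFB */
            if (!(info->flags & FBINFO_VIRTFB))
                    vma->vm_flags |= VM_IO;
            vma->vm_private_data = info;
            return 0;
    }

So without FBINFO_VIRTFB the vzalloc'ed shadow buffer gets mapped with
VM_IO set, which is exactly the wrong thing for memory that does have
struct pages behind it.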

I'm not sure the 2nd hunk, the one that sets VIRTFB for the non-defio
case, makes sense, since mmap is fully under our control there (see
the sketch below). Imo drop it, but also I'm ok if you keep it.
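
To spell out the "under our control" part: in the non-shadow case our
.fb_mmap in drm_fbdev_fb_ops forwards the vma straight to the driver's
GEM code and fb_defio.c never runs, so the flag is never consulted.
From memory (so the exact shape might be slightly off), the helper is
roughly:

    static int drm_fbdev_fb_mmap(struct fb_info *info,
                                 struct vm_area_struct *vma)
    {
            struct drm_fb_helper *fb_helper = info->par;

            /* shadow fb: defio's mmap, which does the VIRTFB check */
            if (drm_fbdev_use_shadow_fb(fb_helper))
                    return fb_deferred_io_mmap(info, vma);
            /* otherwise the driver's GEM code owns the mapping */
            else if (fb_helper->dev->driver->gem_prime_mmap)
                    return fb_helper->dev->driver->gem_prime_mmap(
                                    fb_helper->buffer->gem, vma);
            else
                    return -ENODEV;
    }

Only the first branch ever looks at FBINFO_VIRTFB.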

Also I guess this is yet another reason to just pull the defio stuff
into our fbdev emulation layer, because we're just fighting a
questionable midlayer here :-)
-Daniel

>
> Best regards
> Thomas
>
>
> > -Daniel
> >
> >>
> >> Best regards
> >> Thomas
> >>
> >>> -Daniel
> >>>
> >>>>
> >>>> Best regards
> >>>> Thomas
> >>>>
> >>>>> -Daniel
> >>>>>
> >>>>>>
> >>>>>> Signed-off-by: Thomas Zimmermann <tzimmermann at suse.de>
> >>>>>> ---
> >>>>>>     drivers/gpu/drm/drm_fb_helper.c | 9 ++++++---
> >>>>>>     1 file changed, 6 insertions(+), 3 deletions(-)
> >>>>>>
> >>>>>> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> >>>>>> index ed43b987d306..f15127a32f7a 100644
> >>>>>> --- a/drivers/gpu/drm/drm_fb_helper.c
> >>>>>> +++ b/drivers/gpu/drm/drm_fb_helper.c
> >>>>>> @@ -2346,6 +2346,7 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
> >>>>>>          fbi->fbops = &drm_fbdev_fb_ops;
> >>>>>>          fbi->screen_size = sizes->surface_height * fb->pitches[0];
> >>>>>>          fbi->fix.smem_len = fbi->screen_size;
> >>>>>> +        fbi->flags = FBINFO_DEFAULT;
> >>>>>>
> >>>>>>          drm_fb_helper_fill_info(fbi, fb_helper, sizes);
> >>>>>>
> >>>>>> @@ -2353,19 +2354,21 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
> >>>>>>                  fbi->screen_buffer = vzalloc(fbi->screen_size);
> >>>>>>                  if (!fbi->screen_buffer)
> >>>>>>                          return -ENOMEM;
> >>>>>> +                fbi->flags |= FBINFO_VIRTFB | FBINFO_READS_FAST;
> >>>>>>
> >>>>>>                  fbi->fbdefio = &drm_fbdev_defio;
> >>>>>> -
> >>>>>>                  fb_deferred_io_init(fbi);
> >>>>>>          } else {
> >>>>>>                  /* buffer is mapped for HW framebuffer */
> >>>>>>                  ret = drm_client_buffer_vmap(fb_helper->buffer, &map);
> >>>>>>                  if (ret)
> >>>>>>                          return ret;
> >>>>>> -                if (map.is_iomem)
> >>>>>> +                if (map.is_iomem) {
> >>>>>>                          fbi->screen_base = map.vaddr_iomem;
> >>>>>> -                else
> >>>>>> +                } else {
> >>>>>>                          fbi->screen_buffer = map.vaddr;
> >>>>>> +                        fbi->flags |= FBINFO_VIRTFB;
> >>>>>> +                }
> >>>>>>
> >>>>>>                  /*
> >>>>>>                   * Shamelessly leak the physical address to user-space. As
> >>>>>> --
> >>>>>> 2.34.1
> >>>>>>
> >>>>>
> >>>>
> >>>> --
> >>>> Thomas Zimmermann
> >>>> Graphics Driver Developer
> >>>> SUSE Software Solutions Germany GmbH
> >>>> Maxfeldstr. 5, 90409 Nürnberg, Germany
> >>>> (HRB 36809, AG Nürnberg)
> >>>> Geschäftsführer: Ivo Totev
> >>>
> >>>
> >>>
> >>>
> >>
> >> --
> >> Thomas Zimmermann
> >> Graphics Driver Developer
> >> SUSE Software Solutions Germany GmbH
> >> Maxfeldstr. 5, 90409 Nürnberg, Germany
> >> (HRB 36809, AG Nürnberg)
> >> Geschäftsführer: Ivo Totev
> >
> >
> >
>
> --
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Ivo Totev



-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch

