[PATCH 2/2] drm/vram-helper: Alternate between bottom-up and top-down placement

Gerd Hoffmann <kraxel@redhat.com>
Thu Apr 23 13:57:09 UTC 2020


> > I don't think it is that simple.
> > 
> > First:  How will that interact with cursor bo allocations?  IIRC the
> > strategy for them is to allocate top-down, for similar reasons (avoid
> > small cursor bo allocs fragment vram memory).
> 
> In ast, 2 cursor BOs are allocated during driver initialization and kept
> permanently at the vram's top end. I don't know about other drivers.

One-time allocation at init time shouldn't be a problem.

> But cursor BOs are small, so they don't make much of a difference. What
> is needed is space for 2 primary framebuffers during pageflips, with one
> of them pinned. The other framebuffer can be located anywhere.

The problem isn't the size.  The problem is that dynamically allocated
cursor BOs can also fragment vram, especially if top-down allocation is
also used for large framebuffers, so cursor BOs can end up somewhere in
the middle of vram.
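
Doing it like ast, i.e. creating the cursor BOs once at init time,
pinning them top-down and keeping them forever, avoids that.  Rough
sketch only, not ast's actual code; the size constant is made up and
exact helper signatures may differ between kernel versions:

/* Allocate both cursor BOs once at driver init and pin them top-down,
 * so they never end up in the middle of vram. */
static int cursor_bos_init(struct drm_device *dev,
                           struct drm_gem_vram_object *gbo[2])
{
        const size_t size = 64 * 64 * sizeof(u32); /* made-up cursor size */
        unsigned int i;
        int ret;

        for (i = 0; i < 2; ++i) {
                gbo[i] = drm_gem_vram_create(dev, size, 0);
                if (IS_ERR(gbo[i]))
                        return PTR_ERR(gbo[i]);
                ret = drm_gem_vram_pin(gbo[i], DRM_GEM_VRAM_PL_FLAG_VRAM |
                                               DRM_GEM_VRAM_PL_FLAG_TOPDOWN);
                if (ret)
                        return ret;
        }

        return 0;
}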

> > Second:  I think ttm will move bo's from vram to system only on memory
> > pressure.  So you can still end up with fragmented memory.  To make the
> > scheme with one fb @ top and one @ bottom work reliably you have to be
> > more aggressive about pushing out framebuffers.
> 
> I'm in the process of converting mgag200 to atomic modesetting. The given
> example is what I observed. I'm not claiming that the placement scheme
> is perfect, but it is required to get mgag200 working with atomic
> modesetting's pageflip logic. So we're solving a real problem here.

I don't doubt this is a real problem.

> The bug comes from Weston's allocation strategy. Looking at the debug
> output:
> 
> >>   0x0000000000000000-0x000000000000057f: 1407: free
> 
> This was fbdev's framebuffer with 1600x900 at 32bpp
> 
> >>   0x000000000000057f-0x0000000000000b5b: 1500: used
> 
> This is Weston's framebuffer also with 1600x900 at 32bpp. But Weston
> allocates an additional, unused 60 scanlines. That is to render with
> tiles of 64x64px, I suppose. fbdev doesn't do that, hence Weston's
> second framebuffer doesn't fit into the free location of the fbdev
> framebuffer.

Sure.  Assume there is just enough vram to fit fbdev and two weston
framebuffers.  fbdev is allocated from the bottom, the first weston fb
from the top, the second weston fb from the bottom again.  fbdev is not
pushed out yet (no memory pressure), so the second weston fb ends up in
the middle of vram, fragmenting things.  And now you are again in a
situation where you might have enough free vram for an allocation but
can't use it due to fragmentation (probably harder to trigger in
practice though).
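
With made-up numbers (page counts modeled on the debug output above,
hypothetical vram of 0x1194 = 4500 pages):

  0x0000-0x057f: 1407: fbdev fb     (bottom-up)
  0x057f-0x0b5b: 1500: weston fb 2  (bottom-up again, fbdev not evicted)
  0x0b5b-0x0bb8:   93: free
  0x0bb8-0x1194: 1500: weston fb 1  (top-down)

Once fbdev finally gets evicted there are 1407 + 93 = 1500 free pages
in total, but the largest hole is only 1407 pages, so a third 1500-page
framebuffer still doesn't fit.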

That's why I would suggest explicitly moving out unpinned framebuffers
(aka fbdev) before pinning a new one (the second weston fb), instead of
depending on ttm moving things out on OOM, to make sure you never
allocate something in the middle of vram.
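
Roughly like this; just a sketch at the raw ttm level, the vram helpers
don't expose such an eviction call today, so the function name is made
up and details are simplified:

/* Push a previously displayed, now unpinned framebuffer out to system
 * memory before pinning the next one, instead of waiting for ttm to
 * evict it under memory pressure. */
static int evict_old_fb(struct drm_gem_vram_object *gbo)
{
        struct ttm_operation_ctx ctx = { .interruptible = false };
        struct ttm_place sys_place = {
                .flags = TTM_PL_FLAG_SYSTEM | TTM_PL_MASK_CACHING,
        };
        struct ttm_placement sys_placement = {
                .num_placement = 1,
                .placement = &sys_place,
                .num_busy_placement = 1,
                .busy_placement = &sys_place,
        };
        int ret;

        ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
        if (ret)
                return ret;
        ret = ttm_bo_validate(&gbo->bo, &sys_placement, &ctx);
        ttm_bo_unreserve(&gbo->bo);

        return ret;
}

prepare_fb would call something like this for the outgoing framebuffer
before pinning the incoming one.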

> > Third:  I'd suggest make topdown allocations depending on current state
> > instead of simply alternating, i.e. if there is a pinned framebuffer @
> > offset 0, then go for top-down.
> 
> That's what the current patch does. If the last pin was at the bottom,
> the next goes to the top. And then the other way around. Without
> alternating between both ends of vram, the problem would occur again when
> fragmentation happens near the top end.

I'd feel better basing this on the state of the current pins, i.e.
checking them to figure out whether the next allocation should go
top-down or not, for robustness reasons.
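
Along the lines of the sketch below; the names are made up, the idea is
just to look at where the currently pinned framebuffer sits and place
the new one at the opposite end of vram:

/* Decide the placement flags for the next framebuffer based on the
 * framebuffer that is still pinned for scanout. */
static unsigned long next_fb_pl_flags(struct drm_gem_vram_object *pinned_gbo)
{
        unsigned long pl_flags = DRM_GEM_VRAM_PL_FLAG_VRAM;

        /* Pinned framebuffer at the bottom of vram?  Then go top-down. */
        if (pinned_gbo && drm_gem_vram_offset(pinned_gbo) == 0)
                pl_flags |= DRM_GEM_VRAM_PL_FLAG_TOPDOWN;

        return pl_flags;
}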

> Looking again at the vram helpers, this functionality could be
> implemented in drm_gem_vram_plane_helper_prepare_fb(). Drivers with
> other placement strategies could implement their own helper for prepare_fb.

vram helpers could also simply offer two prepare_fb variants.
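
Illustration only; the top-down helper below does not exist and its
name is made up here.  Drivers would simply pick the variant matching
their placement strategy:

static const struct drm_plane_helper_funcs bottom_up_primary_helpers = {
        .prepare_fb = drm_gem_vram_plane_helper_prepare_fb,
        .cleanup_fb = drm_gem_vram_plane_helper_cleanup_fb,
};

static const struct drm_plane_helper_funcs top_down_primary_helpers = {
        .prepare_fb = drm_gem_vram_plane_helper_prepare_fb_topdown, /* hypothetical */
        .cleanup_fb = drm_gem_vram_plane_helper_cleanup_fb,
};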

cheers,
  Gerd
