[PATCH v14 3/8] drm/ttm/pool: Provide a helper to shrink pages

Thomas Hellström thomas.hellstrom at linux.intel.com
Wed Dec 18 10:07:13 UTC 2024


On Wed, 2024-12-04 at 12:24 +0100, Christian König wrote:
> Am 04.12.24 um 12:09 schrieb Thomas Hellström:
> > [SNIP]
> 
> > > > > BTW I really dislike that tt->restore is allocated
> > > > > dynamically. That is just another allocation which can cause
> > > > > problems. We should probably have all the state necessary for
> > > > > the operation in the TT object.
> > > > Initially it was done this way. But that meant a pre-allocated
> > > > struct page-pointer array of 1 << MAX_PAGE_ORDER size (2MiB) for
> > > > each ttm_tt. That led to a patch reducing MAX_PAGE_ORDER to PMD
> > > > size order, but as you might remember, that had to be ripped out
> > > > because the PMD size macros aren't constant across all
> > > > architectures. IIRC it was ARM causing compilation failures, and
> > > > Linus wasn't happy.
> > > Yeah, I do remember that. But I don't fully get why you need this
> > > page-pointer array in the first place?
> > So the TTM page-pointer array holds the backup handles when backed
> > up. During recovery, we allocate a (potentially huge) page and
> > populate the TTM page-pointer array with pointers into that.
> > Meanwhile we need to keep the backup handles for the recovery phase
> > in the restore structure, and in the middle of the recovery phase
> > you might hit an -EINTR.
> 
> I still don't see the problem to be honest.
> 
> What you basically do on recovery is the following:
> 1. Allocate a bunch of contiguous memory of order X.
> 2. Take the first entry from the page_array, convert that to your
>    backup handle and copy the data back into the just allocated
>    contiguous memory.
> 3. Replace the first entry in the page array with the struct page
>    pointer of the allocated contiguous memory.
> 4. Take the next entry from the page_array, convert that to your
>    backup handle and copy the data back into the just allocated
>    contiguous memory.
> 5. Replace the next entry in the page array with the struct page
>    pointer + 1 of the allocated contiguous memory.
> 6. Repeat until the contiguous memory is fully recovered and we jump
>    to 1 again.

OK so the reason I skipped this previously was apparently because of
inconsistencies: since the dma_addr array is fully populated, it would
look awkward if the pages array were only partly populated.

But I reworked this in the latest version to follow the above, so now
both arrays are populated once the whole new multi-order page has been
successfully read back from backup. The patch becomes bigger, and I
also added a restructuring patch, but OTOH some of the additions are
documentation.
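
For reference, a minimal sketch of restoring one multi-order chunk
along those lines. The helper names (handle_from_entry(),
ttm_backup_copy_page()) and the tt->backup field are placeholders, the
dma_addr bookkeeping is omitted, and the real code additionally has to
be resumable after -EINTR:

#include <linux/gfp.h>
#include <drm/ttm/ttm_tt.h>

/* Illustrative only; helpers and the backup field are placeholders. */
static int restore_one_chunk(struct ttm_tt *tt, pgoff_t start,
                             unsigned int order, gfp_t gfp)
{
        struct page *p = alloc_pages(gfp, order); /* 1. contiguous alloc */
        unsigned long i;
        int ret;

        if (!p)
                return -ENOMEM;

        for (i = 0; i < (1UL << order); ++i) {
                /* 2./4. the array entry still holds the backup handle */
                unsigned long handle = handle_from_entry(tt->pages[start + i]);

                /* copy the backed-up content into the new page */
                ret = ttm_backup_copy_page(tt->backup, handle, p + i);
                if (ret)
                        /*
                         * Possibly -EINTR; already restored entries keep
                         * pointing into p so the operation can be resumed.
                         */
                        return ret;

                /* 3./5. replace the handle with the real page pointer */
                tt->pages[start + i] = p + i;
        }

        /* 6. caller continues with the next chunk */
        return 0;
}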

The (now small) kmalloc at the start of restore is still present,
though. I figure that if it fails, the restore would fail anyway, so it
shouldn't be an issue.
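
Concretely it boils down to something like the fragment below (struct
name illustrative); if the allocation fails, the restore errors out the
same way a failed page allocation would:

        struct ttm_pool_tt_restore *restore;

        restore = kzalloc(sizeof(*restore), GFP_KERNEL);
        if (!restore)
                return -ENOMEM; /* the restore as a whole just fails */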

On an unrelated issue, I notice that HighMem pages skip cache
transitioning. That makes sense since they don't have a kernel linear
map, but content written through other cacheable mappings (page
clearing, restore, resume from hibernation) might still remain in a
PIPT cache, right? Don't we need to clflush these pages on the wb->wc
transition and ensure any resume-from-hibernation content is similarly
flushed?
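
To illustrate the kind of flush I mean (drm_clflush_pages() kmaps
highmem pages and clflushes them on x86); whether and where ttm_pool
would need this is exactly the open question:

#include <drm/drm_cache.h>

/*
 * Flush possibly dirty cachelines of (highmem) pages before they are
 * handed out as write-combined; placement here is illustrative only.
 */
static void flush_pages_before_wc(struct page **pages,
                                  unsigned long num_pages)
{
        drm_clflush_pages(pages, num_pages);
}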

/Thomas


> 
> What exactly do you need this pre-allocated struct page-pointer array
> of 1 << MAX_PAGE_ORDER for?
> 
> Sorry, I must really be missing something here.
> 
> Regards,
> Christian.
> 
> > 
> > Thanks,
> > Thomas


