<!DOCTYPE html><html><head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body>
[SNIP]
<blockquote type="cite" cite="mid:73588525571a68f5638300ef171591de10ba5e5d.camel@linux.intel.com">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">@@ -453,9 +601,36 @@ int ttm_pool_alloc(struct ttm_pool *pool,
struct ttm_tt *tt,
else
gfp_flags |= GFP_HIGHUSER;
- for (order = min_t(unsigned int, MAX_PAGE_ORDER,
__fls(num_pages));
- num_pages;
- order = min_t(unsigned int, order,
__fls(num_pages)))
{
+ order = min_t(unsigned int, MAX_PAGE_ORDER,
__fls(num_pages));
+
+ if (tt->page_flags & TTM_TT_FLAG_PRIV_BACKED_UP) {
+ if (!tt->restore) {
+ gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
+
+ if (ctx->gfp_retry_mayfail)
+ gfp |= __GFP_RETRY_MAYFAIL;
+
+ tt->restore =
+ kvzalloc(struct_size(tt-
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">restore,
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">old_pages,
+ (size_t)1
<<
order), gfp);
+ if (!tt->restore)
+ return -ENOMEM;
+ } else if (ttm_pool_restore_valid(tt-
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">restore)) {
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">+ struct ttm_pool_tt_restore *restore =
tt-
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">restore;
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">+
+ num_pages -= restore->alloced_pages;
+ order = min_t(unsigned int, order,
__fls(num_pages));
+ pages += restore->alloced_pages;
+ r = ttm_pool_restore_tt(restore, tt-
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">backup, ctx);
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">+ if (r)
+ return r;
+ caching = restore->caching_divide;
+ }
+
+ tt->restore->pool = pool;
+ }
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">Hui? Why is that part of the allocation function now?
At a bare minimum I would expect this to be a new function.
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">It's because we now have partially backed up tts, so the restore is
interleaved on a per-page basis, replacing the backup handles with
page-pointers. I'll see if I can separate out at least the
initialization here.
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">
Yeah, that kind of makes sense.
My expectation was just that we now have explicit ttm_pool_swapout()
and
ttm_pool_swapin() functions.
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">
I fully understand, although in the allocation step that would also
increase the memory pressure, since we might momentarily have twice the
bo size allocated if the shmem object was never swapped out, and we
don't want to unnecessarily risk OOM at recovery time, although that
should be a recoverable situation now. If the OOM receiver can free up
system memory resources, they could potentially restart the recovery.</pre>
</blockquote>
<br>
What I meant was more that we have ttm_pool_swapout(), which moves
each page to a swap backend and frees it, one by one.<br>
<br>
And ttm_pool_swapin() which allocates a bit of memory (usually one
huge page) and then copies the content back in from the swap
backend.<br>
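<br>
Roughly what I have in mind, just as a sketch (prototypes only; the
names and parameters below are made up for illustration and not taken
from any posted patch):<br>
<br>
<pre>/*
 * Hypothetical split, for illustration only:
 *
 * ttm_pool_swapout() would walk the populated pages of @tt, copy each
 * page out to the backup/swap backend and free it right away, so the
 * extra memory held at any point stays around one page.
 *
 * ttm_pool_swapin() would allocate fresh pages (preferably one huge
 * page at a time) and copy the contents back in from the backend.
 */
int ttm_pool_swapout(struct ttm_pool *pool, struct ttm_tt *tt,
		     struct ttm_operation_ctx *ctx);

int ttm_pool_swapin(struct ttm_pool *pool, struct ttm_tt *tt,
		    struct ttm_operation_ctx *ctx);
</pre>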
<br>
Alternatively we could rename ttm_pool_alloc() into something like
ttm_pool_populate() and ttm_pool_free() into ttm_pool_unpopulate(),
but those names are not very descriptive either.<br>
<br>
It's just that we now do a bit more than just alloc and free in
those functions, so the naming doesn't really match that well any
more.<br>
<br>
Christian.<br>
<br>
<blockquote type="cite" cite="mid:73588525571a68f5638300ef171591de10ba5e5d.camel@linux.intel.com">
<pre class="moz-quote-pre" wrap="">
/Thomas
</pre>
</blockquote>
</body>
</html>