[PATCH 1/6] drm/ttm: add on_alloc_stage and reservation into ttm_operation_ctx

Christian König christian.koenig at amd.com
Thu Dec 14 09:04:15 UTC 2017

Am 14.12.2017 um 09:55 schrieb Thomas Hellstrom:
> Hi, Christian,
> On 12/14/2017 09:40 AM, Christian König wrote:
>> Hi Thomas,
>> sorry for that. I noticed on the rest of the series as well that we 
>> need to improve the commit messages, but this one slipped through 
>> because I had discussed the change internally with Roger beforehand.
>> That made the change completely logical to me, but without that 
>> context everybody else just thinks "Huh, what?". I'll keep that in 
>> mind next time.
>> But back to topic: This series allows BOs which share the same 
>> reservation object as the BO currently allocated/validated to be 
>> evicted even when they are reserved.
>> This is useful because amdgpu wants to use a single reservation 
>> object for almost all BOs of a process.
> Yes, that indeed makes the whole thing more clear, and makes sense.
> Out of interest, is the shared reservation object usage a speed 
> optimization (avoiding the ww_mutex locks at reservation time), 
> or something else?

Avoiding taking many ww_mutex_locks is one reason. The other major 
reason comes with GPU-VM page tables.

Just like CPU page tables, multi-level GPU page tables are allocated 
individually as needed. But during command submission we must make sure 
that all GPU page tables are validated and that fences are added to all 
of them.

Because of this we use the reservation object of the root page 
directory (which is always allocated) as the reservation object for all 
other page tables.

The effect is that you have to add the resulting fence of a command 
submission to only one reservation object instead of a couple of 
hundred or even thousands.

> I guess that even if LRU lists might get crowded with unevictable BOs, 
> iterating through those lists isn't really part of the fast path.

Yes, exactly. When we start to massively evict things, performance 
drops so much anyway that the extra cycling over the LRU doesn't hurt 
us.


> /Thomas
>> Regards,
>> Christian. 
