[PATCH] drm/ttm: pass buffer object for bind/unbind callback

Thomas Hellstrom thellstrom at vmware.com
Mon Nov 21 02:37:26 PST 2011


On 11/20/2011 04:13 PM, Jerome Glisse wrote:
> On Sun, Nov 20, 2011 at 4:30 AM, Thomas Hellstrom<thellstrom at vmware.com>  wrote:
>    
>> On 11/19/2011 11:54 PM, Jerome Glisse wrote:
>>
>> As mentioned previously, and in the discussion with Ben, the page tables
>> would not need to be rebuilt on each CS. They would be rebuilt only on the
>> first CS following a move_notify that caused a page table invalidation.
>>
>> move_notify:
>> if (is_incompatible(new_mem_type)) {
>>   bo->page_tables_invalid = true;
>>   invalidate_page_tables(bo);
>> }
>>
>> command_submission:
>> if (bo->page_tables_invalid) {
>>    set_up_page_tables(bo);
>>    bo->page_tables_invalid = false;
>> }
>>
>>
>>> Why is it different from updating the page tables in move_notify? I don't
>>> see any bonus here; all the information we need is already available
>>> in move_notify.
>>
>> I've iterated the pros of this approach at least two times before, but for
>> completeness let's do it again:
>>
>> 8<----------------------------------------------------------------------------------------------------
>>
>> 1) TTM doesn't need to care about the driver re-populating its GPU page
>> tables.
>> Since swapin is handled from the tt layer, not the bo layer, this makes it a
>> bit easier on us.
>> 2) Transition to page-faulted GPU virtual maps is straightforward and
>> consistent. A non-page-faulting driver sets up the maps at CS time; a
>> page-faulting driver can set them up directly from an irq handler without
>> reserving, since the bo is properly fenced or pinned when the pagefault
>> happens.
>> 3) A non-page-faulting driver knows at CS time exactly which
>> page-table entries really do need populating, and can do this more
>> efficiently.
>>
>> 8<-----------------------------------------------------------------------------------------------------
>>
>> And there are some extra items, like partially populated TTMs, that were
>> mentioned elsewhere.
>>      
> If done in move_notify, I don't see why 1 or 2 would be different.

Because to make the interface complete we need to support SYSTEM memory, 
and call move_notify from swapin, which I am not prepared to do.
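
For concreteness, a minimal sketch of the move_notify / CS split quoted
above, using hypothetical my_* types and helpers rather than any
particular driver's code; only the ->move_notify() signature is taken
from ttm_bo_driver, everything else is illustrative:

#include <linux/kernel.h>
#include <drm/ttm/ttm_bo_api.h>
#include <drm/ttm/ttm_bo_driver.h>
#include <drm/ttm/ttm_placement.h>

struct my_bo {
        struct ttm_buffer_object base;
        bool page_tables_invalid;
};

/* Hypothetical driver helpers, defined elsewhere. */
void my_invalidate_page_tables(struct my_bo *mbo);
int my_set_up_page_tables(struct my_bo *mbo);

static bool my_mem_is_incompatible(struct ttm_mem_reg *new_mem)
{
        /* E.g. only VRAM/TT placements are mapped by the GPU page tables. */
        return new_mem == NULL || new_mem->mem_type == TTM_PL_SYSTEM;
}

/* Driver's ->move_notify() hook: invalidate only, never rebuild here. */
static void my_move_notify(struct ttm_buffer_object *bo,
                           struct ttm_mem_reg *new_mem)
{
        struct my_bo *mbo = container_of(bo, struct my_bo, base);

        if (my_mem_is_incompatible(new_mem)) {
                mbo->page_tables_invalid = true;
                my_invalidate_page_tables(mbo);
        }
}

/* Called once per BO while validating a command submission. */
static int my_cs_validate_bo(struct my_bo *mbo)
{
        int ret;

        if (mbo->page_tables_invalid) {
                ret = my_set_up_page_tables(mbo);
                if (ret)
                        return ret;
                mbo->page_tables_invalid = false;
        }
        return 0;
}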

> I agree that in some cases 3 is true. Given that the ttm_tt is always
> fully populated when move_notify is called (the only exception is the
> destroy path, but that is a special case on its own), if the driver
> populates in move_notify it doesn't change anything from TTM's point of
> view.
>    

Then you put a restriction on TTM to *always* have populated TTMs, which 
I am also not prepared to accept. The ability to leave a TTM unpopulated 
was recently added as a performance optimization.
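
To make that point concrete: with lazily populated TTMs, the CS-time
path can simply make sure the backing pages exist before it rebuilds the
page tables, instead of requiring that move_notify() always sees a fully
populated ttm_tt. Again only a sketch, reusing struct my_bo from the
snippet above; my_populate_tt() stands in for whatever population
mechanism the driver actually uses:

/* Hypothetical population helper, defined elsewhere. */
int my_populate_tt(struct ttm_tt *ttm);

static int my_cs_prepare_bo(struct my_bo *mbo)
{
        struct ttm_tt *ttm = mbo->base.ttm;
        int ret;

        /* Backing pages may not exist yet when population is deferred. */
        if (ttm && ttm->state == tt_unpopulated) {
                ret = my_populate_tt(ttm);
                if (ret)
                        return ret;
        }

        if (mbo->page_tables_invalid) {
                ret = my_set_up_page_tables(mbo);
                if (ret)
                        return ret;
                mbo->page_tables_invalid = false;
        }
        return 0;
}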

I won't spend any more time on this completely stupid argument. I've 
been asking you to make a minor change in order to get a complete and 
clean interface, and to get people to do the right thing in the future. 
You're obviously unwilling to do that.


/Thomas


