[PATCH] drm/ttm: pass buffer object for bind/unbind callback
Thomas Hellstrom
thellstrom at vmware.com
Sun Nov 20 01:30:01 PST 2011
On 11/19/2011 11:54 PM, Jerome Glisse wrote:
>
>> As mentioned previously, and in the discussion with Ben, the page tables
>> would not need to be rebuilt on each CS. They would be rebuilt only on the
>> first CS following a move_notify that caused a page table invalidation.
>>
>> move_notify:
>>     if (is_incompatible(new_mem_type)) {
>>         bo->page_tables_invalid = true;
>>         invalidate_page_tables(bo);
>>     }
>>
>> command_submission:
>>     if (bo->page_tables_invalid) {
>>         set_up_page_tables(bo);
>>         bo->page_tables_invalid = false;
>>     }
>>
> Why is it different from updating the page tables in move_notify? I don't
> see any bonus here; all the information we need is already available
> in move_notify.
>
>
I've listed the pros of this approach at least twice before, but
for completeness let's do it again:
8<----------------------------------------------------------------------------------------------------
1) TTM doesn't need to care about the driver re-populating its GPU page
tables.
Since swapin is handled from the TT layer, not the bo layer, this makes it a
bit easier on us.
2) The transition to page-faulted GPU virtual maps is straightforward and
consistent. A non-page-faulting driver sets up the maps at CS time; a
page-faulting driver can set them up directly from an irq handler without
reserving, since the bo is properly fenced or pinned when the pagefault
happens.
3) A non-page-faulting driver knows at CS time exactly which page-table
entries really do need populating, and can do this more efficiently.
8<-----------------------------------------------------------------------------------------------------
And there are some extra items, like partially populated TTMs, that were
mentioned elsewhere. A sketch of the scheme follows below.
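To make the scheme concrete, here is a minimal, self-contained sketch in
plain C. Note that struct my_bo and the helpers is_incompatible(),
invalidate_page_tables() and set_up_page_tables() are hypothetical
stand-ins, not actual TTM or driver API; the point is only the division of
labour between move_notify, command submission and a fault handler.

#include <stdbool.h>

/* Hypothetical driver-side buffer object state. */
struct my_bo {
        int mem_type;                   /* current placement */
        bool page_tables_invalid;       /* set by move_notify */
};

/* Hypothetical check: does the new placement invalidate the GPU maps? */
static bool is_incompatible(int new_mem_type)
{
        return new_mem_type == 0;       /* e.g. evicted to system memory */
}

/* Hypothetical PTE helpers. */
static void invalidate_page_tables(struct my_bo *bo) { /* zap PTEs */ }
static void set_up_page_tables(struct my_bo *bo) { /* write PTEs */ }

/* move_notify only marks and invalidates; it never populates. */
static void my_move_notify(struct my_bo *bo, int new_mem_type)
{
        if (is_incompatible(new_mem_type)) {
                bo->page_tables_invalid = true;
                invalidate_page_tables(bo);
        }
        bo->mem_type = new_mem_type;
}

/*
 * Non-page-faulting path, called at CS time with the bo reserved.
 * The tables are rebuilt only on the first CS after an invalidating
 * move, not on every CS.
 */
static void my_command_submission(struct my_bo *bo)
{
        if (bo->page_tables_invalid) {
                set_up_page_tables(bo);
                bo->page_tables_invalid = false;
        }
}

/*
 * Page-faulting path: may run from an irq handler without reserving,
 * since a bo the GPU can fault on is already fenced or pinned and its
 * placement can't change underneath us.
 */
static void my_pagefault_handler(struct my_bo *bo)
{
        if (bo->page_tables_invalid) {
                set_up_page_tables(bo);
                bo->page_tables_invalid = false;
        }
}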
>> Memory types in TTM are completely orthogonal to allowed GPU usage. The GPU
>> may access a bo if it's reserved, fenced or pinned, regardless of its
>> placement.
>>
>> A TT memory type is a *single* GPU aperture that may be mapped from the
>> aperture side by the CPU (AGP). It may also be used by a single unmappable
>> aperture that wants to use TTM's range management and eviction (vmwgfx GMR).
>> The driver can define more than one such memory type (psb), but a bo can
>> only be placed in one of those at a time, so this approach is unsuitable for
>> multiple apertures pointing to the same pages.
>>
> Radeon virtual memory has a special address space, the system address
> space; it's managed by TTM through a ttm_tt (exact same code as the
> current one). All the other address spaces are not managed by TTM, but
> we require a bo to be bound to a ttm_tt to be used, though we can relax
> that. That's the reason why I consider system placement as different.
>
>
Yes, for Radeon system memory may be different, and that's fine. But as
also previously mentioned, we're trying to design a generic interface
here, in which we need to consider GPU-mappable system memory.
I think the pros of this interface design compared to populating in
bo_move are pretty well established, so can you please explain why you
keep arguing against it? What is it that I have missed?
/Thomas