[PATCH] drm/ttm: fix two regressions since move_notify changes

Thomas Hellstrom thomas at shipmail.org
Wed Jan 25 09:15:20 PST 2012


On 01/25/2012 04:37 PM, Jerome Glisse wrote:
> On Wed, Jan 25, 2012 at 10:21 AM, Ben Skeggs <skeggsb at gmail.com> wrote:
>> On Wed, 2012-01-25 at 15:33 +0100, Thomas Hellstrom wrote:
>>> On 01/25/2012 10:41 AM, Ben Skeggs wrote:
>>>>> My main concern is that we blindly and unnecessarily set up GPU bindings and
>>>>> end up with unnecessary code in TTM, and furthermore that we communicate
>>>>> that bad practice to future driver writers.
>>>> This "unnecessary code" is like 5 lines of cleanup if something fails,
>>>> hardly anything to be jumping up and down about :)
>>> It's just not TTM's business, unless the GPU maps are mappable by the
>>> CPU as well.
>>> Also, what if the mapping setup in move_notify() fails?
>> It can't fail, and well, in nouveau's implementation it never will.
>> It's simply a "fill the ptes for all the vmas currently associated with
>> a bo".
>>
>> And well, it's about as much TTM's business as VRAM aperture allocation
>> is..  I don't see the big deal, if you want to do it a different way in
>> your driver, there's nothing stopping you.  It's a lot of bother for
>> essentially zero effort in TTM..
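
(For context, what such a hook boils down to is walking the bo's list of
per-VM mappings and rewriting the GPU page tables for the new placement.
A minimal sketch, using made-up drv_* names rather than the actual
nouveau code:)

static void drv_bo_move_notify(struct ttm_buffer_object *bo,
                               struct ttm_mem_reg *new_mem)
{
        struct drv_bo *dbo = container_of(bo, struct drv_bo, bo);
        struct drv_vma *vma;

        /* Rebind every GPU mapping of this bo to its new placement. */
        list_for_each_entry(vma, &dbo->vma_list, head) {
                if (new_mem)
                        drv_vm_map(vma, new_mem);  /* any error has nowhere to go */
                else
                        drv_vm_unmap(vma);         /* bo is being destroyed */
        }
}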

I think you fail to see the point.

TTM was written to place data so that GPU maps can be set up, and to
handle the CPU map implications. Sometimes the placement can be used
directly for GPU maps (legacy hardware TT, VRAM), sometimes not. To me
it's natural that any driver-private GPU maps (just like CPU maps) are
taken down before a move and set up again on demand after the move.

Any driver could theoretically add a hook to facilitate a special need,
but we shouldn't do that without thinking of the implications for other
drivers. What if other drivers *can* fail when setting up private GPU
maps? What if they need to set up private GPU maps to system memory?
What if they started out by copying Nouveau / Radeon code? Suddenly
we'd need a return value from move_notify(). We'd need to handle a
double failure if move_notify() fails when the move itself has already
failed. We'd need the same for swap_notify(). Should we add a
swapin_notify() as well, to set up the system-memory GPU maps once
memory is swapped back in? What should we do if swapin_notify() fails?
It would be an utter mess.

These things are what I need to think about if we require TTM to notify
the driver when private GPU maps potentially need to be set up, and I
think the simple answer is: let the driver set up the GPU maps when it
needs to.
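
Something along these lines in the driver, done at validation / command
submission time instead of in move_notify(), would keep the failure
where it can actually be handled (again just a sketch, reusing the
made-up drv_* names from above):

static int drv_vma_validate(struct drv_bo *dbo, struct drv_vma *vma)
{
        /* The GPU map already matches the bo's current placement. */
        if (vma->mapped && vma->mem_type == dbo->bo.mem.mem_type)
                return 0;

        /* Tear down a map left stale by the last move, if any. */
        if (vma->mapped)
                drv_vm_unmap(vma);

        /* Bind to wherever the bo lives now; errors go to the caller. */
        return drv_vm_map(vma, &dbo->bo.mem);
}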


>> Yes, right.  That can be done, and gives exactly the same functionality
>> as I *already* achieve with move_notify() but apparently have to change
>> just because you've decided nouveau/radeon are both doing the
>> WrongThing(tm).

Well, the point *is* that it's more or less the exact same
functionality, since you stated that an implementation might be
inefficient and difficult.

Also, I don't ask you to change this. I ask you to *not block* a change
if I want to clean this up, to avoid having to deal with the above mess
at a later date.

>>
>> Anyhow, I care more that 3.3 works than how it works.  So, whatever.  If
>> I must agree to this in order to get a regression fix in, then I guess I
>> really have no choice in the matter.
>>
>> Ben.
>>
>>
Well, I'm in the same situation.
I have to accept this, otherwise Radeon and Nouveau will regress, and
there's no time to do it differently.

/Thomas




