[PATCH] drm/radeon: Clean up radeon_uvd_force_into_uvd_segment

Michel Dänzer michel at daenzer.net
Wed Oct 29 01:58:31 PDT 2014


On 28.10.2014 19:32, Christian König wrote:
> Am 28.10.2014 um 10:28 schrieb Michel Dänzer:
>> From: Michel Dänzer <michel.daenzer at amd.com>
>>
>> It was adding a second placement for the second 256MB segment of VRAM,
>> which is not a good idea for several reasons:
>>
>> * It fills up the first 256MB segment (which is also typically the CPU
>>    accessible part) of VRAM first, even for BOs which could go into the
>>    second 256MB segment. Only once there is no space in the first segment
>>    does it fall back to the second segment.
>> * It doesn't work with RADEON_GEM_NO_CPU_ACCESS BOs, which already use
>>    two VRAM placements.
>>
>> Change it to instead restrict the range for each VRAM placement. If the
>> BO can go into the second 256MB segment, set up the range to include
>> both segments, and set the TTM_PL_FLAG_TOPDOWN flag. That should result
>> in preferring the second segment for those BOs, falling back to the
>> first segment.
>>
>> Signed-off-by: Michel Dänzer <michel.daenzer at amd.com>
>
> I'm not sure if this will work correctly. Please keep in mind that even
> if BOs can be in the second segment they are not allowed to cross
> segment borders.
>
> E.g. if you just set lpfn = (2 * 256 * 1024 * 1024) >> PAGE_SHIFT it
> might happen that the first half of a BO lands in the first 256MB
> segment and the second half of the BO in the second 256MB segment.
>
> Have you considered that as well?

No, I wasn't aware of that restriction.
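
Just to make sure I understand the failure mode: with a single placement 
covering both segments, TTM could pick an offset that straddles the 256MB 
border. Something like this (made-up numbers and a hypothetical helper, 
not actual driver code):

	#define UVD_SEGMENT_SIZE (256ULL * 1024 * 1024)

	/* Hypothetical helper: true if a BO placed at 'offset' with 'size'
	 * starts in one 256MB segment and ends in another.
	 */
	static bool crosses_segment_border(u64 offset, u64 size)
	{
		return (offset / UVD_SEGMENT_SIZE) !=
		       ((offset + size - 1) / UVD_SEGMENT_SIZE);
	}

	/* e.g. a 32MB BO placed at offset 240MB spans 240MB..272MB, so
	 * crosses_segment_border(240ULL << 20, 32ULL << 20) is true.
	 */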

Looking at the current code again, radeon_uvd_force_into_uvd_segment 
returns early if (allowed_domains == RADEON_GEM_DOMAIN_VRAM || 
rbo->placement.num_placement > 1). I think both of these conditions can 
only be false if allowed_domains == RADEON_GEM_DOMAIN_GTT, so can the 
second 256MB segment only ever be used for GTT?
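
Paraphrasing that part from memory (not verbatim, so details may be 
slightly off), the flow looks roughly like this:

	void radeon_uvd_force_into_uvd_segment(struct radeon_bo *rbo,
					       uint32_t allowed_domains)
	{
		int i;

		/* Restrict every existing placement to the first 256MB
		 * segment.
		 */
		for (i = 0; i < rbo->placement.num_placement; ++i) {
			rbo->placements[i].fpfn = 0;
			rbo->placements[i].lpfn =
				(256 * 1024 * 1024) >> PAGE_SHIFT;
		}

		/* VRAM-only BOs stay in the first segment... */
		if (allowed_domains == RADEON_GEM_DOMAIN_VRAM)
			return;

		/* ...as do BOs which already have more than one placement. */
		if (rbo->placement.num_placement > 1)
			return;

		/* Only a single-placement BO that isn't VRAM-only, i.e. GTT,
		 * gets the extra placement for the second 256MB segment.
		 */
		rbo->placements[1] = rbo->placements[0];
		rbo->placements[1].fpfn += (256 * 1024 * 1024) >> PAGE_SHIFT;
		rbo->placements[1].lpfn += (256 * 1024 * 1024) >> PAGE_SHIFT;
		rbo->placement.num_placement++;
		rbo->placement.num_busy_placement++;
	}

If I'm reading that right, only GTT-only BOs ever reach the code that adds 
the second-segment placement.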


-- 
Earthling Michel Dänzer            |                  http://www.amd.com
Libre software enthusiast          |                Mesa and X developer

