[RFC weston 13/16] compositor: Add a function to test if images transformed by a matrix should be bilinearly filtered
Bill Spitzak
spitzak at gmail.com
Tue Sep 30 14:24:49 PDT 2014
On 09/30/2014 12:35 PM, Derek Foreman wrote:
> Argh - thanks. Why isn't Z scale relevant? I'm worried about making
> assumptions about the transformations these matrices represent and
> having those assumptions violated in the future... For Z not to matter
> are we assuming projection will always be orthographic?
A projection matrix will have a non-zero entry at [3,2] so that Z
contributes to the output W. The output Z is only used to set the value
in the depth buffer (which I doubt you are using).
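
For example, the standard glFrustum matrix is

  |2n/(r-l)    0      (r+l)/(r-l)      0      |
  |   0     2n/(t-b)  (t+b)/(t-b)      0      |
  |   0        0     -(f+n)/(f-n) -2fn/(f-n)  |
  |   0        0         -1            0      |

where the -1 at [3,2] makes the output W equal to -Z, which is what
produces the perspective divide. An orthographic matrix has 0 there.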
>> I recommend instead of checking the rotation to instead look for zeros
>> in the correct locations in the matrix. Matrix must be of the form:
>>
>> |sx 0 0 tx|
>> |0 sy 0 ty|
>> |? ? ? ?|
>> |0 0 0 1|
>>
>> or
>>
>> |0 sx 0 tx|
>> |sy 0 0 ty|
>> |? ? ? ?|
>> |0 0 0 1|
>>
>> sx and sy must be ±1, and tx and ty must be integers. The ? can be any
>> value.
>
> That could save us the very expensive matrix decomposition. I'll try
> this out. Thanks.
>
> I think this may be better than decomposition for deciding to use video
> planes in compositor-drm as well.
Most code just looks at the matrix for this. The Cairo and Pixman code
works this way, for instance.
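
In weston terms the test could look something like this (just a sketch;
I am assuming the column-major float d[16] layout from weston's matrix
code, where element [row,col] is d[col*4 + row], and the helper name is
made up):

#include <math.h>
#include <stdbool.h>

/* Sketch: true if sampling through this matrix can use nearest
 * (impulse) filtering instead of bilinear. */
static bool
matrix_filter_can_be_nearest(const float *d)
{
        /* Bottom row must be |0 0 0 1|: no projection, so Z never
         * contributes to the output W. */
        if (d[3] != 0.0f || d[7] != 0.0f || d[11] != 0.0f || d[15] != 1.0f)
                return false;

        /* Z must not leak into output X or Y ([0,2] and [1,2]).
         * Row 2 (the "?" row) is ignored entirely. */
        if (d[8] != 0.0f || d[9] != 0.0f)
                return false;

        /* tx and ty must be whole pixels. */
        if (d[12] != floorf(d[12]) || d[13] != floorf(d[13]))
                return false;

        /* First form: |sx 0 0 tx| / |0 sy 0 ty| with sx, sy = +/-1. */
        if (d[4] == 0.0f && d[1] == 0.0f)
                return fabsf(d[0]) == 1.0f && fabsf(d[5]) == 1.0f;

        /* Second form: |0 sx 0 tx| / |sy 0 0 ty| (90/270 rotation). */
        if (d[0] == 0.0f && d[5] == 0.0f)
                return fabsf(d[4]) == 1.0f && fabsf(d[1]) == 1.0f;

        return false;
}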
> This is also used for the gl renderer, so I don't think I can count on
> that short circuit there... Though bilinear vs nearest doesn't have
> anywhere near the same performance impact there.
It may be worthwhile to check the Mesa code to see if it does this sort
of optimization when using software rendering.
I believe the hardware for bilinear sampling is so fast and optimized
that you do not save anything by asking for impulse (nearest) sampling.
And I got some indications that NVidia's driver does impulse sampling
for any sample sufficiently close to a pixel center. I saw this with a
floating-point image, where a math error of about 1/100 of a pixel
still produced the expected identity image. An equivalent OpenCL
program mixed in 1/100 of the neighboring pixel, which unfortunately
had a very large floating-point value, so it made the math error
visible. But the NVidia driver hid it, which seems to indicate some
optimization is done somewhere.
> Are you suggesting pixman always be set to use bilinear, and it'll just
> pick nearest automatically when the image transform doesn't require it?
I am currently working on the pixman and cairo code, and both of them
already test for this (i.e. before any changes I am submitting) and use
the nearest filter when possible. You can actually use the default
"good" filter all the time, as it is the same as bilinear (for scaling
up).
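
So on the pixman side something like this should be enough (a sketch;
the wrapper function is made up, but pixman_image_set_filter() and
PIXMAN_FILTER_GOOD are the real pixman API):

#include <pixman.h>

/* Sketch: just ask for "good" and let pixman pick the fast path.
 * As described above, pixman already examines the transform and
 * drops to nearest sampling when bilinear is not needed. */
static void
set_surface_filter(pixman_image_t *image, const pixman_transform_t *xform)
{
        pixman_image_set_transform(image, xform);
        pixman_image_set_filter(image, PIXMAN_FILTER_GOOD, NULL, 0);
}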