[Mesa-dev] Status update of XvMC on R600
sroland at vmware.com
Wed Nov 10 12:44:23 PST 2010
On 10.11.2010 20:31, Christian König wrote:
> Am Mittwoch, den 10.11.2010, 17:24 +0100 schrieb Roland Scheidegger:
>> On 10.11.2010 15:56, Christian König wrote:
>>> Am Montag, den 08.11.2010, 23:08 +0000 schrieb Andy Furniss:
>>>> Looking (for the first time) at iso13818-2 I think the chroma handling
>>>> would be part of display rather than decode, though the iso does specify
>>>> how chroma is laid out for fields in 184.108.40.206.
>>>> An article that describes the issues (it actually starts describing the
>>>> opposite problem of progressive treated as interlaced) is here.
>>> Thanks for the link. I understand the problem now, but can't figure out
>>> how to solve it without support for interlaced textures in the gallium
>>> driver. The hardware supports it, but neither gallium nor r600g have an
> interface for this support, and I have no intention of defining one.
>> I'm curious here, what the heck exactly is an interlaced texture? What
>> does the hw do with this?
> It differs in the interpolation of samples. I will try to explain what I
> need for video decoding with a little example. Let's say we have a 4x4
> texture:
> A B C D
> E F G H
> I J K L
> M N O P
> And let's also say that the texture coordinates are in the range 0..3
> (not normalized), so if you fetch the sample at coordinate (0,0) you get
> "A", a fetch at (1, 0) gets "B", a fetch at (0,1) gets "E", and so on.
> But if you fetch a sample from coordinate (0.5, 0) you get a linear
> interpolation of "A" and "B" (depending on the sampler mode used).
> The tricky part comes when you fetch a sample from coordinate (0, 0.5):
> with a normal texture you would get a linear interpolation of "A" and
> "E", and a fetch from (0, 1.5) would result in an interpolation of "E"
> and "I".
> Now with an interlaced texture if we fetch from (0, 0.5) we get an
> interpolation of "A" and "I", and when we fetch from (0, 1.5) we get an
> interpolation of "E" and "M", and so on.
> It even gets more tricky since the decision of which mode to use is made
> on a block-by-block basis, so switching from one mode to the other must
> be fast; we can't simply copy lines around or do something like that.
> I think it will probably end up using more than one texture fetch in
> the fragment shader and calculating the linear interpolation on our own.
> If you have another good idea just let me know.
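The two vertical sampling behaviours described above can be sketched in a few lines of Python (a hypothetical illustration, not Mesa code - the function names and the use of numbers in place of the letters A..P are mine):

```python
# The 4x4 example texture, with numbers standing in for A..P so the
# interpolation results are easy to verify.
texture = [
    [ 1,  2,  3,  4],   # A B C D
    [ 5,  6,  7,  8],   # E F G H
    [ 9, 10, 11, 12],   # I J K L
    [13, 14, 15, 16],   # M N O P
]

def fetch_normal(x, y):
    """Ordinary texture: interpolate vertically between adjacent lines."""
    y0 = int(y)
    frac = y - y0
    y1 = min(y0 + 1, 3)          # clamp at the bottom edge
    return texture[y0][x] * (1 - frac) + texture[y1][x] * frac

def fetch_interlaced(x, y):
    """Interlaced texture: lines 0/2 form one field and lines 1/3 the
    other, so the next sample vertically is two lines away."""
    y0 = int(y)
    frac = y - y0
    y1 = min(y0 + 2, 3)          # stay within the same field
    return texture[y0][x] * (1 - frac) + texture[y1][x] * frac
```

With these definitions a fetch at (0, 0.5) blends "A" and "E" in the normal case but "A" and "I" in the interlaced case, and (0, 1.5) blends "E" and "M" when interlaced, matching the behaviour described above.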
Ah so one texture contains data for both fields? I guess in this case
without hardware support you'd indeed need to do the linear blend on
your own (i.e. calculate 4 coordinates, use point sampling and blend -
or maybe instead of point sampling you could use fetch4 functionality at
least). For 3d graphics though an interlaced texture would be an awkward
concept. I guess if more modern hw supports this we could add an
interface for it (and hope that hw actually agrees on how it works exactly).
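The manual-blend workaround suggested above (four point samples plus a blend in the shader) can be sketched like this - a hypothetical CPU-side illustration of the idea, with all names mine, not code from either driver:

```python
# Emulate field-aware bilinear filtering with four point samples and a
# manual lerp, as a fragment shader would have to.
W, H = 4, 4
texels = [[y * W + x for x in range(W)] for y in range(H)]

def point_sample(x, y):
    """Nearest/point sampling with clamp-to-edge addressing."""
    x = max(0, min(W - 1, int(x)))
    y = max(0, min(H - 1, int(y)))
    return texels[y][x]

def lerp(a, b, t):
    return a * (1 - t) + b * t

def bilinear_field_aware(u, v, interlaced):
    """Bilinear filter where the vertical neighbour is 2 lines away for
    an interlaced block (to stay within one field), 1 line otherwise."""
    step = 2 if interlaced else 1
    x0, y0 = int(u), int(v)
    fx, fy = u - x0, v - y0
    top = lerp(point_sample(x0, y0),        point_sample(x0 + 1, y0),        fx)
    bot = lerp(point_sample(x0, y0 + step), point_sample(x0 + 1, y0 + step), fx)
    return lerp(top, bot, fy)
```

Since the interlaced/progressive decision is per block, a shader doing this would select `step` per fragment, which keeps the switch cheap - no copying of lines is needed.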