[Mesa-dev] nv3x xfce4 compositing issue, making good progress, need help / input
Hans de Goede
hdegoede at redhat.com
Mon Sep 28 07:14:20 PDT 2015
Hi,
As always, thanks for your input :)
On 25-09-15 17:21, Ilia Mirkin wrote:
> On Fri, Sep 25, 2015 at 10:34 AM, Hans de Goede <hdegoede at redhat.com> wrote:
>> Hi,
>>
>> On 11-09-15 18:48, Ilia Mirkin wrote:
>>>
>>> On Fri, Sep 11, 2015 at 10:46 AM, Hans de Goede <hdegoede at redhat.com>
>>> wrote:
>>>>
>>>> Hi,
>>>>
>>>> I've been working on trying to fix this one:
>>>>
>>>> https://bugs.freedesktop.org/show_bug.cgi?id=90871
>>>>
>>>> And today I've more or less root caused this, it seems
>>>> that some code is making glTexImage2D calls with npot
>>>> width / height, which fails on nv3x (whereas it works
>>>> on nv4x).
>>>>
>>>> The bug has a simple reproducer attached, but that is
>>>> not directly calling glTexImage2D, so it seems that
>>>> the npot values are coming from some helper library
>>>> used (glXBindTexImageEXT ?).
>>>>
>>>> 2 questions:
>>>>
>>>> 1) Does anyone know / suspect where the glTexImage2D call
>>>> is originating from? (see the test-program attachment
>>>> in bugzilla)
>>>>
>>>> 2) Is this a bug in glXBindTexImageEXT (assuming that is
>>>> the culprit), or should the test program take into account
>>>> that the card does not support npot when calling this ?
>>>
>>>
>>> Without directly answering your questions (as I don't know the
>>> answers), without NPOT support (which nv3x doesn't have), you can only
>>> use non-power-of-two textures with GL_TEXTURE_RECTANGLE, not
>>> GL_TEXTURE_2D. The program that you have does appear to detect this
>>> though, and uses the rect target if ARB_texture_rectangle is
>>> available, which it should be. I guess it should just bail if both
>>> ARB_texture_rectangle and ARB_texture_non_power_of_two aren't
>>> available...
>>
>>
>> I've been working on getting to the bottom of this one. The NPOT problem
>> is only part of the story (and can be worked around, I believe).
>>
>> The real problem seems to be that nv3x cards only support swizzled textures
>> and not linear ones. This makes it sorta hard to do texture-from-pixmap.
>> Specifically when trying to use glXBindTexImageEXT with a local client we
>> end up in gallium/state_trackers/dri/dri_drawable.c: dri_set_tex_buffer2()
>> which calls nv30_miptree_from_handle() on the first call on a certain pixmap
>> and is a nop after that.
>> dri_set_tex_buffer2() does call drawable->update_tex_buffer() every time,
>> but that currently is a nop.
>>
>> There are 2 possible solutions here:
>> 1) Make nv30_miptree_from_handle() create a new bo rather than calling
>> bo_from_handle() when called to create a texture (rather than a front /
>> back buffer), and override the default nop drawable->update_tex_buffer()
>> with a function calling nv30_transfer_rect to copy the linear pixmap
>> data into the swizzled texture we've newly allocated
>> 2) Solution 1 involves a blit, so it is not going to be very fast;
>> alternatively, we could simply stop claiming that
>> GLX_EXT_texture_from_pixmap
>> is supported on nv3x (it does work on nv4x already as that has support
>> for linear textures)
>
> Looking at the spec, it actually looks like you can drop
> GLX_TEXTURE_2D_BIT to make 2D textures go away, and mark them all RECT
> in bo_from_handle() [for nv3x only of course].
Erm, unless I'm mistaken, nv3x does not do RECT at all since it does
not support NPOT textures ...
We actually end up using TEXTURE_2D on nv3x because of this; on nv4x,
which does support RECT, there is no issue.
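
For reference, the decision a well-behaved texture-from-pixmap client
has to make boils down to something like this (a sketch of the usual
pattern, not the exact code from the reproducer; is_pot() is a made-up
helper, a current GL context is assumed for glGetString(), and the
strstr() matching is sloppier than a proper extension check):

#include <string.h>
#include <GL/gl.h>
#include <GL/glx.h>

static int is_pot(unsigned v) { return v && (v & (v - 1)) == 0; }

/* Pick a GLX_TEXTURE_TARGET_EXT value for a w x h pixmap, or return -1
 * when no target is usable and the client should fall back / bail. */
static int pick_tfp_target(unsigned w, unsigned h)
{
   const char *exts = (const char *) glGetString(GL_EXTENSIONS);
   int have_npot = strstr(exts, "GL_ARB_texture_non_power_of_two") != NULL;
   int have_rect = strstr(exts, "GL_ARB_texture_rectangle") != NULL;

   if (have_npot || (is_pot(w) && is_pot(h)))
      return GLX_TEXTURE_2D_EXT;        /* POT sizes work everywhere */
   if (have_rect)
      return GLX_TEXTURE_RECTANGLE_EXT; /* npot via the rect target */
   return -1;                           /* no usable target */
}

On nv3x with an npot pixmap we end up in the last two cases, which is
where things currently go wrong.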
So simply disabling glXBindTexImageEXT seems best. But while looking
at a way to disable it I noticed that
src/gallium/state_trackers/glx/xlib/glx_api.c currently does not seem
to have any card-specific code at all; all gallium drivers appear to
advertise the same set of glx extensions, and I'm not sure we want to
break that.
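
If we did want to make that per-driver, I imagine it would boil down to
something like the hypothetical sketch below (not existing mesa code;
build_glx_extensions() is a made-up helper, and PIPE_CAP_NPOT_TEXTURES
is only a stand-in since afaik there is no pipe cap for "supports
linear textures"):

#include <stdio.h>
#include <string.h>
#include "pipe/p_defines.h"
#include "pipe/p_screen.h"

/* Hypothetical: build the glx extension string per screen instead of
 * advertising one static list, so that GLX_EXT_texture_from_pixmap can
 * be left out on hardware which cannot sample a pixmap directly. */
static void
build_glx_extensions(struct pipe_screen *pscreen, char *buf, size_t len)
{
   snprintf(buf, len, "GLX_ARB_get_proc_address GLX_MESA_copy_sub_buffer");
   if (pscreen->get_param(pscreen, PIPE_CAP_NPOT_TEXTURES))
      strncat(buf, " GLX_EXT_texture_from_pixmap", len - strlen(buf) - 1);
}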
Which would bring us to implementing solution 1.
> Now
> src/gallium/state_trackers/glx/xlib/glx_api.c always includes this
> bit, but it should be easy to remove this if you don't have NPOT
> texture support, or based on some PIPE_CAP (should probably check with
> the r300 folk for how the ati ddx handles this). Of course this
> approach might just be asking for trouble from non-conformant
> compositors which don't check if it's ending up as a 2D or RECT
> texture.
>
> 3) Change the DDX to swizzle POT-sized pixmaps. From the spec:
>
> - If only GL_ARB_texture_rectangle is supported GL_TEXTURE_2D will
> be used for all power of two pixmap sizes and GL_TEXTURE_RECTANGLE_ARB
> will be used for all non power of two pixmap sizes.
But AFAIK nv3x does not support GL_TEXTURE_RECTANGLE_ARB, so this
is not applicable.
> This however might end up a bit problematic since you can't apriori
> know whether you'll have to scan out a pixmap (but I'd like to see a
> POT-sized monitor), and also all the fragment programs probably assume
> RECT coordinates.
>
>
>>
>> So Ilia, with your mesa nouveau maintainer hat on, which solution shall we
>> implement?
>
> Since the "we" is really "you", it's really your call. It sounds like
> the main users of GLX_EXT_texture_from_pixmap are compositors... AFAIK
> most GL-based compositors require GL2 anyways, so this is probably not
> that big of an issue. I'd just drop that bit, or the whole ext. But if
> you want to try some of the other solutions, that's fine with me as
> well :)
Right, so just disabling GLX_EXT_texture_from_pixmap was my first preference
as well, also because of the limited amount of VRAM on nv3x cards (usually
only 64 MB). But as said above, the gallium glx code seems to assume a
single set of glx extensions for all supported cards; we can probably
change that, but given this I think it may be better to first give
option 1 a try.
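
For the record, the driver side of option 1 that I have in mind looks
roughly like the sketch below (names made up; how this gets hooked up
to drawable->update_tex_buffer() and where the pipe_context comes from
still needs to be figured out). It just uses the generic
resource_copy_region entry point; internally that would boil down to
nv30_transfer_rect anyway:

#include "pipe/p_context.h"
#include "pipe/p_state.h"
#include "util/u_box.h"

/* Copy the linear pixmap contents (the bo imported via bo_from_handle())
 * into the swizzled texture the application actually samples from.
 * Meant to run from drawable->update_tex_buffer(), i.e. on every
 * glXBindTexImageEXT(). */
static void
nv30_update_tex_from_pixmap(struct pipe_context *pipe,
                            struct pipe_resource *swizzled_tex,
                            struct pipe_resource *linear_pixmap)
{
   struct pipe_box box;

   u_box_2d(0, 0, linear_pixmap->width0, linear_pixmap->height0, &box);
   pipe->resource_copy_region(pipe, swizzled_tex, 0, 0, 0, 0,
                              linear_pixmap, 0, &box);
}

The extra copy of every redirected pixmap in VRAM is not great on 64 MB
cards, but it seems the least invasive way to keep the extension
working.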
Regards,
Hans