[IDEA] shrink xrender featureset
linuxhippy at gmail.com
Sun Nov 23 03:08:11 PST 2008
> Trapezoids for example would require implementing a rasteriser in shaders.
> Pretty much everything that doesn't get accelerated these days requires
> Tomorrow someone might come and ask for a different type of gradient, why
> even bother?
Well, if you let me choose between software rendering on the client or
software rendering on the server, I would prefer the latter.
Furthermore, how would you generate anti-aliased geometry if not with
trapezoids - would you XPutImage the rendered geometry to the server?
> Fallbacks are rarely efficient, iirc intel GEM maps memory with write
> combining, that isn't very friendly for readback.
For gradients you don't really need fallbacks, and for trapezoids you
can use a temporary mask.
This is all write-only; it's just a matter of how the
driver/acceleration architecture handles it.
> I intentionally brought this up before people actually implement this. The
> question is why not use opengl or whatever is available to do this? You're
> putting fixed stuff into a library that only hardware with flexible shaders
> can do, why not use something that just exposes this flexibility in the
> first place?
Well, first of all - because it's already there... and, except for
some not-so-mature areas, it works quite well.
Second, Java has an OpenGL backend, and currently I am not sure
whether even the current NVidia drivers are able to run it; I am
pretty sure _none_ of the open drivers can.
I guess XRender has the advantage that drivers are simpler to
implement compared to a full-fledged OpenGL implementation.
Once OpenGL is stable, mature, and scalable enough to run dozens of
apps simultaneously, it should not be a problem to host XRender on top
of it.