[Mesa-dev] Request for sponsored development...

Christian Weiß christian.weisz at spoc.at
Sun Mar 6 09:11:06 PST 2011


After some detours, I have finally arrived here.

A commercial project of mine has a requirement that may lead to some extensions within AIGLX and Mesa. I'm going to describe the problem, and I'm open to whatever comes to your mind, including alternative approaches. If we (from a technical perspective: you folks) come to the conclusion that it's beneficial for Mesa, AIGLX, or both, I'm willing to sponsor the development, fully accepting the licensing terms of the relevant projects.

Consider an installation of roughly 600 low-budget thin clients (with almost no 3D support from the graphics chip) running as X terminals, with one host computer serving every 25-30 stations. This infrastructure is to become the basis of an architectural/interior-planning system with serious demands in terms of 3D rendering. The client hardware clearly cannot provide the necessary power, so it comes down to a server-based rendering approach: some of ATI's or NVIDIA's latest boards would be attached to the host computers, forming a CUDA cluster for their terminals. The question, then, is how to get the rendered image back to the client. And, although not absolutely necessary, all of this should be transparent to the application.

And here is the idea: the driver for the CUDA cluster comes from the vendor and is used without any change. A post-processing step within the pipeline (probably a dedicated fragment shader) compresses the rendered content and writes it to an off-screen buffer (FBO). "Something" in either Mesa or AIGLX then takes that buffer and transfers it via the X protocol to the X server of the X terminal. At the other end of the protocol we need a small piece of code that decompresses the buffer and copies it into the frame buffer of the relevant window. Ideally this places no burden on the CPU but is done on the GPU of the X terminal's hardware, again perhaps by a simple fragment shader that takes the compressed buffer as a texture image; even today's lowest-end hardware should provide that capability. I guess this would have to be supported by a GLX extension in the underlying driver. To a certain extent, the latter part sounds much like what AIGLX does to accelerate desktop effects.
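To make this concrete, here is a minimal sketch of both endpoints in C/OpenGL. None of it is existing Mesa or AIGLX API: compress_prog and decompress_prog stand for the hypothetical compression/decompression fragment shaders, draw_fullscreen_quad() is an assumed helper, and the transport between the two functions is exactly the open question.

/* Minimal sketch of both endpoints.  Nothing here is existing
 * Mesa/AIGLX API: compress_prog, decompress_prog and
 * draw_fullscreen_quad() are assumptions for illustration. */
#include <GL/glew.h>   /* assumed, for the FBO entry points */
#include <stdlib.h>

extern void draw_fullscreen_quad(void);   /* assumed helper */

/* Host side: run the compression fragment shader over the rendered
 * scene, packing the result into a (smaller) cw x ch FBO color
 * buffer, then read it back for transport over the X protocol. */
GLubyte *compress_frame(GLuint scene_tex, GLuint compress_prog,
                        int cw, int ch)
{
    GLuint fbo, out_tex;

    /* Off-screen target that will hold the compressed payload. */
    glGenTextures(1, &out_tex);
    glBindTexture(GL_TEXTURE_2D, out_tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, cw, ch, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, out_tex, 0);
    glViewport(0, 0, cw, ch);

    glUseProgram(compress_prog);          /* hypothetical shader   */
    glBindTexture(GL_TEXTURE_2D, scene_tex);
    draw_fullscreen_quad();

    GLubyte *buf = malloc((size_t)cw * ch * 4);
    glReadPixels(0, 0, cw, ch, GL_RGBA, GL_UNSIGNED_BYTE, buf);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);
    glDeleteTextures(1, &out_tex);
    return buf;                           /* hand off to transport */
}

/* Terminal side: treat the received compressed buffer as an
 * ordinary texture and let a decompression fragment shader expand
 * it straight into the window's frame buffer. */
void present_frame(const GLubyte *compressed, int cw, int ch,
                   GLuint decompress_prog)
{
    static GLuint tex;
    if (!tex)
        glGenTextures(1, &tex);

    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, cw, ch, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, compressed);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);  /* window frame buffer  */
    glUseProgram(decompress_prog);         /* hypothetical shader  */
    draw_fullscreen_quad();
}

The point of the sketch is the symmetry: the expensive compression runs once on the host's CUDA-class GPU, while the terminal only needs texturing and a trivial shader, which even low-end chips offer.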

What I'm not so sure about is whether

1) AIGLX provides the proper extensions to the X protocol, and
2) the X protocol is at all optimized for the rather considerable load of such data at high frame rates (see the rough numbers below).
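To give a feeling for point 2 (the numbers are illustrative assumptions, not measurements): an uncompressed 1280x1024 window at 24-bit color and 30 frames/s already amounts to

    1280 * 1024 * 3 bytes * 30/s ≈ 118 MB/s ≈ 940 Mbit/s

per terminal, i.e. a single terminal would nearly saturate a gigabit link if the frames went over the wire raw (e.g. via plain XPutImage). With 25-30 terminals per host, a substantial compression ratio on the GPU seems unavoidable, which is why the suitability of the X protocol for the remaining compressed stream is the critical question.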

Suggestions are very welcome, even ones telling me I'm in completely the wrong place here.

Cheers,


Christian.



