HPC (High Performance Compute) Architecture

Michal Suchanek hramrach at centrum.cz
Fri Mar 18 02:35:32 PDT 2011


On 18 March 2011 04:14, Trevour Crow <crowtr at yahoo.com> wrote:
>> From: Josh Leverette <coder543 at gmail.com>
>> Subject: Re: HPC (High Performance Compute) Architecture
>> To: "jonsmirl at gmail.com" <jonsmirl at gmail.com>
>> Cc: "wayland-devel at lists.freedesktop.org" <wayland-devel at lists.freedesktop.org>
>> Date: Thursday, March 17, 2011, 9:13 PM
>> http://www.onlive.com ? But yeah, I wanted this to be
>> user-transparent for all applications, since there is no way we could
>> modify proprietary applications that use a lot of processor real
>> estate, and this would be a one-time deal, no need to do it on an
>> app-by-app basis. But, I understand.
> I'm actually working with Gallium to make this possible - after discovering
> that indirect mode GLX was limited to OpenGL 1.4 (or at least the Mesa/X.Org
> stack is, which is all I care about), I decided to see if I couldn't use
> the pipe/state tracker architecture to transparently relay rendering
> commands from one machine to another; I haven't quite started work on how
> "netpipe" will connect to the remote state tracker, but I've started laying
> down the pipe driver side and it seems possible. This means that, combined
> with the remote Wayland GSoC project, what you're talking about should be
> possible for any program that ultimately renders using Gallium.

It's generally possible. Gallium removes one obstacle: it provides a
middleware layer that is the same regardless of the graphics hardware
used for rendering. You will not be able to move an application from
more powerful hardware to hardware with substantially fewer features,
but that's the only restriction; Intel, ATI, nVidia and VMware should
all work if you limit the features the application can use to those of
your least capable card.
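
Roughly, a netpipe screen would only have to report the minimum of the
capabilities of the cards involved. Something like this (netpipe_screen
itself is made up; pipe_screen::get_param and the PIPE_CAP_* queries are
the existing Gallium interface):

#include "pipe/p_screen.h"
#include "pipe/p_defines.h"

/* Made-up wrapper screen: it would hold the screen the application
 * currently renders on and the screen it may later be moved to, and
 * report only the capabilities both of them have. */
struct netpipe_screen {
   struct pipe_screen base;
   struct pipe_screen *current;
   struct pipe_screen *target;
};

static int
netpipe_get_param(struct pipe_screen *screen, enum pipe_cap param)
{
   struct netpipe_screen *np = (struct netpipe_screen *)screen;
   int a = np->current->get_param(np->current, param);
   int b = np->target->get_param(np->target, param);

   /* advertise only what the least capable card can do; for boolean
    * caps the minimum is simply the logical AND of the two answers */
   return a < b ? a : b;
}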

The other issue is texture upload. 3D (or OpenGL) applications often
use huge amounts of texture data and require that it be uploaded to the
card quickly. You can present a unified netpipe model where both the
card memory and the system memory appear as one huge netpipe device
memory, and cache as many textures as you can on the remote end, but
this won't help more intensive applications that switch between parts
of a complex scene (and swap the textures and models accordingly), or
things like MPlayer's GL video output.
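
The caching would look roughly like this (nothing here is existing
code; net_send() stands in for the wire protocol): key each texture by
a content hash and only transfer the data the remote end does not hold
yet. An application that streams fresh texture data every frame still
gains nothing from it.

#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* FNV-1a, standing in for whatever content hash the real thing would use */
static uint64_t
texture_key(const void *data, size_t size)
{
   const unsigned char *p = data;
   uint64_t h = 0xcbf29ce484222325ull;
   for (size_t i = 0; i < size; i++) {
      h ^= p[i];
      h *= 0x100000001b3ull;
   }
   return h;
}

/* made-up stand-in for the wire protocol */
static void
net_send(const void *data, size_t size)
{
   (void)data; (void)size;
}

/* keys of textures the remote end is known to hold already */
static uint64_t known_keys[1024];
static size_t num_known;

static bool
remote_has(uint64_t key)
{
   for (size_t i = 0; i < num_known; i++)
      if (known_keys[i] == key)
         return true;
   return false;
}

static void
upload_texture(const void *texels, size_t size)
{
   uint64_t key = texture_key(texels, size);

   net_send(&key, sizeof key);     /* always cheap: tell the remote end which texture to bind */
   if (!remote_has(key)) {
      net_send(texels, size);      /* cache miss: the whole texture crosses the network */
      if (num_known < 1024)
         known_keys[num_known++] = key;
   }
}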

Relying on a powerful CPU/GPU to compress the rendered graphics into a
video stream might be a more general solution. It has the advantage of
decoupling rendering and display. When there is a fast scene change you
may get temporary compression artifacts in the video stream, but the
rendering is not affected, and applications with poor event loops that
only check for input between rendering frames (most, unfortunately)
remain responsive. It also blends naturally with the Wayland protocol,
which only supports window pixmap updates.
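
A rough illustration of the decoupling (made-up code; encode_and_send()
just stands in for the real codec and transport): the renderer deposits
its newest frame and returns immediately, and a separate thread
compresses whatever frame is current when it gets around to it.

#include <pthread.h>
#include <stdbool.h>
#include <string.h>
#include <time.h>

#define FRAME_BYTES (1280 * 720 * 4)   /* example: 720p RGBA */

static pthread_mutex_t frame_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned char latest[FRAME_BYTES];
static bool have_new_frame;

/* called by the rendering side after each frame; it never waits for
 * the encoder, so a slow codec or network cannot stall the application */
void
submit_frame(const unsigned char *pixels)
{
   pthread_mutex_lock(&frame_lock);
   memcpy(latest, pixels, FRAME_BYTES);
   have_new_frame = true;
   pthread_mutex_unlock(&frame_lock);
}

/* made-up stand-in for the actual video codec and transport */
static void
encode_and_send(const unsigned char *pixels, size_t size)
{
   (void)pixels; (void)size;
}

/* runs in its own thread: compresses whatever frame is newest when it
 * gets around to it; frames the encoder cannot keep up with are simply
 * dropped from the stream and the rendering itself is not affected */
void *
encoder_thread(void *arg)
{
   static unsigned char frame[FRAME_BYTES];
   struct timespec pause = { 0, 1000000 };   /* 1 ms */
   (void)arg;

   for (;;) {
      bool fresh;

      pthread_mutex_lock(&frame_lock);
      fresh = have_new_frame;
      if (fresh) {
         memcpy(frame, latest, FRAME_BYTES);
         have_new_frame = false;
      }
      pthread_mutex_unlock(&frame_lock);

      if (fresh)
         encode_and_send(frame, FRAME_BYTES);
      else
         nanosleep(&pause, NULL);
   }
   return NULL;
}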

Thanks

Michal

