[Mesa-users] Internal hardware usage question

tom fogal tfogal at sci.utah.edu
Mon Sep 27 15:18:23 PDT 2010

Hi John,

"Biddiscombe, John A." <biddisco at cscs.ch> writes:
> If I implement current shaders with tweaks in GLSL, then they might
> work with future versions of mesa. (Previous attempts to use shaders
> with Mesa have proven to be unsatisfactory.) However it occurs to me
> that redoing the shaders as OpenCL may prove to be more useful as
> they will work using openGL/CL interop calls and in principle, openCL
> will compile them to make use of multiple cores (I'm planning ahead
> here for when the OpenCL compilers are any good).

You are planning far, far ahead ;)

> This way, I can do the shading in pure software in OpenCL, take
> advantage of any hardware acceleration on whichever platform I
> use, but keep mesa for the existing GL stuff that can't be easily
> replaced.
> My question/s is/are ....
> If I use OpenCL for a shader, will mesa cope with the GL/CL interop
> sharing of textures/memory etc.

Mesa has no support for OpenCL at present.  I vaguely remember a Mesa
developer mentioning he had a branch and was fiddling with it here and
there.

So, really, this question can't be answered; you can't get interop
between X and Y if there is no Y ;)

> Does mesa internally use any OpenCL-like functionality. It seems like
> there's no point having openGL any more since the big machines won't
> support it in hardware (and if I use OpenCL, I'm opening myself to a
> world of pain getting the two to work well together).

There has been talk now and again about making Gallium suitable for
compute work of that kind, but nothing concrete has come of it so far.

> If Mesa used an OpenCL layer internally to do parallel work it'd make
> sense - but I don't know anything about how mesa is structured.

It does not.

> Is anything parallelized internally in Mesa software rendering?

Not with swrast.  I don't believe softpipe is either, but I don't know
for sure.  'llvmpipe' takes advantage of vector instructions and the
like, I think (I've not used it myself yet).

Read up on gallium.  I think of it as the abstract base class of
modernish GPUs, which is probably offensively simplistic to the gallium
devs, but it works for me.  I'd put an OpenCL implementation as a class
derived from gallium.

> Suppose a vendor produces a machine with many C/GPUs inside, but with
> no X servers running. Can Mesa make use of these accelerators? (How
> do the drivers bind to the hardware). (I was always confused as to
> why mesa - which was supposed to be a software implementation of GL,
> had hardware drivers at all - have things gone full circle?)

Mesa has included hardware drivers for a long, long time now.

A user needs some way to access the driver.  Currently they do this via
an X context / OpenGL.  One could also do it via an EGL context /
OpenGL, and in fact current Mesas implement that.  I think the current
EGL code binds to an X context under the hood, though.

Said another way: what you want isn't hard, per se, but it doesn't
exist at present, and I wouldn't bet on it appearing soon.

> Any help appreciated. As you can see from the contents of this email,
> I suffer from some confusion about what mesa is supposed to be.

Getting much further into my personal opinions:

  . Wait a while on OpenCL.  NVIDIA's implementation is poor (see a
  semi-recent Oak Ridge paper comparing OpenCL with CUDA) and they have
  financial reasons to keep it that way.  I'm not sure AMD's binary
  driver even supports OpenCL yet (on Linux, and who cares otherwise,
  really?).  Mesa's support is not close, and the funding doesn't seem
  to be there to create a well-supported, performant OpenCL driver.

  . Modern supercomputers increasingly have GPUs.  TACC's Longhorn,
  Argonne's Eureka, LLNL's Gauss, etc.  I'm not sure how long this
  trend will continue, but it seems pretty fair to say that future
  peta- and exa-scale resources are not going to 'look' like current
  supercomputers, based on current large-scale architectures.  My
  opinion is that the CPU and GPU camps must merge; GPUs scale too
  well (power-wise) to not be used in future supercomputers.  I would
  bet that Intel won't start shipping chips destined to be paired with
  NVIDIA chips, though; rather, we'll start seeing hardware which is
  GPU-like and APIs that force scaling in a similar manner (kernels,
  stream programming).

  . Switch to GLSL for rendering tasks; as I outline above, I don't
  think we'll have GLSL on exascale clusters, but it seems safe to say
  that the *model* GLSL imposes is going to be along for quite some
  time.  Such code should port more easily to next-gen systems.

  . Use OpenGL, and complain to cluster admins who don't support
  running/starting X servers.  (Don't render directly; render to an
  FBO.)

  . While things settle, use OpenMP for small scale (8, 16 core)
  non-graphical parallelism that can't use GLSL, and scale those
  building blocks using MPI.

Just my $0.02,
