New shatter development tree

Adam Jackson ajax at
Mon Nov 10 13:57:38 PST 2008

On Mon, 2008-11-10 at 20:27 +0000, Colin Guthrie wrote:
> Adam Jackson wrote:
> >
> > 
> > I'll be pushing updates here as I go.  Conceptual review, particularly
> > of the strategy documentation, is most welcome.
> > 
> > Note that this is an ABI change:
> > 
> >> @@ -478,8 +493,6 @@ typedef struct _Screen {
> >>      CloseScreenProcPtr CloseScreen;
> >>      QueryBestSizeProcPtr QueryBestSize;
> >>      SaveScreenProcPtr SaveScreen;
> >> -    GetImageProcPtr GetImage;
> >> -    GetSpansProcPtr GetSpans;
> >>      PointerNonInterestBoxProcPtr PointerNonInterestBox;
> >>      SourceValidateProcPtr SourceValidate;
> > 
> > So you will certainly need to rebuild your drivers if you expect to play
> > with this.  There's no particularly strong reason for this to be a
> > break, I suppose, but we really do have a lot of things that would be
> > pleasant to fix with an ABI break (render arg reduction, MAXFORMATS,
> > etc.).
> Forgive me for being dumb but what is shatter?
> I have read about this before but the details are fuzzy :)
> Is this something that will allow working around the 2048 DRI limit in 
> 945GMs or is it related to the dual-GPU stuff that lappy vendors love so 
> much these days? (or perhaps even both?)

The primary motivation is working around coordinate limits, yes.  The
idea is that if you have two CRTCs that can each scan 2k wide, right now
that implies a total width limit of 2k, because we force them both to
point to the same physically contiguous allocation.  If you could
somehow break apart the root window's pixmap, such that rendering to the
left half went to one piece and rendering to the right half went to the
other, then you could point one CRTC at each and be happy.

Thus, "shatter", to break into pieces (which I'll jargonize as "shards"
from here on in).

Internally to the server we more or less assume that all pixmaps (and by
extension, all windows) have a single allocation of pixels behind them.
You can fix this if you're careful, by creating some pixmaps with _no_
direct storage, attaching a bunch of shard pixmaps to them, and
intercepting rendering to the virtual pixmap and re-dispatching it
against the shards, translating as appropriate.  This works best when
you enforce a strict separation in your internal data structures between
the thing you're drawing on and the way in which you're drawing -
between the Drawable and the GC - because you need the ability to create
ephemeral rendering contexts but long-lived surfaces.  This requires a
bit of contortion to work in the face of Render and Xv, but it looks
doable.  Finally, extending this to DRI requires also shattering the
back buffer allocations and replaying display lists against each.  Right
now I'm just working on the 2d parts, but 3d is definitely in the plan.
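The re-dispatch idea above can be sketched in a few lines of C.  Note this is purely illustrative: the type and function names here (ShardRec, VirtualPixmapRec, virtual_fill_rect) are made up for the example and are not the actual server API; a real implementation would hook the Screen/GC operation tables instead of a toy fill.

```c
#include <assert.h>

/* A box in virtual-pixmap coordinates. */
typedef struct { int x, y, w, h; } BoxRec;

/* One shard: a real allocation covering part of the virtual pixmap.
 * (Hypothetical structure, not the xserver's.) */
typedef struct {
    BoxRec bounds;      /* region of the virtual pixmap this shard backs */
    int filled;         /* toy "rendering happened here" flag */
    BoxRec last_fill;   /* last fill, in shard-local coordinates */
} ShardRec;

/* The virtual pixmap has no pixel storage of its own, only shards. */
typedef struct {
    int width, height;
    int nshards;
    ShardRec *shards;
} VirtualPixmapRec;

/* Intersect a with b; returns 0 if they don't overlap. */
static int clip_box(const BoxRec *a, const BoxRec *b, BoxRec *out)
{
    int x1 = a->x > b->x ? a->x : b->x;
    int y1 = a->y > b->y ? a->y : b->y;
    int x2 = (a->x + a->w < b->x + b->w) ? a->x + a->w : b->x + b->w;
    int y2 = (a->y + a->h < b->y + b->h) ? a->y + a->h : b->y + b->h;
    if (x2 <= x1 || y2 <= y1)
        return 0;
    out->x = x1; out->y = y1; out->w = x2 - x1; out->h = y2 - y1;
    return 1;
}

/* Intercept a fill on the virtual pixmap and re-dispatch it against
 * each shard it touches, translating into shard-local coordinates. */
void virtual_fill_rect(VirtualPixmapRec *vp, BoxRec rect)
{
    for (int i = 0; i < vp->nshards; i++) {
        ShardRec *s = &vp->shards[i];
        BoxRec clipped;
        if (!clip_box(&rect, &s->bounds, &clipped))
            continue;                 /* this shard is untouched */
        clipped.x -= s->bounds.x;     /* translate to shard coords */
        clipped.y -= s->bounds.y;
        s->filled = 1;
        s->last_fill = clipped;
    }
}
```

With two 2048-wide shards side by side, a fill spanning the seam lands in both shards, each receiving only its clipped, translated piece - which is exactly the property that lets each CRTC scan out its own shard.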

This is all not dissimilar to the Xinerama problem, where you have
multiple GPUs that you want to dispatch rendering against.  Eventually I
do want to see the xinerama and shatter machinery merged, probably by
having one super ScreenRec that dispatches its shard rendering against
other ScreenRecs (as opposed to, say, RANDR shattering, where the shards
and the virtual are attached to the same ScreenRec).  So there's some
application to the switchable GPU machines too, since in principle the
xinerama layer need not dispatch against _both_ shards.
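That last point - dispatching against only the shards a request actually touches - is worth a sketch too.  Again, these are invented names (MiniScreen, SuperScreen), not the real ScreenRec machinery; the shape is just a function-pointer table per child screen, with the super screen walking its children and skipping any whose bounds the request misses:

```c
#include <assert.h>

#define MAX_CHILDREN 4

typedef struct MiniScreen MiniScreen;

/* Stand-in for a per-GPU ScreenRec with its own rendering hooks. */
struct MiniScreen {
    int x, y, w, h;       /* placement in the virtual desktop */
    int fills;            /* count of fills dispatched to this screen */
    void (*FillRect)(MiniScreen *, int x, int y, int w, int h);
};

/* Stand-in for the "super" ScreenRec that owns the children. */
typedef struct {
    int nchildren;
    MiniScreen *children[MAX_CHILDREN];
} SuperScreen;

static void child_fill(MiniScreen *s, int x, int y, int w, int h)
{
    (void)x; (void)y; (void)w; (void)h;
    s->fills++;           /* a real driver would render here */
}

/* Only children whose bounds intersect the request see the call;
 * clipping of w/h is elided to keep the sketch short. */
void super_fill(SuperScreen *ss, int x, int y, int w, int h)
{
    for (int i = 0; i < ss->nchildren; i++) {
        MiniScreen *c = ss->children[i];
        if (x >= c->x + c->w || x + w <= c->x ||
            y >= c->y + c->h || y + h <= c->y)
            continue;     /* no overlap: this GPU never sees it */
        c->FillRect(c, x - c->x, y - c->y, w, h);
    }
}
```

A fill that lies entirely on one screen reaches only that screen's hooks, so on a switchable-GPU machine the idle GPU's driver is simply never called.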

More technical details:

- ajax

More information about the xorg mailing list