Possible mouse mapping architecture: quad-mesh -> quad-mesh transformations?
Keith Packard
keithp at keithp.com
Thu Jun 29 11:22:01 PDT 2006
On Thu, 2006-06-29 at 15:42 +0200, Matthias Hopf wrote:
> On Jun 28, 06 13:17:05 +0200, Keith Packard wrote:
> > So, my thought is that we construct a mapping from a quadrilateral
> > decomposition of the physical screen coordinate space to a quadrilateral
> > decomposition of the root window coordinate space.
>
> What kind of coordinate interpolation should be used here? Bilinear
> interpolation will not be accurate enough if a large quadrilateral is
> used to represent a perspective-projected quad.
Yes, the obvious projective transformation is what I had in mind.
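For concreteness, here's a rough sketch (mine, not part of any proposed API) of the standard square-to-quad homography that such a projective transformation amounts to; in a real mesh the per-patch coefficients would presumably be precomputed once:

```python
# Sketch: projective (perspective) map from the unit square onto a quad
# with corners p0..p3, where p0, p1, p2, p3 correspond to (u,v) =
# (0,0), (1,0), (1,1), (0,1).  Names are illustrative only.

def square_to_quad(p0, p1, p2, p3):
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = p0, p1, p2, p3
    dx1, dy1 = x1 - x2, y1 - y2
    dx2, dy2 = x3 - x2, y3 - y2
    sx = x0 - x1 + x2 - x3
    sy = y0 - y1 + y2 - y3
    den = dx1 * dy2 - dy1 * dx2
    g = (sx * dy2 - sy * dx2) / den
    h = (dx1 * sy - dy1 * sx) / den
    a, b, c = x1 - x0 + g * x1, x3 - x0 + h * x3, x0
    d, e, f = y1 - y0 + g * y1, y3 - y0 + h * y3, y0

    def transform(u, v):
        # Homogeneous divide: this is what makes the map projective
        # rather than merely bilinear.
        w = g * u + h * v + 1.0
        return (a * u + b * v + c) / w, (d * u + e * v + f) / w

    return transform
```

When g and h come out zero the quad is a parallelogram and the map degenerates to the affine case, which is why small-quad decompositions can get away with simpler interpolation.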
> Also, everyone should be aware that there could be situations where a
> correct mapping would require a dense quadrilateral field. But I assume
> that for all reasonable use cases (i.e. where the application can still
> be worked with, so that transformed mouse coordinates actually matter)
> the transformation can be approximated with a much smaller number of
> quads.
One possibility here is that when such a system does evolve, we can
extend the coordinate transformation mechanism to use a more
sophisticated representation than a simple quad patch; there are rather
a lot of possibilities here...
> > I think this can work for coordinate transformation; one question is
> > whether we'll need a separate data structure for hit detection; at
> > first glance, I think we will, but it seems like it can use another
> > quad mesh region structure of some kind.
>
> I don't think this is an issue on the API side; the server might need a
> different data structure for fast hit detection. But I assume that data
> can be extracted from the quad mesh.
I'm not so sure; just thinking of Looking Glass's angled windows makes
me wonder whether a separate hit-detection quad mesh will be necessary.
I'm afraid we'd need a reasonably formal proof that a single quad mesh is
sufficient; perhaps we can look into this soon, as it seems like a key
blocker to moving forward.
Of course, I hope you're right; a single mesh would be a cleaner
representation.
Oh, I should also note that the input and output meshes needn't both
cover the whole root window space; we want to make it possible to perform
simple magnification, where the input coordinates are mapped to a subset
of the output coordinates. I'm not sure about the reverse, though: does
it make sense to map only a subset of the input space and 'clip' the
remaining coordinates?
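In the magnification case the mesh can degenerate to a single axis-aligned quad, so the whole transformation is just a rescale into a sub-rectangle (a sketch; the names and single-quad shortcut are mine):

```python
# Sketch: "simple magnification" -- the entire input (device) space of
# size in_w x in_h maps onto a sub-rectangle (ox, oy, ow, oh) of the
# root window, i.e. a one-quad input mesh onto a smaller output quad.

def magnify(x, y, in_w, in_h, out_rect):
    ox, oy, ow, oh = out_rect
    return ox + x * ow / in_w, oy + y * oh / in_h
```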
--
keith.packard at intel.com