[PATCH 2/2] drm: Redefine pixel formats
Michel Dänzer
michel at daenzer.net
Thu Nov 17 06:00:17 PST 2011
On Thu, 2011-11-17 at 15:06 +0200, Ville Syrjälä wrote:
> On Thu, Nov 17, 2011 at 08:52:05AM +0100, Michel Dänzer wrote:
> > On Wed, 2011-11-16 at 20:42 +0200, ville.syrjala at linux.intel.com wrote:
> > >
> > > Name the formats as DRM_FORMAT_X instead of DRM_FOURCC_X. Use consistent
> > > names, especially for the RGB formats. Component order and byte order are
> > > now strictly specified for each format.
> > >
> > > The RGB format naming follows a convention where the component names
> > > and sizes are listed from left to right, matching the order within a
> > > single pixel from most significant bit to least significant bit. Lower
> > > case letters are used when listing the components to improve
> > > readability. I believe this convention matches the one used by pixman.
> >
> > The RGB formats are all defined in the CPU native byte order. But e.g.
> > pre-R600 Radeons can only scan out little-endian formats. For the
> > framebuffer device, we use GPU byte swapping facilities to make the
> > pixels appear to the CPU in its native byte order, so these format
> > definitions make sense for that. But I'm not sure they make sense for
> > the KMS APIs, e.g. the userspace drivers don't use these facilities but
> > handle byte swapping themselves.
>
> Hmm. So who decides whether GPU byte swapping is needed when you e.g.
> mmap() some buffer?
The userspace drivers.
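
To make the byte-order concern concrete, here is a minimal sketch (purely
illustrative, not part of the patch) of an xrgb8888-style pixel packed per
the convention above, X in bits 31:24 down to B in bits 7:0. Dumping its
bytes shows B G R X on a little-endian host but X R G B on a big-endian
one, which is exactly the gap that either the GPU byte-swapping facilities
or the userspace drivers have to cover:

/* Illustrative only: shows how a "CPU native byte order" xrgb8888
 * pixel lands in memory depending on host endianness. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t pixel = 0x00ff8040;	/* X=0x00 R=0xff G=0x80 B=0x40 */
	const uint8_t *bytes = (const uint8_t *)&pixel;

	/* Little-endian hosts print "40 80 ff 00" (B G R X);
	 * big-endian hosts print "00 ff 80 40" (X R G B). */
	for (int i = 0; i < 4; i++)
		printf("%02x ", bytes[i]);
	printf("\n");
	return 0;
}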
--
Earthling Michel Dänzer | http://www.amd.com
Libre software enthusiast | Debian, X and DRI developer