OpenGL in USB Display Devices under Wayland
Pekka Paalanen
ppaalanen at gmail.com
Fri Jun 22 12:29:04 PDT 2012
On Fri, 22 Jun 2012 14:08:16 -0400
rektide <rektide at voodoowarez.com> wrote:
> On Fri, Jun 22, 2012 at 01:14:00PM -0400, Casey Dahlin wrote:
> > On Fri, Jun 22, 2012 at 09:40:43PM +0530, Sannu K wrote:
> > > After seeing the changes made in the X server for offloading hardware
> > > acceleration for USB display devices (DisplayLink and others), I am curious
> > > to know how Wayland takes care (if at all) of offloading hardware
> > > acceleration to the primary GPU. The X server uses the primary GPU (Intel,
> > > NVIDIA or ATI) for 3D acceleration and just sends the scan-out buffer to
> > > the USB display device. As far as I have gone through the architecture of
> > > Wayland, I am not able to find where this fits in and how. Please provide
> > > some info on how this is (or will be) done.
...
> How about this question: might Weston be adaptable to serve this use case? What would be the
> major changes to Weston to do this? What other subsystems would have to change?
Hi,
this is actually a kernel feature you are after, and also related to the
dmabuf work that has been going on recently, AFAIU.
Weston uses Mesa and the EGL GBM platform to render the composite. This
produces a GBM buffer, which can somehow be given to the DRM modesetting
API for scanout. On Weston's side, this would simply mean rendering
another buffer for the USB output, and giving it to DRM for USB scanout.
That is the theory to my understanding.
The trick is in the kernel DRM, which has to be able to deal with the
buffer and actually push it to the USB device. This may require special
allocation flags, or somehow using the dmabuf infrastructure to allocate
the buffer into which the composite is drawn.
As a USB graphics device has no GPU that could perform rendering, this
is actually an easier case than a multi-GPU system, where you could
choose which GPU to use, and they probably have semi-conflicting
requirements for the buffer placement and format. Because there is
only one GPU that can render, all clients will obviously use that one
(or fall back to the CPU).
So, supporting dumb framebuffer devices as additional outputs is a
minor implementation detail in Weston, provided the kernel supports
the required buffer interoperability. It might need some changes
in GBM, too, but the kernel part is the major thing.
Whatever GPU a client uses to render into a buffer, the GPU Weston is
using must be able to sample that buffer as a texture. And whatever GPU
Weston uses, it must composite into a buffer that the particular
scanout device can use.
Thanks,
pq