A few questions about the best way to implement RandR 1.4 / PRIME buffer sharing
Aaron Plattner
aplattner at nvidia.com
Thu Aug 30 10:31:36 PDT 2012
I've been experimenting with support for Dave Airlie's new RandR 1.4 provider
object interface so that Optimus-based laptops can use our driver to drive the
discrete GPU and display on the integrated GPU. The good news is that I've got
a proof of concept working.
During a review of the current code, we came up with a few concerns:
1. The output source is responsible for allocating the shared memory
Right now, the X server calls CreatePixmap on the output source screen and then
expects the output sink screen to be able to display from whatever memory the
source allocates. However, the source has no mechanism for asking the sink
what its requirements are for the surface. I'm using our own internal pitch
alignment requirements and that seems to be good enough for the Intel device to
scan out, but that could be pure luck.
Does it make sense to add a mechanism for drivers to negotiate this with each
other, or is it sufficient to define a lowest-common-denominator format, where
hardware that can't deal with that format simply doesn't get to share buffers?
One of my coworkers brought to my attention the fact that Tegra requires a
specific pitch alignment, and cannot accommodate larger pitches. If other SoC
designs have similar restrictions, we might need to add a handshake mechanism.
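As a strawman for what such a handshake could look like, here is a minimal
sketch in C. Nothing below exists in the RandR 1.4 code today; the
SharedPixmapCaps structure and negotiate_pitch() are purely hypothetical and
only illustrate the "query both sides, take the stricter requirement" idea:

#include <stdint.h>

/* Hypothetical per-driver description of shared-pixmap constraints. */
typedef struct {
    uint32_t pitch_align;  /* required pitch alignment, in bytes */
    uint32_t max_pitch;    /* 0 = no upper limit; a SoC like Tegra would
                            * set a hard maximum here */
} SharedPixmapCaps;

/* Pick a pitch that satisfies both the source and the sink, or fail if the
 * sink cannot accommodate the result. Assumes both alignments are powers of
 * two, so the larger one is a multiple of the smaller. */
static int
negotiate_pitch(uint32_t width, uint32_t bytes_per_pixel,
                const SharedPixmapCaps *source,
                const SharedPixmapCaps *sink,
                uint32_t *out_pitch)
{
    uint32_t align = source->pitch_align > sink->pitch_align ?
                     source->pitch_align : sink->pitch_align;
    uint32_t pitch = ((width * bytes_per_pixel + align - 1) / align) * align;

    if (sink->max_pitch != 0 && pitch > sink->max_pitch)
        return -1;  /* no agreement; refuse to share (or fall back) */

    *out_pitch = pitch;
    return 0;
}

A real version would presumably also need to cover tiling, format, and
placement constraints, not just pitch.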
2. There's no fallback mechanism if sharing can't be negotiated
If RandR fails to share a pixmap with the output sink screen, the whole modeset
fails. This means you'll end up not seeing anything on the screen and you'll
probably think your computer locked up. Should there be some sort of software
copy fallback to ensure that something at least shows up on the display?
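If we do want a fallback, the simplest thing is probably a straight CPU copy
into a buffer the sink allocated for itself. Purely as a sketch (this assumes
both buffers can be mapped by the CPU, and the function name is made up, not
existing server API):

#include <stdint.h>
#include <string.h>

/* Copy row by row so the source and sink can use different pitches. */
static void
copy_shared_pixmap(const uint8_t *src, uint32_t src_pitch,
                   uint8_t *dst, uint32_t dst_pitch,
                   uint32_t width_bytes, uint32_t height)
{
    for (uint32_t y = 0; y < height; y++)
        memcpy(dst + (size_t)y * dst_pitch,
               src + (size_t)y * src_pitch,
               width_bytes);
}

It would be slow, but at least the user would see a desktop instead of a
black screen.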
3. How should the memory be allocated?
In the prototype I threw together, I allocate the shared memory using shm_open,
export it as a dma-buf file descriptor using an ioctl I added to the kernel, and
then import that memory back into our driver through dma_buf_attach &
dma_buf_map_attachment. Does it make sense for user-space
programs to be able to export shmfs files like that? Should that interface go
in DRM / GEM / PRIME instead? Something else? I'm pretty unfamiliar with this
kernel code so any suggestions would be appreciated.
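To make the prototype concrete, the userspace side does roughly the following.
The SHM_EXPORT_DMABUF ioctl and struct shm_export_args below are placeholders
for the experimental interface and do not exist upstream; shm_open and
ftruncate are just standard POSIX:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

struct shm_export_args {           /* placeholder, not upstream */
    int shm_fd;                    /* in:  shmfs file to export */
    int dmabuf_fd;                 /* out: resulting dma-buf fd */
};
#define SHM_EXPORT_DMABUF _IOWR('S', 0x01, struct shm_export_args)

/* Allocate the shared surface in shmfs and wrap it in a dma-buf, which the
 * sink's kernel driver can then import with dma_buf_attach() and
 * dma_buf_map_attachment(). */
static int export_shared_surface(int export_dev_fd, size_t size)
{
    int shm_fd = shm_open("/prime-scanout", O_RDWR | O_CREAT, 0600);
    if (shm_fd < 0 || ftruncate(shm_fd, size) < 0)
        return -1;

    struct shm_export_args args = { .shm_fd = shm_fd, .dmabuf_fd = -1 };
    if (ioctl(export_dev_fd, SHM_EXPORT_DMABUF, &args) < 0)
        return -1;

    return args.dmabuf_fd;         /* hand this fd to the importing driver */
}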
-- Aaron
P.S. for those unfamiliar with PRIME:
Dave Airlie added support to version 1.4 of the X Resize and Rotate extension
for offloading display and rendering to different drivers. PRIME is the
DRM implementation in the kernel, layered on top of DMA-BUF, that implements the
actual sharing of buffers between drivers.
http://cgit.freedesktop.org/xorg/proto/randrproto/tree/randrproto.txt?id=randrproto-1.4.0#n122
http://airlied.livejournal.com/75555.html - update on hotplug server
http://airlied.livejournal.com/76078.html - randr 1.5 demo videos
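For completeness: the PRIME userspace interface in current kernels is a pair of
DRM ioctls that convert between driver-local GEM handles and dma-buf file
descriptors. A minimal example of what using them looks like (the drm.h include
path depends on whether you use the kernel headers or libdrm's copy):

#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/drm.h>

/* Export a GEM buffer from the source device as a dma-buf fd. */
static int gem_handle_to_fd(int drm_fd, uint32_t handle)
{
    struct drm_prime_handle args = {
        .handle = handle,
        .flags  = DRM_CLOEXEC,
        .fd     = -1,
    };
    if (ioctl(drm_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, &args) < 0)
        return -1;
    return args.fd;
}

/* Import a dma-buf fd into the sink device as a GEM handle. */
static int gem_fd_to_handle(int drm_fd, int dmabuf_fd, uint32_t *handle)
{
    struct drm_prime_handle args = { .fd = dmabuf_fd };
    if (ioctl(drm_fd, DRM_IOCTL_PRIME_FD_TO_HANDLE, &args) < 0)
        return -1;
    *handle = args.handle;
    return 0;
}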