[Mesa-dev] [PATCH 0/7] Wayland Prime Support v3

Axel Davy axel.davy at ens.fr
Fri Mar 7 09:12:02 PST 2014


I forgot to mention (for those who want to test) that offloading
to a gallium card requires DRIimage driver extension support
in gallium. This is implemented in this branch:
http://cgit.freedesktop.org/~keithp/mesa/log/?h=dri3%2Bgallium 

Without the DRIimage driver extension, EGL won't start on the requested
card using gallium, and reports "wayland-egl: display is not render-node
capable", because the DRIimage driver extension is required to work
with render-nodes.

Axel Davy
> This is the updated version of this patch series:
> http://lists.freedesktop.org/archives/mesa-dev/2014-January/050817.html
>
> Here are some explanations about what the patches introduce:
>
> GPU offloading:
>
> The first step is to indicate which GPU to use.
> For that, the patches introduce two ways:
> . using DRI_PRIME
> . using the wanted_device_id_path_tag parameter of drirc
> under the driver name "init".
> Both take the ID_PATH_TAG of the device to use (a string).
> It is a tag filled by udev and looks like this: "pci-0000_01_00_0".
> To find the tag udev assigned to a file in /dev/dri/,
> you can use udevadm info, as shown below.
> Alternatively, DRI_PRIME can take the value "1", which means
> "any card other than the compositor's".
>
> Second, there are two ways of using GPU offloading:
> . 1) Rendering to a linear buffer shared with the compositor
> . 2) Rendering to a tiled buffer, and copying the content to
> a linear buffer shared with the compositor.
>
> You want to use 1) for very light rendering, like compositing,
> but 2) for better performance (games).
>
> Note that since dma-buf fences are not merged yet, 1) can show
> glitches (you shouldn't see any if rendering is light and
> vsynced), and 2) can show tearing on some parts of the image
> (if the card is fast you probably won't see any, but if the
> card has a hard time rendering, an older image may be shown
> rather than the current one; using vsync is a partial fix).
> So for the moment, using vsync is the best choice.
>
> 1) should be used if you want to launch a nested Wayland
> compositor.
> XWayland games cannot use 2) (it will depend on whether X DRI3
> uses the same system or not), so we have to launch a nested Wayland
> compositor on the chosen GPU and launch the game inside it. This gives
> performance similar to 2).
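> For example (the tag value is only illustrative, and DRI_BLIT is
> described just below), the nested-compositor setup could look like:
>
>      $ DRI_PRIME=pci-0000_01_00_0 DRI_BLIT=0 weston
>
> and then the game is launched from a terminal running inside that
> nested Weston.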
>
> By default it will use 2). This can be configured with the env var
> DRI_BLIT (0: do not blit, i.e. mode 1; 1: use the blit, i.e. mode 2),
> or with the "blit_if_different_device" drirc parameter (again under
> the "init" driver name).
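> For instance (the client name is just an example), the two modes can
> be compared like this:
>
>      $ DRI_PRIME=1 weston-simple-egl              (default: tiled + blit, mode 2)
>      $ DRI_PRIME=1 DRI_BLIT=0 weston-simple-egl   (shared linear buffer, mode 1)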
>
> Configuration:
>
> In practice, configuring drirc so that any nested Weston
> launches on my dedicated card gives this .drirc file:
> <driconf>
>      <device screen="0" driver="init">
>          <application name="Weston" executable="weston">
>              <option name="wanted_device_id_path_tag" value="pci-0000_01_00_0" />
>              <option name="blit_if_different_device" value="false" />
>          </application>
>      </device>
> </driconf>
>
> Note that driconf doesn't detect "init" (it only detects
> X DRI2 drivers), but it isn't disturbed by the presence of an "init"
> section. The parameters can be filled in using the driconf advanced GUI.
>
>
> More specifically:
> . patch 1 fixes a bug in the logic to enable the Prime fd capability.
> . patch 2 introduces DRI_PRIME and the choice of the GPU
> used to render. It will render to a linear buffer
> when needed, and share it with the compositor.
> . patch 3 introduces string parameters in drirc,
> because we want to indicate the GPU to use with a string.
> . patch 4 introduces "init" as a driver name in drirc that
> is checked during Wayland EGL initialization, and a parameter
> to indicate the GPU we want to use. EGL checks this parameter
> to choose the GPU to use.
> . patch 5 adds a new blitImage function to the DRIimage specification.
> . patch 6 adds a gallium implementation of it.
> I haven't written the Intel implementation (which means the
> blit option isn't available when offloading to an Intel card from
> a dedicated card, which nobody is likely to do).
> . patch 7 introduces a new mode using blitImage: we render to
> a buffer (with tiling) which is blitted to a shared buffer (a linear buffer).
>
> Axel Davy (7):
>    wayland: Fix the logic in disabling the prime capability
>    EGL/Wayland: DRI_PRIME support v3
>    drirc: Add string support
>    EGL/Wayland drirc: Use drirc to complement DRI_PRIME
>    DRIimage: add blitImage to the specification
>    Gallium/dri2: implement blitImage
>    EGL/Wayland: use blitImage when requested and on a different device
>
>   include/GL/internal/dri_interface.h             |  11 +-
>   src/egl/drivers/dri2/Makefile.am                |  12 +-
>   src/egl/drivers/dri2/egl_dri2.h                 |   3 +
>   src/egl/drivers/dri2/platform_wayland.c         | 357 +++++++++++++++++++++---
>   src/gallium/state_trackers/dri/drm/dri2.c       |  44 ++-
>   src/mesa/drivers/dri/common/xmlconfig.c         |  29 ++
>   src/mesa/drivers/dri/common/xmlconfig.h         |   7 +-
>   src/mesa/drivers/dri/common/xmlpool/t_options.h |  10 +
>   8 files changed, 429 insertions(+), 44 deletions(-)
>
