<div dir="ltr">Hi,<div class="gmail_extra"><br><div class="gmail_quote">On 30 September 2014 16:44, Jasper St. Pierre <span dir="ltr"><<a href="mailto:jstpierre@mecheye.net" target="_blank">jstpierre@mecheye.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div>It's a great question, with a complicated answer. Part of this is the fault of the DRM kernel interface, which is being improved. Part of it is the fault of GL/EGL, which really doesn't have proper multi-GPU support.</div></div></blockquote><div><br></div><div>EGL_EXT_device_base is one way to handle this, although we're still missing API for the application to determine which GPU it should render on. Assuming proper cross-GPU support, then it should just work regardless, although with the penalty of a cross-GPU transfer/blit/stall.</div><div> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div>GL/EGL on DRM devices can only be booted up on one card at a time due to the way the interface works. I'm not sure if there's any plans to change this. (It's possible to create multiple EGLDisplays and switch between them
> GL/EGL on DRM devices can only be booted up on one card at a time due
> to the way the interface works. I'm not sure if there's any plans to
> change this. (It's possible to create multiple EGLDisplays and switch
> between them to render, but this is really ridiculous, and
> EGL_WL_bind_display won't work with that). That means that you're left
> with software rendering. Which is totally possible to do, but Weston
> still doesn't support that.

Well yes, you do need to create multiple EGLDisplays. How else are you
going to deal with disjoint extensions/configs/shader compilers, let
alone figure out where to dispatch the drawing? EGL itself is a bit
culpable here, though: without the mooted NVIDIA vendor-independent
dispatch layer, you're going to need the same stack (Mesa, NVIDIA, Mali,
whatever) on both GPUs.
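To make the multiple-EGLDisplay part concrete, the compositor side would
look roughly like this: one gbm device, EGLDisplay and context per card,
with eglMakeCurrent deciding which driver stack (and so which shader
compiler) a given output's GL calls land in. Only a sketch: the card
paths are placeholders, it leans on Mesa accepting a gbm_device through
eglGetDisplay(), and all error handling and output-to-GPU policy is
elided.

#include <fcntl.h>
#include <gbm.h>
#include <EGL/egl.h>

struct gpu {
    int fd;
    struct gbm_device *gbm;
    EGLDisplay dpy;
    EGLContext ctx;
};

/* One EGLDisplay/context per GPU, e.g. "/dev/dri/card0" and "/dev/dri/card1". */
static int gpu_init(struct gpu *gpu, const char *path)
{
    static const EGLint cfg_attribs[] = {
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_NONE
    };
    static const EGLint ctx_attribs[] = {
        EGL_CONTEXT_CLIENT_VERSION, 2,
        EGL_NONE
    };
    EGLConfig cfg;
    EGLint n;

    gpu->fd = open(path, O_RDWR | O_CLOEXEC);
    gpu->gbm = gbm_create_device(gpu->fd);
    /* Mesa takes a gbm_device as the native display; a platform extension
     * (EGL_MESA_platform_gbm) makes this explicit. */
    gpu->dpy = eglGetDisplay((EGLNativeDisplayType) gpu->gbm);
    if (!eglInitialize(gpu->dpy, NULL, NULL))
        return -1;

    eglBindAPI(EGL_OPENGL_ES_API);
    if (!eglChooseConfig(gpu->dpy, cfg_attribs, &cfg, 1, &n) || n < 1)
        return -1;
    gpu->ctx = eglCreateContext(gpu->dpy, cfg, EGL_NO_CONTEXT, ctx_attribs);
    return gpu->ctx == EGL_NO_CONTEXT ? -1 : 0;
}

/* Before repainting an output, switch to the GPU that owns it; every GL
 * call after this (glCompileShader included) goes to that GPU's driver.
 * EGL_NO_SURFACE here relies on EGL_KHR_surfaceless_context. */
static void repaint_on(struct gpu *gpu)
{
    eglMakeCurrent(gpu->dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, gpu->ctx);
    /* ... render this output's damage with this GPU ... */
}

Note this only dodges the dispatch problem because both displays end up
inside the same libEGL; with two different vendors' stacks you run
straight into the missing dispatch layer mentioned above.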
BindWaylandDisplay is fine in theory: you create one EGLDisplay for each
GPU you have, and then attach them all to the same wl_display (rough
sketch at the end of this mail). In practice, Mesa dies hard if you have
two wl_drm instances attached to the same display, and reasonably
enough. This is an implementation issue, though.

I think the answer is going to look something like this:
 - implement EGL_EXT_device_base inside Mesa
 - work out the EGL/GL/GLES/CL dispatch problem (when I call
   glCompileShader, which shader compiler does it go to?)
 - fix client-side Mesa's buggy/non-existent handling of a wl_display
   with multiple EGLDisplays bound
 - add infrastructure to Weston to work out which GPU should render
   which output, possibly involving multiple gbm instances
 - work out the gbm dispatch problem, which I think basically means
   minigbm (and switching to EGL_KHR_surfaceless_context +
   GL_OES_surfaceless_context)
 - add Wayland protocol to hint clients as to which GPU they should use
 - extend EGL_EXT_platform_wayland (or platform_base ...?) to allow
   specification of a device (from device_base) to use when creating a
   display
 - solve cross-device format issues once and for all (tiling,
   compression, etc.), which I'm increasingly thinking has to live in
   the kernel

tl;dr: entirely solvable with Wayland, but a lot of infrastructure is
required in EGL/gbm/kernel.

Cheers,
Daniel
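P.S. For anyone curious, the "one EGLDisplay per GPU, all attached to
one wl_display" idea above comes down to something like the sketch
below, via EGL_WL_bind_wayland_display. Treat it as an illustration of
the goal rather than working code: this is exactly the combination that
makes current Mesa fall over (two wl_drm globals on one display).

#include <wayland-server.h>
#include <EGL/egl.h>

/* Prototype from EGL_WL_bind_wayland_display. */
typedef EGLBoolean (*bind_wl_display_fn)(EGLDisplay dpy,
                                         struct wl_display *display);

static void bind_all_gpus(struct wl_display *wl_dpy,
                          EGLDisplay *egl_dpys, int n_gpus)
{
    bind_wl_display_fn bind_display = (bind_wl_display_fn)
        eglGetProcAddress("eglBindWaylandDisplayWL");
    int i;

    /* Each bind advertises a wl_drm global for that GPU's driver; a client
     * would then need to know which one matches the GPU it renders on,
     * which is the protocol gap in the list above. */
    for (i = 0; i < n_gpus; i++)
        bind_display(egl_dpys[i], wl_dpy);
}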