[virglrenderer-devel] "containered" virgl
Zach Reizner
zachr at google.com
Mon Jun 18 20:59:45 UTC 2018
On Mon, Jun 18, 2018 at 1:39 PM Dave Airlie <airlied at gmail.com> wrote:
>
> On 19 June 2018 at 04:12, Zach Reizner <zachr at google.com> wrote:
> > On Mon, Jun 18, 2018 at 12:38 AM Dave Airlie <airlied at gmail.com> wrote:
> >>
> >> I've heard people saying that having virgl being used in a container
> >> might be a good idea, but I don't really have the knowledge of what it
> >> would look like architecturally.
> >>
> >> Though I decided today to try and enhance vtest to avoid the software
> >> readback for putimage path, which is the main overhead.
> >>
> >> I've created two branches at [1],[2] below.
> >>
> >> This adds fd passing abilities to the vtest socket and passes the fd
> >> from DRI3 to the vtest server to use for running its context on, and
> >> it fills out the get handle path to pass an fd back that
> >> can be passed to the X server for DRI3 operations.
> >>
> >> I've got openarena up and running on this, the gbm/EGL scanout
> >> allocation path is a bit annoying, but without it the colors won't end up
> >> right at all. On my HSW openarena anholt.cfg runs at 130fps vs 160fps
> >> native, vs 50fps for the old vtest.
> >>
> >> Once you set MESA_LOADER_DRIVER_OVERRIDE=virtio_gpu it should try and
> >> open the vtest socket when it can't use the drm device node itself.
> >>
> >> I'm assuming this could be useful for container applications, but I'd
> >> really have to have someone from that world want this to be a solution
> >> and champion it a lot.
> >>
> >> I'm not sure how much more effort I can put into it without a decent use case.
> > For the crosvm and crostini use case, some kind of
> > containerization/jailing will be very useful.
> >
> > On the host side of crosvm, we use a minijail (namespaces, seccomp,
> > etc.) per device for security in case they get compromised. For the
> > gpu device, this is going to be complicated because both virglrenderer
> > and the GL implementation will be in the same sandbox and so will need
> > a rather open seccomp filter, along with access to /dev/dri nodes. I
> > haven't yet tried to implement this sandbox.
> >
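For the host-side jail, a hedged sketch of what that minijail setup could look
like. `minijail0`'s `-n` (no_new_privs), `-S` (seccomp policy), and `-b` (bind
mount) flags are real; the binary name, paths, and policy contents below are
illustrative assumptions only, reflecting the point above that a GL stack
needs a fairly permissive filter:

```shell
# Hypothetical invocation (paths and process name are placeholders):
#   minijail0 -n -S gpu.policy -b /dev/dri,/dev/dri,1 ./gpu_process
#
# gpu.policy (minijail seccomp policy syntax; illustrative subset --
# a real GL/virglrenderer policy would need many more syscalls):
# ioctl: 1
# mmap: 1
# read: 1
# write: 1
# close: 1
```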
> > On the guest side in crostini, there is an additional layer of
> > containerization via lxc. We bind mount or mknod devices as needed
> > within the container, so the virtio_gpu driver for mesa should load
> > properly without the override you mention. This seems to work today.
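Exposing the virtio_gpu nodes to such a container can be sketched with lxc
config entries like the following. The keys are real lxc configuration
(cgroup v1 era; newer lxc uses `lxc.cgroup2.devices.allow`), and 226 is the
DRM char-device major; crostini's actual container config may differ:

```
lxc.mount.entry = /dev/dri dev/dri none bind,optional,create=dir
lxc.cgroup.devices.allow = c 226:* rwm
```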
>
> For your case your containers are inside VMs by the looks of it, so you
> can expose virtio-gpu. I was more thinking the traditional docker/flatpak
> model where we might not want to give /dev/dri access to the container,
> but "trust" virglrenderer. It could also serve as a fallback for traditional
> containers that don't have up-to-date host drivers.
Ah, I see what you mean. That's a good point.
>
> Dave.