Advice on modifying Lavapipe to isolate JIT compilation in separate process

Josh Gargus jjosh at
Thu Apr 27 05:18:28 UTC 2023

Thanks for your advice!  I hadn't looked at Venus, but that seems like a
very promising place to start.

The other approach feels more tractable now too; there seem to be fewer
"unknown unknowns", although there are plenty of known unknowns to
investigate (address independence was one that was already bugging me
before I wrote to this list).

Venus seems like the more straightforward approach, so I'm inclined to
just go with it.  However, there would presumably be a performance hit
compared to moving only JIT compilation into a separate process.  Do you
have a rough sense of the cost of serializing everything over Venus?  I
know the answer will depend on the workload.

Thanks again, it was very helpful!


On Wed, Apr 26, 2023 at 7:23 PM Dave Airlie <airlied at> wrote:

> On Thu, 27 Apr 2023 at 05:27, Josh Gargus <jjosh at> wrote:
> >
> > Hi, I'm from the Fuchsia team at Google.  We would like to provide
> Lavapipe as an ICD within Fuchsia.  However, our default security policy is
> to deny client apps the capability to map memory as writable/executable; we
> don't want to relax this for every client app which uses Vulkan.
> Therefore, we are investigating the feasibility of splitting "Lavapipe"
> into two parts, one of which runs in a separate process.
> >
> > "Lavapipe" is in quotes because we don't know quite where the split
> should be (that's what I'm here to ask you); perhaps it wouldn't be within
> Lavapipe per se, but instead e.g. somewhere within llvmpipe.
> >
> > Another important goal is to make these changes in a way that is
> upstreamable to Mesa.
> >
> > We considered a few different options, deeply enough to convince
> ourselves that none of them seems desirable.  These ranged from proxying at
> the Vulkan API level (so that almost everything runs in a separate process)
> to doing only compilation in the separate process (into shared memory that
> is only executable, not writable, in the client process).
>
> Have you considered using Venus over a socket/pipe to do it at the
> Vulkan layer? (Just asking in case you hadn't.)
> >
> > This exploration was limited by our unfamiliarity with the corner cases
> of implementing a Vulkan driver.  For example, we're not quite clear on how
> much code is generated outside of vkCreateGraphics/ComputePipelines().  Is
> any code generated lazily, perhaps at draw time, to optimize texture
> sampling?  That's just one question we don't know the answer to, and there
> are surely many other questions we haven't thought to ask.
> >
> > Rather than delve into such minutiae, I'll simply ask how you recommend
> approaching this problem.  Again, the two main constraints are:
> > - no JITing code in the client process
> > - clean enough solution to be upstreamable
>
> Code is generated in a lot of places, particularly at shader bind and
> at draw time, depending on bound textures/samplers etc. I think your
> best bet might be to split a client/server model at the gallivm
> layer: gallivm_compile_module() through gallivm_jit_function() is
> where the LLVM work happens, so you'd have to construct enough of a
> standalone gallivm/LLVM environment to take an LLVM module, compile
> it, and pass back the JITed code in shared memory, like you said. I'm
> not sure how position-independent the resulting LLVM binaries are, or
> whether they have to be placed at the same address. There are also a
> bunch of global linkages for various things that have to be hooked
> up, so those would need some thought (debug printf, the coroutine
> malloc hooks, and the clock hook).
> Dave.

More information about the mesa-dev mailing list