[PATCH 1/3] drm/radeon: GPU virtual memory support v22

Jerome Glisse j.glisse at gmail.com
Mon Jan 9 07:44:15 PST 2012


On Mon, Jan 09, 2012 at 09:31:16AM +0100, Daniel Vetter wrote:
> On Sun, Jan 08, 2012 at 05:56:31PM -0500, Jerome Glisse wrote:
> > On Sun, Jan 8, 2012 at 9:05 AM, Daniel Vetter <daniel at ffwll.ch> wrote:
> > > Hi all,
> > >
> > > Meh, I've wanted to port the small set of helpers nouveau already has to
> > > handle per open fd gpu virtual address spaces to core drm, so that I could
> > > reuse them for i915. Just to go one small step towards unifying drivers in
> > > drm/* a bit ...
> > >
> > > Looks like I'll have another driver to wrestle, or just forget about it and
> > > reinvent that wheel for i915, too.
> > >
> > > </slight rant>
> > >
> > > Cheers, Daniel
> > > --
> > 
> > I looked at nouveau before writing this code; the thing is, in the end
> > there is little common code, especially when you take different paths on
> > how you handle things (persistent vs. dynamic page tables, for instance).
> > Though a couple of things can still be shared. Note that the whole radeon
> > code is designed to allow several address spaces per process; though
> > there is no use for such a thing today, we believe things like
> > opencl+opengl can each benefit from having their own address space.
> 
> - I've realized when looking through nouveau that we likely can't share
>   much more than a gem_bo->vma lookup plus a bunch of helper functions.
> 
> - Imo having more than one gpu virtual address space per fd doesn't make
>   much sense. libdrm (at least for i915) is mostly just about issuing
>   cmdbuffers. Hence it's probably easier to just open two fds and
>   instantiate two libdrm buffer managers if you want two address spaces;
>   otherwise you have to teach libdrm that the same buffer object can
>   have different addresses (which is pretty much against the point of
>   gpu virtual address spaces).

Radeon barely uses libdrm (only the ddx uses part of it).

> 
> I also realize that in the dri1 days there was way too much common code
> that only got used by one or two drivers and hence wasn't really commonly
> usable at all (and also not really of decent quality). So I'm all in
> favour of driver-specific stuff, especially for execution and memory
> management. But:
> 
> - nouveau already has gpu virtual address spaces, radeon just grew them
>   with this patch and i915 is on track to get them, too: Patches to enable
>   the different hw addressing mode for Sandybridge and later are ready,
>   and with Ivybridge hw engineers ironed out the remaining bugs so we can
>   actually context-switch between different address spaces without hitting
>   hw bugs.
> 
> - The more general picture is that with the advent of more general-purpose
>   apis and usecases for gpus like opencl (or also background video
>   encoding/decoding/transcoding with libva) users will want to control gpu
>   resources. So I expect that we'll grow resource limits, schedulers with
>   priorities and maybe also something like control groups in a few years.
>   But if we don't put a bit of thought into the commonalities of things
>   like gpu virtual address spaces, scheduling and similar things, I fear we
>   won't be able to create a sensible common interface to allocate and
>   control resources in the future. Which will result in a sub-par
>   experience.

I wish we could come up with a common API for different GPUs, but the fact
is that every kernel API we have exposed so far is specific to each GPU,
apart from modesetting. Anything that uses the processing power of the GPU
goes through a dedicated API.

For instance, radeon virtual addressing gives control to userspace, i.e.
userspace decides the virtual address at which each bo will be placed.
Nouveau, on the contrary, does the allocation in the kernel. We chose to
put userspace in charge for a few reasons (off the top of my head):
- it allows a 1:1 mapping between the cpu address space and the gpu
  without having to play with the process vm in the kernel
- it allows easy command stream replay (no need to rewrite the command
  stream because virtual addresses differ)

Scheduling is the next big thing coming up, with the latest generation of
GPUs growing the capacity to preempt GPU tasks and also offering finer
control over the GPU cores. Here again I fear that we will each grow our
own API.

My belief is that as long as we expose a low-level API to use the GPU,
there is no way we can provide a common API for things like scheduling
or virtual address spaces. The way you submit commands is tightly tied
to both.

> But if my google-fu doesn't fail me, gpu address spaces for radeon were
> first posted on a public list as v22 and merged right away, so there's
> been simply no time to discuss cross-driver issues. Which is part of why
> I'm slightly miffed ;-)
> 

I too wish we could have released this earlier. I guess it was merged
because we still managed to get enough eyes familiar with radeon to look
at and play with this code.

Cheers,
Jerome
