[PATCH 2/2] drm/nouveau: DRM_NOUVEAU_SET_ZCULL_CTXSW_BUFFER

Danilo Krummrich dakr at kernel.org
Fri Mar 28 11:48:36 UTC 2025


On Thu, Mar 27, 2025 at 03:01:54PM -0400, M Henning wrote:
> On Thu, Mar 27, 2025 at 9:58 AM Danilo Krummrich <dakr at kernel.org> wrote:
> >
> > On Fri, Mar 21, 2025 at 07:00:57PM -0400, M Henning wrote:
> > > This is a pointer in the gpu's virtual address space. It must be
> > > aligned according to ctxsw_align and be at least ctxsw_size bytes
> > > (where those values come from the nouveau_abi16_ioctl_get_zcull_info
> > > structure). I'll change the description to say that much.
> > >
> > > Yes, this is GEM-backed. I'm actually not entirely sure what the
> > > requirements are here, since this part is reverse-engineered. I think
> > > NOUVEAU_GEM_DOMAIN_VRAM and NOUVEAU_GEM_DOMAIN_GART are both okay. The
> > > proprietary driver allocates this buffer using
> > > NV_ESC_RM_VID_HEAP_CONTROL and sets attr = NVOS32_ATTR_LOCATION_ANY |
> > > NVOS32_ATTR_PAGE_SIZE_BIG | NVOS32_ATTR_PHYSICALITY_CONTIGUOUS, attr2
> > > = NVOS32_ATTR2_GPU_CACHEABLE_YES | NVOS32_ATTR2_ZBC_PREFER_NO_ZBC.
> >
> > (Please do not top post.)
> >
> > What I mean is how do you map the backing GEM into the GPU's virtual address
> > space? Since it's bound to a channel, I assume that it must be ensured it's
> > properly mapped when work is pushed to the channel. Is it mapped through
> > VM_BIND?
> 
> Yes. The userspace code for this is here:
> https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/33861/diffs?commit_id=0c4baab863730f9fc8b417834ffcbb400f11d617
> It calls into the usual function for driver internal allocations
> (nvkmd_dev_alloc_mem) which calls VM_BIND internally.

BOs mapped through VM_BIND are prone to eviction. Is this a problem here, or is
it fine as long as the mapping is only guaranteed to be valid for the duration
of subsequent EXEC jobs?

Does the mapping need to be valid when DRM_NOUVEAU_SET_ZCULL_CTXSW_BUFFER is
called? If so, how is this ensured?

Can DRM_NOUVEAU_SET_ZCULL_CTXSW_BUFFER be called in between multiple
DRM_NOUVEAU_EXEC calls?

Does it maybe need an async mode, like EXEC and VM_BIND have? (To me it doesn't
seem to be the case, but those questions still need an answer.)

I also think we should document those things.
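
For instance, the alignment and size constraints alone are worth spelling out
explicitly. As a minimal sketch (helper and parameter names are mine, not part
of the patch), the check that userspace, or the ioctl itself, would have to
perform against the values reported by DRM_NOUVEAU_GET_ZCULL_INFO could look
like:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical validity check for the ZCULL ctxsw buffer, based on the
 * constraints described above: the GPU VA must be aligned to ctxsw_align
 * and the mapping must cover at least ctxsw_size bytes (both values as
 * reported by DRM_NOUVEAU_GET_ZCULL_INFO). addr == 0 means "bind nothing".
 */
static bool zcull_ctxsw_buffer_valid(uint64_t addr, uint64_t size,
				     uint32_t ctxsw_align, uint32_t ctxsw_size)
{
	if (!addr)
		return true; /* unbinding is always fine */
	if (ctxsw_align & (ctxsw_align - 1))
		return false; /* alignment must be a power of two */
	if (addr & ((uint64_t)ctxsw_align - 1))
		return false; /* misaligned GPU VA */
	return size >= ctxsw_size; /* mapping must cover the whole buffer */
}
```

Documenting this next to the struct would also make it obvious what EINVAL
cases userspace has to expect.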

> I don't understand: why is this line of questioning important?

By sending those patches you ask me, as the maintainer of the project, to take
responsibility for your changes. In this case it goes even further: you ask me
to take responsibility for a new interface, which, since it is uAPI, can
*never* be removed after being released.

It is part of my job to act responsibly, which includes understanding what the
interface does, how it is intended to be used, and whether it is sufficient for
its purpose or has any flaws.

> 
> > >
> > > On Thu, Mar 20, 2025 at 2:34 PM Danilo Krummrich <dakr at kernel.org> wrote:
> > > >
> > > > On Wed, Mar 12, 2025 at 05:36:15PM -0400, Mel Henning wrote:
> > > > > diff --git a/include/uapi/drm/nouveau_drm.h b/include/uapi/drm/nouveau_drm.h
> > > >
> > > > Same here, please split the uAPI change in a separate commit.
> > > >
> > > > > index 33361784eb4e..e9638f4dd7e6 100644
> > > > > --- a/include/uapi/drm/nouveau_drm.h
> > > > > +++ b/include/uapi/drm/nouveau_drm.h
> > > > > @@ -448,6 +448,20 @@ struct drm_nouveau_get_zcull_info {
> > > > >       __u32 ctxsw_align;
> > > > >  };
> > > > >
> > > > > +struct drm_nouveau_set_zcull_ctxsw_buffer {
> > > > > +     /**
> > > > > +      * @ptr: The virtual address for the buffer, or null to bind nothing
> > > > > +      */
> > > > > +     __u64 addr;
> > > >
> > > > What is this buffer? Is this a GEM object backed buffer? How is it mapped?

