[RFC PATCH] drm/sched: Fix a UAF on drm_sched_fence::sched

Christian König christian.koenig at amd.com
Fri Aug 30 12:57:00 UTC 2024


Am 30.08.24 um 11:37 schrieb Boris Brezillon:
> Hi Christian,
>
> On Fri, 30 Aug 2024 10:14:18 +0200
> Christian König<christian.koenig at amd.com>  wrote:
>
>> Am 29.08.24 um 19:12 schrieb Boris Brezillon:
>>> dma_fence objects created by an entity might outlive the
>>> drm_gpu_scheduler this entity was bound to if those fences are retained
>>> by other objects, like a dma_buf resv. This means that
>>> drm_sched_fence::sched might be invalid when the resv is walked, which
>>> in turn leads to a UAF when dma_fence_ops::get_timeline_name() is called.
>>>
>>> This probably went unnoticed so far, because the drm_gpu_scheduler had
>>> the lifetime of the drm_device, so, unless you were removing the device,
>>> there were no reasons for the scheduler to be gone before its fences.
>> Nope, that is intentional design. get_timeline_name() is not safe to be
>> called after the fence signaled because that would cause circular
>> dependency problems.
> Do you mean the dma_fence layer should not call get_timeline_name()
> after it's been signalled (looking at the code/doc, it doesn't seem to
> be the case), or do you mean the drm_sched implementation of the fence
> interface is wrong and should assume the fence can live longer than its
> creator?

Neither; crashing in a debugfs corner use case is simply acceptable 
behavior.

The problem is rather that when you start to create schedulers on demand 
this isn't a rare corner case any more, but a much easier to trigger 
problem.

On the other hand the kernel has tons (and I would guess thousands) of 
debugfs files which can crash the kernel trivially. Quite a few of 
them don't take all the necessary locks and look into internal data 
structures without any guarantee that those won't go away in the middle 
of a sprintf()...

>> E.g. when you have hardware fences it can happen that fences reference a
>> driver module (for the function printing the name) and the module in
>> turn keeps fences around.
>>
>> So you easily end up with a module you can never unload.
> On the other hand, I think preventing the module from being unloaded is
> the right thing to do, because otherwise the dma_fence_ops might be
> gone when they get dereferenced in the release path. That's also a
> problem I noticed when I started working on the initial panthor driver
> without drm_sched. To solve that I ended up retaining a module ref for
> each fence created, and releasing this ref in the
> dma_fence_ops::release() function.

Yeah, that's what other drivers initially did as well, but it was 
reverted at some point and nobody really looked into it much.

The takeaway was that it's better to potentially crash on unload than to 
not allow unloading at all.

> drm_sched adds an indirection that allows drivers to not care, but
> that's still a problem if you end up unloading drm_sched while some of
> its drm_sched_fence fences are owned by external components.

And you're not the first one to report this. It's just that your 
solution looks better than what I've seen before.

>>> With the introduction of a new model where each entity has its own
>>> drm_gpu_scheduler instance, this situation is likely to happen every time
>>> a GPU context is destroyed and some of its fences remain attached to
>>> dma_buf objects still owned by other drivers/processes.
>>>
>>> In order to make drm_sched_fence_get_timeline_name() safe, we need to
>>> copy the scheduler name into our own refcounted object that's only
>>> destroyed when both the scheduler and all its fences are gone.
>>>
>>> The fact drm_sched_fence might have a reference to the drm_gpu_scheduler
>>> even after it's been released is worrisome though, but I'd rather
>>> discuss that with everyone than come up with a solution that's likely
>>> to end up being rejected.
>>>
>>> Note that the bug was found while repeatedly reading dma_buf's debugfs
>>> file, which, at some point, calls dma_resv_describe() on a resv that
>>> contains signalled fences coming from a destroyed GPU context.
>>> AFAIK, there's nothing invalid there.
>> Yeah, but reading debugfs comes with no guarantee that it won't crash the kernel.
>>
>> On the other hand the approach with a kref'ed string looks rather sane
>> to me. One comment on this below.
> There's still the problem I mentioned above (unloading drm_sched can
> make things crash). Are there any plans to fix that?

At least not at the moment, but your patch here looks like it makes this 
a possibility.

Depending on the uapi, taking a module reference for each created 
scheduler fence might even result in overflowing the reference count, but 
if you grab a module reference for each drm_sched_fence_timeline object 
then that will probably work quite fine.

Regards,
Christian.

> The simple option
> would be to prevent compiling drm_sched as a module, but that's not an
> option because it depends on DRM which is a tristate too. Maybe we
> could have drm_sched_fence.o linked statically, just like dma-fence.c
> is linked statically to prevent the stub ops from disappearing.
> Not sure if drm_sched_fence.c depends on symbols defined in
> sched_{main,entity}.c or other parts of the DRM subsystem though.
>
>>> +/**
>>> + * struct drm_sched_fence_timeline - Wrapper around the timeline name
>>> + *
>>> + * This is needed to cope with the fact dma_fence objects created by
>>> + * an entity might outlive the drm_gpu_scheduler this entity was bound
>>> + * to, making drm_sched_fence::sched invalid and leading to a UAF when
>>> + * dma_fence_ops::get_timeline_name() is called.
>>> + */
>>> +struct drm_sched_fence_timeline {
>>> +	/** @kref: Reference count of this timeline object. */
>>> +	struct kref			kref;
>>> +
>>> +	/**
>>> +	 * @name: Name of the timeline.
>>> +	 *
>>> +	 * This is currently a copy of drm_gpu_scheduler::name.
>>> +	 */
>>> +	const char			*name;
>> Make that a char name[] and embed the name into the structure. The macro
>> struct_size() can be used to calculate the size.
> Sure I can do that.
>
> Regards,
>
> Boris