[Intel-gfx] [PATCH 1/2] drm/i915: Fix a potential UAF at device unload

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Thu Sep 22 09:37:02 UTC 2022


On 21/09/2022 16:53, Das, Nirmoy wrote:
> 
> On 9/9/2022 10:55 AM, Tvrtko Ursulin wrote:
>>
>> On 08/09/2022 21:07, Nirmoy Das wrote:
>>> i915_gem_drain_freed_objects() might not be enough to
>>> free all the objects and RCU delayed work might get
>>> scheduled after the i915 device struct gets freed.
>>>
>>> Call i915_gem_drain_workqueue() to catch all RCU delayed work.
>>>
>>> Suggested-by: Chris Wilson <chris.p.wilson at intel.com>
>>> Signed-off-by: Nirmoy Das <nirmoy.das at intel.com>
>>> ---
>>>   drivers/gpu/drm/i915/i915_gem.c | 2 +-
>>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
>>> index 0f49ec9d494a..e8a053eaaa89 100644
>>> --- a/drivers/gpu/drm/i915/i915_gem.c
>>> +++ b/drivers/gpu/drm/i915/i915_gem.c
>>> @@ -1254,7 +1254,7 @@ void i915_gem_init_early(struct drm_i915_private *dev_priv)
>>>     void i915_gem_cleanup_early(struct drm_i915_private *dev_priv)
>>>   {
>>> -    i915_gem_drain_freed_objects(dev_priv);
>>> +    i915_gem_drain_workqueue(dev_priv);
>>>       GEM_BUG_ON(!llist_empty(&dev_priv->mm.free_list));
>>>       GEM_BUG_ON(atomic_read(&dev_priv->mm.free_count));
>>>       drm_WARN_ON(&dev_priv->drm, dev_priv->mm.shrink_count);
>>
>>
>> Help me spot the place where the RCU free worker schedules itself back 
>> to free more objects - if I got the rationale here right?
> (Sorry for the late reply, I was on leave last week.)
> 
> I had to clarify this with Chris. So when the driver frees an object, it 
> calls dma_resv_fini(), which drops the reference on all the fences in it. 
> A fence might itself hold a reference to an object, and releasing that 
> fence can in turn drop a reference to another object.

Hmm, I couldn't find that in the code, but never mind. It's just a 
stronger version of the same flushing, and it's not on a path where speed 
matters, so feel free to go with it.
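
For reference, this is roughly what the two helpers look like (a 
simplified sketch from memory of drivers/gpu/drm/i915/i915_gem.h, so the 
bodies may not match the exact upstream code):

static inline void
i915_gem_drain_freed_objects(struct drm_i915_private *i915)
{
	/* Flush the deferred free worker until no freed objects remain. */
	while (atomic_read(&i915->mm.free_count)) {
		flush_work(&i915->mm.free_work);
		rcu_barrier();
	}
}

static inline void
i915_gem_drain_workqueue(struct drm_i915_private *i915)
{
	int pass = 3;

	/*
	 * Also flush the driver workqueue, and make a few passes so that
	 * work which queues up more frees (e.g. a fence release dropping
	 * the last reference to another object) is caught as well.
	 */
	do {
		flush_workqueue(i915->wq);
		rcu_barrier();
		i915_gem_drain_freed_objects(i915);
	} while (--pass);

	drain_workqueue(i915->wq);
}

So the workqueue variant ends up being a superset of the plain 
freed-objects drain, which is why switching over in 
i915_gem_cleanup_early() is just stricter flushing before the final 
GEM_BUG_ON/WARN_ON checks.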

Regards,

Tvrtko

