[Intel-gfx] [PATCH] drm/i915: Remove second level open-coded rcu work
Tvrtko Ursulin
tvrtko.ursulin at linux.intel.com
Thu Feb 28 09:55:46 UTC 2019
On 28/02/2019 09:43, Chris Wilson wrote:
> Quoting Tvrtko Ursulin (2019-02-28 09:38:36)
>>
>> On 28/02/2019 08:29, Chris Wilson wrote:
>>> @@ -143,7 +141,15 @@ static u32 __i915_gem_park(struct drm_i915_private *i915)
>>>
>>> intel_display_power_put(i915, POWER_DOMAIN_GT_IRQ, wakeref);
>>>
>>> - return i915->gt.epoch;
>>> + /*
>>> + * When we are idle, it is an opportune time to reap our caches.
>>> + * However, we have many objects that utilise RCU and the ordered
>>> + * i915->wq that this work is executing on. To try and flush any
>>> + * pending frees now we are idle, we first wait for an RCU grace
>>> + * period, and then queue a task (that will run last on the wq) to
>>> + * shrink and re-optimize the caches.
>>> + */
>>> + i915_globals_park();
>>
>> I think this comment would be better placed in i915_globals_park.
>
> In i915_globals_park(), we have
>
> /*
> * Defer shrinking the global slab caches (and other work) until
> * after a RCU grace period has completed with no activity. This
> * is to try and reduce the latency impact on the consumers caused
> * by us shrinking the caches the same time as they are trying to
> * allocate, with the assumption being that if we idle long enough
> * for an RCU grace period to elapse since the last use, it is likely
> * to be longer until we need the caches again.
> */
>
> Yeah, the old comment is a bit too verbose now, but I think it's still
> worth having a comment there to explain what's about to be done and why
> we call it now.
>
> /* Tell the world we are idle and reap the benefits! */
>
> Too subtle?
Maybe even no comment is needed (gasp!). When I look at the function I see:
__i915_gem_park(...)
{
...
intel_engines_park(i915);
i915_timelines_park(i915);
i915_pmu_gt_parked(i915);
i915_vma_parked(i915);
...
i915_globals_park();
...
}
So it seems pretty obvious that the different components are parked from here. :)
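[For context, the pattern discussed above — wait for an RCU grace period, then queue a task on the ordered wq so it runs after any pending frees — is what the kernel's rcu_work helper provides. A minimal illustrative sketch, not the actual i915 code; shrink_caches() and ordered_wq are hypothetical names:

	#include <linux/workqueue.h>
	#include <linux/rcupdate.h>

	static struct rcu_work park_work;

	static void park_worker(struct work_struct *wrk)
	{
		/* Runs only after a full RCU grace period has elapsed,
		 * so it is safe to shrink the slab caches that RCU-freed
		 * objects were allocated from.
		 */
		shrink_caches();	/* hypothetical helper */
	}

	static void globals_park(struct workqueue_struct *ordered_wq)
	{
		INIT_RCU_WORK(&park_work, park_worker);
		/* Waits for a grace period, then queues the work item
		 * onto the ordered wq, behind already-queued frees.
		 */
		queue_rcu_work(ordered_wq, &park_work);
	}
]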
Regards,
Tvrtko