[Intel-gfx] [PATCH 3/5] drm/i915/guc: Add GuC ADS - scheduler policies
Yu Dai
yu.dai at intel.com
Thu Dec 17 10:36:02 PST 2015
On 12/16/2015 11:39 PM, Chris Wilson wrote:
> On Wed, Dec 16, 2015 at 01:40:53PM -0800, yu.dai at intel.com wrote:
> > From: Alex Dai <yu.dai at intel.com>
> >
> > GuC supports different scheduling policies for its four internal
> > queues. Currently these are all set to the same default values as
> > the KMD_NORMAL queue.
> >
> > In particular, POLICY_MAX_NUM_WI is set to 15 to match the maximum
> > number of entries in the GuC internal submit queue, avoiding an
> > out-of-space problem. This value is the maximum number of work
> > items allowed to be queued for one DPC process. A smaller value
> > lets the GuC schedule more frequently, while a larger one increases
> > the chance to optimize commands (such as collapsing commands from
> > the same LRC) at the risk of leaving the CS idle.
> >
> > Signed-off-by: Alex Dai <yu.dai at intel.com>
> > ---
> > drivers/gpu/drm/i915/i915_guc_submission.c | 31 +++++++++++++++++++-
> > drivers/gpu/drm/i915/intel_guc_fwif.h | 45 ++++++++++++++++++++++++++++++
> > 2 files changed, 75 insertions(+), 1 deletion(-)
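The intel_guc_fwif.h hunk is not quoted in this reply. Reconstructed
from the usage in the i915_guc_submission.c hunk below, the new
definitions would look roughly as follows; the exact field types,
ordering and any reserved/padding fields are assumptions, not the
literal patch content:

    /* Max work items the GuC will queue per DPC invocation; matches
     * the GuC-internal submit queue depth of 15 noted above.
     */
    #define POLICY_MAX_NUM_WI	15

    struct guc_policy {
    	u32 execution_quantum;
    	u32 preemption_time;
    	u32 fault_time;
    	u32 policy_flags;
    } __packed;

    struct guc_policies {
    	struct guc_policy policy[GUC_CTX_PRIORITY_NUM][I915_NUM_RINGS];
    	u32 dpc_promote_time;
    	u32 max_num_work_items;
    	u32 is_valid;
    } __packed;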
> >
> > diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
> > index 66d85c3..a5c555c 100644
> > --- a/drivers/gpu/drm/i915/i915_guc_submission.c
> > +++ b/drivers/gpu/drm/i915/i915_guc_submission.c
> > @@ -842,17 +842,39 @@ static void guc_create_log(struct intel_guc *guc)
> >  	guc->log_flags = (offset << GUC_LOG_BUF_ADDR_SHIFT) | flags;
> >  }
> >  
> > +static void init_guc_policies(struct guc_policies *policies)
> > +{
> > +	struct guc_policy *policy;
> > +	u32 p, i;
> > +
> > +	policies->dpc_promote_time = 500000;
> > +	policies->max_num_work_items = POLICY_MAX_NUM_WI;
> > +
> > +	for (p = 0; p < GUC_CTX_PRIORITY_NUM; p++)
> > +	for (i = 0; i < I915_NUM_RINGS; i++) {
>
> Please indent this properly.
>
> > +		policy = &policies->policy[p][i];
> > +
> > +		policy->execution_quantum = 1000000;
> > +		policy->preemption_time = 500000;
> > +		policy->fault_time = 250000;
> > +		policy->policy_flags = 0;
> > +	}
> > +
> > +	policies->is_valid = 1;
> > +}
> > +
> >  static void guc_create_ads(struct intel_guc *guc)
> >  {
> >  	struct drm_i915_private *dev_priv = guc_to_i915(guc);
> >  	struct drm_i915_gem_object *obj;
> >  	struct guc_ads *ads;
> > +	struct guc_policies *policies;
> >  	struct intel_engine_cs *ring;
> >  	struct page *page;
> >  	u32 size, i;
> >  
> >  	/* The ads obj includes the struct itself and buffers passed to GuC */
> > -	size = sizeof(struct guc_ads);
> > +	size = sizeof(struct guc_ads) + sizeof(struct guc_policies);
> >  
> >  	obj = guc->ads_obj;
> >  	if (!obj) {
> > @@ -884,6 +906,13 @@ static void guc_create_ads(struct intel_guc *guc)
> >  	for_each_ring(ring, dev_priv, i)
> >  		ads->eng_state_size[i] = intel_lr_context_size(ring);
> >  
> > +	/* GuC scheduling policies */
> > +	policies = (void *)ads + sizeof(struct guc_ads);
> > +	init_guc_policies(policies);
>
> Please limit atomic context to only the critical section, i.e. don't
> make me have to read every single function to check for violations.
Could you clarify? I am not sure which atomic context and critical
section you are referring to here.

Alex
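For context: the policies are written through a kmap_atomic()
mapping (see the kunmap_atomic() below), and kmap_atomic() disables
preemption, so everything up to the matching kunmap_atomic() runs in
atomic context. The review request is to keep that window minimal,
so a reviewer does not have to audit every function called inside it
(init_guc_policies() here) for sleeping calls. A minimal sketch of
one way to narrow the critical section, assuming struct guc_policies
is small enough to build on the stack first (otherwise a kmalloc'd
scratch copy would do):

    struct guc_policies tmp;

    /* Fill the policies with preemption still enabled... */
    init_guc_policies(&tmp);

    ads = kmap_atomic(page);	/* atomic context starts here */
    /* ...and keep only the copy inside the atomic mapping. */
    memcpy((void *)ads + sizeof(struct guc_ads), &tmp, sizeof(tmp));
    kunmap_atomic(ads);		/* atomic context ends here */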
> > +
> > +	ads->scheduler_policies = i915_gem_obj_ggtt_offset(obj) +
> > +			sizeof(struct guc_ads);
> > +
> >  	kunmap_atomic(ads);
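For reference, the resulting layout of the ADS object after this
patch, which is what the scheduler_policies address above encodes:

    i915_gem_obj_ggtt_offset(obj) --> +----------------------+
                                      |  struct guc_ads      |
    ads->scheduler_policies --------> +----------------------+
                                      |  struct guc_policies |
                                      +----------------------+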
>