[RFC PATCH] drm/i915/gvt: split sched_policy for adding more policies
Zhenyu Wang
zhenyuw at linux.intel.com
Wed Dec 18 08:48:30 UTC 2019
On 2019.12.18 16:32:28 +0800, Hang Yuan wrote:
> On 12/18/19 4:32 PM, Zhenyu Wang wrote:
> > On 2019.12.18 15:07:40 +0800, Hang Yuan wrote:
> > > On 12/18/19 1:43 PM, Zhenyu Wang wrote:
> > > > On 2019.12.18 13:07:34 +0800, Hang Yuan wrote:
> > > > > On 12/18/19 10:49 AM, Zhenyu Wang wrote:
> > > > > > On 2019.12.17 18:32:43 +0800, hang.yuan at linux.intel.com wrote:
> > > > > > > From: Hang Yuan <hang.yuan at linux.intel.com>
> > > > > > >
> > > > > > > Leave the common policy code in sched_policy.c and move time-based
> > > > > > > scheduling into a new file, sched_policy_weight.c. Add a module
> > > > > > > parameter "gvt_scheduler_policy" to choose a policy.
> > > > > > >
> > > > > >
> > > > > > Before we plan to split the scheduler: what's the requirement for the
> > > > > > new policy? What's the design? Would it be integrated with the default
> > > > > > weight for different vGPU types? I need to understand that first to
> > > > > > decide whether we really have to have separate schedulers, which I'm
> > > > > > not in favor of if it can't be done by default..
> > > > > >
> > > > > The new policy is not mature yet. We just see one user scenario with
> > > > > two vGPUs: one in a foreground VM and another in a background VM. For some
> > > > > reason the background VM can't be paused, but the end user is not actively
> > > > > using it. So its vGPU doesn't seem to need the fixed capacity that the
> > > > > current scheduling policy guarantees.
> > > >
> > > > True.
> > > >
> > > > > Instead, we want to serve the foreground vGPU as well as possible and
> > > > > just avoid timeouts for the gfx driver in the background VM. Here is some
> > > > > rough code, based on the previous patch, that schedules vGPUs by priority
> > > > > and uses a timer to raise a vGPU's priority if it waits too long.
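The priority-plus-aging-timer idea described above can be modeled in a few lines. This is a minimal userspace sketch, not the actual patch: all names, thresholds, and the boost amount are illustrative assumptions, and the real GVT code would hook the aging step into a kernel timer rather than an explicit tick function.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative constants, not taken from the GVT code base. */
#define WAIT_THRESHOLD_MS 100  /* waiting this long counts as starving */
#define PRIO_BOOST        1    /* priority added per aging tick once starving */

struct vgpu_sched {
	int prio;        /* higher value = scheduled first */
	long wait_ms;    /* time spent waiting since last run */
	bool has_work;
};

/* Aging tick: boost the priority of any vGPU that has waited too long,
 * so a low-priority background vGPU cannot starve and time out. */
static void aging_tick(struct vgpu_sched *v, int n, long tick_ms)
{
	for (int i = 0; i < n; i++) {
		if (!v[i].has_work)
			continue;
		v[i].wait_ms += tick_ms;
		if (v[i].wait_ms >= WAIT_THRESHOLD_MS)
			v[i].prio += PRIO_BOOST;
	}
}

/* Pick the runnable vGPU with the highest (possibly boosted) priority. */
static int pick_next(struct vgpu_sched *v, int n)
{
	int best = -1;
	for (int i = 0; i < n; i++) {
		if (!v[i].has_work)
			continue;
		if (best < 0 || v[i].prio > v[best].prio)
			best = i;
	}
	if (best >= 0)
		v[best].wait_ms = 0;  /* it is about to run */
	return best;
}
```

With this shape, a high-priority foreground vGPU wins every slice until the background vGPU's accumulated wait pushes its boosted priority past the foreground's, which is exactly the "avoid timeout" behavior the mail describes.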
> > > >
> > > > yeah, the current balancing method is still based on a fixed weight per
> > > > target vGPU type. I think you want fine-grained control of the run
> > > > timeslice based on each vGPU's activity? Or do you want fixed priority?
> > > > I think foreground and background can be switched, right?
> > > >
> > > > Could we apply vGPU activity statistics in the current scheduler? The
> > > > vGPU type's weight is a kind of static default allocation; we'd still use
> > > > that as the base indicator for the vGPU timeslice, but we'd also
> > > > dynamically update the real execution timeslice based on each vGPU's
> > > > history. From that point of view, we don't need another scheduler.
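One way to read the suggestion above is: keep the static type weight as a base timeslice, but scale the effective slice by a smoothed measure of how busy the vGPU has recently been. The sketch below is an assumption about how that could look, not code from the driver; the EMA shift, the percentage representation, and the minimum-slice floor are all invented for illustration.

```c
#include <assert.h>

#define HIST_SHIFT   2    /* EMA smoothing: move 1/4 of the way per sample */
#define MIN_SLICE_US 500  /* floor so an idle vGPU still gets some time */

struct vgpu_stat {
	unsigned int base_slice_us;  /* from the static vGPU-type weight */
	unsigned int busy_pct_ema;   /* 0..100, smoothed recent utilization */
};

/* After each slice, fold the observed busy ratio into the history. */
static void record_slice(struct vgpu_stat *s, unsigned int busy_us,
			 unsigned int slice_us)
{
	unsigned int pct = busy_us * 100 / slice_us;
	int delta = (int)pct - (int)s->busy_pct_ema;

	s->busy_pct_ema += delta / (1 << HIST_SHIFT);
}

/* Effective timeslice: static base scaled by recent activity, clamped. */
static unsigned int effective_slice(const struct vgpu_stat *s)
{
	unsigned int us = s->base_slice_us * s->busy_pct_ema / 100;

	return us < MIN_SLICE_US ? MIN_SLICE_US : us;
}
```

A mostly idle background vGPU would thus see its slice shrink toward the floor over a few scheduling rounds, while the static weight still bounds how much a busy vGPU can claim.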
> > > >
> > > Yes, a VM can be switched between foreground and background. I think
> > > fine-grained control may not fit this case, because the "weight" is
> > > determined by the switch action, not by historical data.
> > >
> >
> > Or we could have a 'nice' value, set per vGPU from sysfs, which the
> > current scheduler could handle for that purpose?
> >
> Yes, we could also use sysfs to change the weight at runtime for this case.
> Thanks for the comments.
Weight is fixed per vGPU type, so I don't think it should be changeable via
sysfs; but a nice value could be used as the indicator for the required
scheduling policy.
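To make the nice-value idea concrete: a per-vGPU nice value exposed through a sysfs attribute could be mapped onto a timeslice scale, loosely following the CPU scheduler's convention that each nice step changes the share by roughly 25%. This sketch is purely hypothetical; the range, baseline, and ratio are assumptions, and nothing here reflects an actual GVT interface.

```c
#include <assert.h>

/* Hypothetical mapping from a per-vGPU nice value (-20..19, assumed to
 * come from a sysfs attribute) to a fixed-point scale (1024 = nice 0),
 * with each step changing the share by about 25%. */
static unsigned int nice_to_scale(int nice)
{
	unsigned int scale = 1024;  /* baseline share at nice 0 */

	if (nice < -20)
		nice = -20;
	if (nice > 19)
		nice = 19;
	for (int n = 0; n < -nice; n++)  /* negative nice: bigger share */
		scale = scale * 5 / 4;
	for (int n = 0; n < nice; n++)   /* positive nice: smaller share */
		scale = scale * 4 / 5;
	return scale;
}

/* Effective timeslice: the type's base slice scaled by the nice value,
 * leaving the static type weight itself untouched. */
static unsigned int nice_slice_us(unsigned int base_us, int nice)
{
	return base_us * nice_to_scale(nice) / 1024;
}
```

The appeal of this split is that the static type weight stays authoritative, while the nice value gives the admin a coarse runtime knob for the foreground/background case discussed above.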
--
Open Source Technology Center, Intel ltd.
$gpg --keyserver wwwkeys.pgp.net --recv-keys 4D781827