On Thu, Jan 21, 2016 at 12:03 PM Paulo Zanoni <paulo.r.zanoni@intel.com> wrote:
> Instead of waiting for 50ms, just wait until the next vblank, since
> it's the minimum requirement. The whole infrastructure of FBC is based
> on vblanks, so waiting for X vblanks instead of X milliseconds sounds
> like the correct way to go. Besides, 50ms may be less than a vblank on
> super slow modes that may or may not exist.
>
> There are some small improvements in PC state residency (due to the
> fact that we're now using 16ms for the common modes instead of 50ms),
> but the biggest advantage is still the correctness of being
> vblank-based instead of time-based.
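
Just to summarize the core change for anyone skimming (this is my own
simplified extract of the hunks below, not new code): the fixed sleep is
replaced by a wait on the pipe's vblank counter, so the delay tracks the
mode's refresh rate instead of a hard-coded 50ms.

/* Before (simplified): sleep a fixed 50ms measured from the moment the
 * work was scheduled, regardless of the mode's refresh rate. */
wait_remaining_ms_from_jiffies(work->enable_jiffies, 50);

/* After (simplified): sleep on the pipe's vblank wait queue until the
 * vblank counter moves past the value sampled at schedule time, with
 * 50ms acting only as an upper bound. At 60Hz this is ~16ms. */
wait_event_timeout(vblank->queue,
		   drm_crtc_vblank_count(&crtc->base) != work->scheduled_vblank,
		   msecs_to_jiffies(50));
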
>
> v2:
> - Rebase after changing the patch order.
> - Update the commit message.
> v3:
> - Fix bogus use of vblank_get() instead of vblank_count() (Ville).
> - Don't forget to call drm_crtc_vblank_{get,put} (Chris, Ville).
> - Adjust the performance details in the commit message.
> v4:
> - Don't grab the FBC mutex just to grab the vblank (Maarten).
>
> Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
> ---
>  drivers/gpu/drm/i915/i915_drv.h  |  2 +-
>  drivers/gpu/drm/i915/intel_fbc.c | 39 +++++++++++++++++++++++++++++----------
>  2 files changed, 30 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
> index 204661f..d8f21f0 100644
> --- a/drivers/gpu/drm/i915/i915_drv.h
> +++ b/drivers/gpu/drm/i915/i915_drv.h
> @@ -921,9 +921,9 @@ struct i915_fbc {
>  
>  	struct intel_fbc_work {
>  		bool scheduled;
> +		u32 scheduled_vblank;
>  		struct work_struct work;
>  		struct drm_framebuffer *fb;
> -		unsigned long enable_jiffies;
>  	} work;
>  
>  	const char *no_fbc_reason;
> diff --git a/drivers/gpu/drm/i915/intel_fbc.c b/drivers/gpu/drm/i915/intel_fbc.c
> index a1988a4..3993b43 100644
> --- a/drivers/gpu/drm/i915/intel_fbc.c
> +++ b/drivers/gpu/drm/i915/intel_fbc.c
> @@ -381,7 +381,17 @@ static void intel_fbc_work_fn(struct work_struct *__work)
>  		container_of(__work, struct drm_i915_private, fbc.work.work);
>  	struct intel_fbc_work *work = &dev_priv->fbc.work;
>  	struct intel_crtc *crtc = dev_priv->fbc.crtc;
> -	int delay_ms = 50;
> +	struct drm_vblank_crtc *vblank = &dev_priv->dev->vblank[crtc->pipe];
> +
> +	if (drm_crtc_vblank_get(&crtc->base)) {
> +		DRM_ERROR("vblank not available for FBC on pipe %c\n",
> +			  pipe_name(crtc->pipe));
> +
> +		mutex_lock(&dev_priv->fbc.lock);
> +		work->scheduled = false;

I couldn't understand this part: it doesn't look related to the
s/time/vblank conversion. Could you please explain?

> +		mutex_unlock(&dev_priv->fbc.lock);
> +		return;
> +	}
>  
>  retry:
>  	/* Delay the actual enabling to let pageflipping cease and the
> @@ -390,14 +400,16 @@ retry:
>  	 * vblank to pass after disabling the FBC before we attempt
>  	 * to modify the control registers.
>  	 *
> -	 * A more complicated solution would involve tracking vblanks
> -	 * following the termination of the page-flipping sequence
> -	 * and indeed performing the enable as a co-routine and not
> -	 * waiting synchronously upon the vblank.
> -	 *
>  	 * WaFbcWaitForVBlankBeforeEnable:ilk,snb

Hm, is this workaround still valid for newer platforms, or should we put
an "if gen <= 6" around these checks?

> +	 *
> +	 * It is also worth mentioning that since work->scheduled_vblank can be
> +	 * updated multiple times by other threads, hitting the timeout is
> +	 * not an error condition. We'll just end up hitting the "goto retry"
> +	 * case below.
>  	 */
> -	wait_remaining_ms_from_jiffies(work->enable_jiffies, delay_ms);
> +	wait_event_timeout(vblank->queue,
> +		drm_crtc_vblank_count(&crtc->base) != work->scheduled_vblank,
> +		msecs_to_jiffies(50));
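
One thing worth spelling out (my reading, not text from the patch):
wait_event_timeout() returns 0 on timeout and non-zero if the condition
became true, but the return value is deliberately ignored here. The
authoritative check is the counter re-read under fbc.lock below, so a
timeout just means one more trip through the retry path. Roughly, with
the cancellation check left out:

for (;;) {
	wait_event_timeout(vblank->queue,
			   drm_crtc_vblank_count(&crtc->base) !=
			   work->scheduled_vblank,
			   msecs_to_jiffies(50));
	mutex_lock(&dev_priv->fbc.lock);
	if (drm_crtc_vblank_count(&crtc->base) != work->scheduled_vblank)
		break;	/* a vblank has passed since the last (re)schedule */
	/* re-scheduled (or no vblank yet) while we slept: wait again */
	mutex_unlock(&dev_priv->fbc.lock);
}
/* ... activate FBC with fbc.lock held, then unlock and vblank_put() ... */
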
>  
>  	mutex_lock(&dev_priv->fbc.lock);
>  
> @@ -406,8 +418,7 @@ retry:
>  		goto out;
>  
>  	/* Were we delayed again while this function was sleeping? */
> -	if (time_after(work->enable_jiffies + msecs_to_jiffies(delay_ms),
> -		       jiffies)) {
> +	if (drm_crtc_vblank_count(&crtc->base) == work->scheduled_vblank) {
>  		mutex_unlock(&dev_priv->fbc.lock);
>  		goto retry;
>  	}
> @@ -419,6 +430,7 @@ retry:
>  
>  out:
>  	mutex_unlock(&dev_priv->fbc.lock);
> +	drm_crtc_vblank_put(&crtc->base);
>  }
>  
>  static void intel_fbc_cancel_work(struct drm_i915_private *dev_priv)
> @@ -434,13 +446,20 @@ static void intel_fbc_schedule_activation(struct intel_crtc *crtc)
>  
>  	WARN_ON(!mutex_is_locked(&dev_priv->fbc.lock));
>  
> +	if (drm_crtc_vblank_get(&crtc->base)) {
> +		DRM_ERROR("vblank not available for FBC on pipe %c\n",
> +			  pipe_name(crtc->pipe));
> +		return;
> +	}
> +
>  	/* It is useless to call intel_fbc_cancel_work() in this function since
>  	 * we're not releasing fbc.lock, so it won't have an opportunity to grab
>  	 * it to discover that it was cancelled. So we just update the expected
>  	 * jiffy count. */
>  	work->fb = crtc->base.primary->fb;
>  	work->scheduled = true;
> -	work->enable_jiffies = jiffies;
> +	work->scheduled_vblank = drm_crtc_vblank_count(&crtc->base);
> +	drm_crtc_vblank_put(&crtc->base);
>  
>  	schedule_work(&work->work);
>  }
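
And to double-check my understanding of the handshake (my own summary,
not from the patch): each call to intel_fbc_schedule_activation()
re-samples the counter into work->scheduled_vblank, so FBC only gets
activated once a full vblank has passed since the *last* (re)schedule.
A worked example, with the counter at N when FBC is first scheduled:

/*
 * 1. intel_fbc_schedule_activation(): scheduled_vblank = N, queue work.
 * 2. Worker sleeps until drm_crtc_vblank_count() != N, wakes at N+1.
 * 3. No re-schedule happened: N+1 != scheduled_vblank, so activate FBC.
 *
 * If a page flip re-schedules the work between steps 2 and 3 (so that
 * scheduled_vblank is re-sampled as N+1), the locked check sees the two
 * values equal and the worker waits for one more vblank instead.
 */
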
> --
> 2.6.4
>
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/intel-gfx