<div dir="ltr">I tested here on HSW a full sw nuke/cache clean and I didn't liked the result.<div>It seems to compress less than the hw one and to recompress everything a lot and stay less time compressed. </div><div><br>

So, imho, v3 is the way to go.

On Mon, Aug 4, 2014 at 3:51 AM, Rodrigo Vivi <rodrigo.vivi@intel.com> wrote:

<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="">According to spec FBC on BDW and HSW are identical without any gaps.<br>
So let's copy the nuke and let FBC really start compressing stuff.<br>
<br>
Without this patch we can verify with false color that nothing is being<br>
compressed. With the nuke in place and false color it is possible<br>
to see false color debugs.<br>
<br>
</div>Unfortunatelly on some rings like BCS on BDW we have to avoid Bits 22:18 on<br>
LRIs due to a high risk of hung. So, when using Blt ring for frontbuffer rend<br>
cache would never been cleaned and FBC would stop compressing buffer.<br>
One alternative is to cache clean on software frontbuffer tracking.<br>
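
For context, the ring-based clean/nuke being avoided here boils down to an
LRI to MSG_FBC_REND_STATE emitted from the flush path, along the lines of the
existing gen7 helper. A simplified sketch (hypothetical name, from memory,
without the extra bookkeeping the real gen7_ring_fbc_flush() does):

static int gen7_ring_fbc_flush_sketch(struct intel_engine_cs *ring, u32 value)
{
	int ret;

	ret = intel_ring_begin(ring, 4);
	if (ret)
		return ret;

	/* MMIO write to MSG_FBC_REND_STATE from the ring: this is the kind
	 * of LRI that is too risky on the BDW BCS ring, hence the sw cache
	 * clean from frontbuffer tracking instead. */
	intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
	intel_ring_emit(ring, MSG_FBC_REND_STATE);
	intel_ring_emit(ring, value);
	intel_ring_emit(ring, MI_NOOP);
	intel_ring_advance(ring);

	return 0;
}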

v2: Fix rebase conflict.
v3: Do not clean cache on BCS ring. Instead use sw frontbuffer tracking.
<div class=""><br>
Signed-off-by: Rodrigo Vivi <<a href="mailto:rodrigo.vivi@intel.com">rodrigo.vivi@intel.com</a>><br>
---<br>
 drivers/gpu/drm/i915/i915_drv.h         |  1 +
 drivers/gpu/drm/i915/intel_display.c    |  3 +++
 drivers/gpu/drm/i915/intel_pm.c         | 10 ++++++++++
 drivers/gpu/drm/i915/intel_ringbuffer.c | 10 +++++++++-
 4 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 2a372f2..25d7365 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2713,6 +2713,7 @@ extern void intel_modeset_setup_hw_state(struct drm_device *dev,
 extern void i915_redisable_vga(struct drm_device *dev);
 extern void i915_redisable_vga_power_on(struct drm_device *dev);
 extern bool intel_fbc_enabled(struct drm_device *dev);
+extern void gen8_fbc_sw_flush(struct drm_device *dev, u32 value);
 extern void intel_disable_fbc(struct drm_device *dev);
 extern bool ironlake_set_drps(struct drm_device *dev, u8 val);
 extern void intel_init_pch_refclk(struct drm_device *dev);
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 883af0b..c8421cd 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -9044,6 +9044,9 @@ void intel_frontbuffer_flush(struct drm_device *dev,
 	intel_mark_fb_busy(dev, frontbuffer_bits, NULL);
 
 	intel_edp_psr_flush(dev, frontbuffer_bits);
+
+	if (IS_GEN8(dev))
+		gen8_fbc_sw_flush(dev, FBC_REND_CACHE_CLEAN);
 }
 
 /**
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 684dc5f..de07d3e 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -345,6 +345,16 @@ bool intel_fbc_enabled(struct drm_device *dev)
 	return dev_priv->display.fbc_enabled(dev);
 }
 
+void gen8_fbc_sw_flush(struct drm_device *dev, u32 value)
+{
+	struct drm_i915_private *dev_priv = dev->dev_private;
+
+	if (!IS_GEN8(dev))
+		return;
+
+	I915_WRITE(MSG_FBC_REND_STATE, value);
+}
+
 static void intel_fbc_work_fn(struct work_struct *__work)
 {
 	struct intel_fbc_work *work =
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 2908896..2fe871c 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -406,6 +406,7 @@ gen8_render_ring_flush(struct intel_engine_cs *ring,
 {
 	u32 flags = 0;
 	u32 scratch_addr = ring->scratch.gtt_offset + 2 * CACHELINE_BYTES;
+	int ret;
 
 	flags |= PIPE_CONTROL_CS_STALL;
 
@@ -424,7 +425,14 @@ gen8_render_ring_flush(struct intel_engine_cs *ring,
 		flags |= PIPE_CONTROL_GLOBAL_GTT_IVB;
 	}
 
-	return gen8_emit_pipe_control(ring, flags, scratch_addr);
+	ret = gen8_emit_pipe_control(ring, flags, scratch_addr);
+	if (ret)
+		return ret;
+
+	if (!invalidate_domains && flush_domains)
+		return gen7_ring_fbc_flush(ring, FBC_REND_NUKE);
+
+	return 0;
 }
 
 static void ring_write_tail(struct intel_engine_cs *ring,
</div><span class="HOEnZb"><font color="#888888">--<br>
1.9.3<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
_______________________________________________<br>
Intel-gfx mailing list<br>
<a href="mailto:Intel-gfx@lists.freedesktop.org">Intel-gfx@lists.freedesktop.org</a><br>
<a href="http://lists.freedesktop.org/mailman/listinfo/intel-gfx" target="_blank">http://lists.freedesktop.org/mailman/listinfo/intel-gfx</a><br>
</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div>Rodrigo Vivi</div><div>Blog: <a href="http://blog.vivi.eng.br" target="_blank">http://blog.vivi.eng.br</a></div><div> </div>
</div>