<html>
<head>
<base href="https://bugs.freedesktop.org/">
</head>
<body><table border="1" cellspacing="0" cellpadding="8">
<tr>
<th>Bug ID</th>
<td><a class="bz_bug_link
bz_status_NEW "
title="NEW - 3% perf drop in GfxBench Manhattan 3.0, 3.1 and CarChase test-cases"
href="https://bugs.freedesktop.org/show_bug.cgi?id=111090">111090</a>
</td>
</tr>
<tr>
<th>Summary</th>
<td>3% perf drop in GfxBench Manhattan 3.0, 3.1 and CarChase test-cases
</td>
</tr>
<tr>
<th>Product</th>
<td>DRI
</td>
</tr>
<tr>
<th>Version</th>
<td>DRI git
</td>
</tr>
<tr>
<th>Hardware</th>
<td>Other
</td>
</tr>
<tr>
<th>OS</th>
<td>All
</td>
</tr>
<tr>
<th>Status</th>
<td>NEW
</td>
</tr>
<tr>
<th>Severity</th>
<td>normal
</td>
</tr>
<tr>
<th>Priority</th>
<td>medium
</td>
</tr>
<tr>
<th>Component</th>
<td>DRM/Intel
</td>
</tr>
<tr>
<th>Assignee</th>
<td>intel-gfx-bugs@lists.freedesktop.org
</td>
</tr>
<tr>
<th>Reporter</th>
<td>eero.t.tamminen@intel.com
</td>
</tr>
<tr>
<th>QA Contact</th>
<td>intel-gfx-bugs@lists.freedesktop.org
</td>
</tr>
<tr>
<th>CC</th>
<td>intel-gfx-bugs@lists.freedesktop.org
</td>
</tr></table>
<p>
<div>
<pre>Between the following drm-tip commits:
* e8f06c34fa: 2019y-05m-27d-14h-41m-23s UTC integration manifest
* 8991a80f85: 2019y-05m-28d-15h-47m-22s UTC integration manifest

there was a performance drop in the GfxBench Manhattan 3.0 (gl_manhattan) &
3.1 (gl_manhattan31) and CarChase (gl_4) test-cases:
* testfw_app --gfx glfw --gl_api desktop_core --width 1920 --height 1080
  --fullscreen 1 --test_id gl_manhattan

The drop is visible in both onscreen and offscreen tests, and both with X
server / Unity and with Weston.

The performance drop is clearest with Weston / GLES on BXT, where it is close
to 3% regardless of whether the test-case runs under Xwayland or is Wayland
native. With X server / GL, the drop is visible only in Manhattan 3.0 on BXT,
and marginally in Manhattan 3.1 on KBL GT3e.

No test-cases improved their performance over the same period.

The regression isn't visible on BDW GT2 or SKL GT2 (I have data only from
these 5 machines).

Looking at the iowait and RAPL data, at least the Manhattan 3.0 test-case is
now somewhat more GPU (IO) bound, and the GPU (uncore) draws clearly less
power. I.e. it seems that since the end of May, the kernel doesn't let Mesa
utilize the GPU as fully as before.</pre>
</div>
</p>
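<p>The iowait and RAPL observations above can be collected with standard Linux
interfaces. A minimal sketch, assuming a Linux system with the
<code>intel-rapl</code> powercap sysfs domain (the exact domain layout varies
by SKU, and the sampling window here is arbitrary, not the reporter's actual
measurement setup):</p>

```shell
# Hypothetical measurement sketch, not the reporter's tooling:
# sample aggregate iowait ticks from /proc/stat and cumulative energy
# from the RAPL powercap sysfs while a test-case runs, then diff.

read_iowait() {
  # field 5 after "cpu" in the aggregate line of /proc/stat is iowait ticks
  awk '/^cpu /{print $6}' /proc/stat
}

read_energy_uj() {
  # intel-rapl:0 is typically the package domain; subdomains (core/uncore)
  # appear as intel-rapl:0:N and vary by SKU. Fall back to 0 if absent.
  cat /sys/class/powercap/intel-rapl:0/energy_uj 2>/dev/null || echo 0
}

iow0=$(read_iowait); e0=$(read_energy_uj)
sleep 2   # sampling window; run the GfxBench test-case during this interval
iow1=$(read_iowait); e1=$(read_energy_uj)

echo "iowait ticks delta: $((iow1 - iow0))"
echo "energy delta (uJ):  $((e1 - e0))"
```

<p>Comparing these deltas between the two drm-tip commits on the same machine
and workload is one way to confirm the "more IO bound, less uncore power"
observation.</p>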
</body>
</html>