Comment #13 on bug 106736: "With Vsync disabled, Weston almost halves *visible* frame update rate, compared to X desktop" (status: NEW)
https://bugs.freedesktop.org/show_bug.cgi?id=106736#c13
From: Eero Tamminen <eero.t.tamminen@intel.com>
(In reply to Pekka Paalanen from comment #11)
> (In reply to Eero Tamminen from comment #9)
>> * All (3D) benchmarks run with Vsync disabled, and often games are run with
>> Vsync disabled too.
>
> If one benchmarks the GPU, why does it matter what goes on the screen? You
> cannot update the screen any faster than its refresh rate anyway.
Some reasons:
* Customers think that jerky output from benchmarks looks bad (especially if
the benchmarks are their own)
* When the screen output doesn't visually correspond to what the benchmark is
reporting, people can start thinking that somebody is cheating
* For 3D driver benchmarking one should use offscreen tests, but tests
modeling real applications often have only an onscreen version. When the
compositor overhead is basically random (instead of being tied to 60 Hz or to
the benchmark FPS), you cannot calculate/factor it out (one workaround is
timing only the GPU work; see the sketch after this list)
* If the compositor on some platform skips screen updates, that gives an
unfair performance advantage (number) to that platform
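
To illustrate that workaround: one way to measure GPU cost independently of
the compositor is a GL_TIME_ELAPSED timer query. This is only a hypothetical
sketch, assuming libepoxy and a current GL 3.3+ context; draw_frame() is a
stand-in for the benchmark's real rendering:

#include <stdio.h>
#include <epoxy/gl.h>   /* assumption: libepoxy provides the GL entry points */

/* Times one frame of GPU work; the result excludes compositor and
 * presentation overhead. */
void time_one_frame(void (*draw_frame)(void))
{
    GLuint query;
    glGenQueries(1, &query);

    glBeginQuery(GL_TIME_ELAPSED, query);
    draw_frame();
    glEndQuery(GL_TIME_ELAPSED);

    GLuint64 gpu_ns = 0;
    /* Blocks until the queried commands have finished on the GPU. */
    glGetQueryObjectui64v(query, GL_QUERY_RESULT, &gpu_ns);
    printf("GPU time: %.3f ms\n", gpu_ns / 1e6);

    glDeleteQueries(1, &query);
}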
<span class="quote">> If you are benchmarking the full gfx stack all the way to screen, then run
> the application with vsync and if it consistently maintains the full refresh
> rate, measure the GPU load to get a performance number.</span >
While GPU load can give you an order-of-magnitude performance figure, and for
some really trivial 3D operations [1] it could even be used to estimate
performance, it's not accurate enough to be really useful for real 3D
applications.
[1] such as compositing, which is 100% memory-bandwidth bound and for which
one can actually calculate the expected performance.
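
For illustration, a back-of-the-envelope version of that calculation (the
resolution and bandwidth numbers here are made up):

#include <stdio.h>

int main(void)
{
    /* Made-up example numbers: 1080p ARGB8888, 25.6 GB/s memory bandwidth. */
    const double pixels = 1920.0 * 1080.0;
    const double bytes_per_pixel = 4.0;
    const double bandwidth = 25.6e9;   /* bytes per second */
    /* An alpha-blended composite touches memory ~3x per pixel:
     * read source, read destination, write destination. */
    const double bytes_per_frame = pixels * bytes_per_pixel * 3.0;

    printf("Theoretical upper bound: ~%.0f composited frames/s\n",
           bandwidth / bytes_per_frame);
    return 0;
}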
<span class="quote">> I have a feeling that games are just poor. It's physically impossible to
> update the screen faster than the screen refreshes anyway. If screen update
> rate affects input latency for the game state, the game engine is badly
> designed. Traditionally it's been easy to just pump up the fps to work
> around all problems (and sell tons of expensive hardware) rather than doing
> something sensible like drawing at the right time instead of continuously
> hammering.</span >
>
<span class="quote">> Games need to be designed to have vsync on.</span >
The problem is that games have variable frame rates and people like to crank
settings as high as possible. Many people also have monitors faster than
60 Hz. The solutions for avoiding getting only half the FPS with Vsync are
either using FreeSync/G-Sync with a monitor capable of it, or disabling Vsync.
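
For reference, on the application side disabling Vsync usually comes down to
a swap-interval request. A minimal EGL sketch (dpy is assumed to be an
initialized EGLDisplay with a current surface and context):

#include <stdio.h>
#include <EGL/egl.h>

/* Request unthrottled swaps, i.e. "Vsync off".  On X this may tear;
 * under a Wayland compositor the client renders unthrottled while the
 * compositor still latches complete frames at its own pace. */
void disable_vsync(EGLDisplay dpy)
{
    if (!eglSwapInterval(dpy, 0))
        fprintf(stderr, "eglSwapInterval(0) failed: 0x%x\n",
                (unsigned)eglGetError());
}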
Btw, how well do Xwayland & Wayland compositors work with FreeSync (and
G-Sync)? (FreeSync has been part of the DP & HDMI specs for quite a while
now.)
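
I don't know the compositor side, but at the KMS level the kernel already
exposes the pieces: connectors advertise a "vrr_capable" property and CRTCs
accept a "VRR_ENABLED" one. A hypothetical libdrm sketch of the capability
check (fd is an open DRM device node, connector_id is assumed valid):

#include <stdbool.h>
#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

bool connector_is_vrr_capable(int fd, uint32_t connector_id)
{
    drmModeObjectProperties *props =
        drmModeObjectGetProperties(fd, connector_id,
                                   DRM_MODE_OBJECT_CONNECTOR);
    if (!props)
        return false;

    bool capable = false;
    for (uint32_t i = 0; i < props->count_props; i++) {
        drmModePropertyRes *prop = drmModeGetProperty(fd, props->props[i]);
        if (prop && strcmp(prop->name, "vrr_capable") == 0)
            capable = props->prop_values[i] != 0;
        drmModeFreeProperty(prop);
    }
    drmModeFreeProperties(props);
    return capable;
}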
<span class="quote">> However, Weston will never tear on presenting to screen, and Weston will
> never sample from a client buffer that has not finished drawing (implicit
> fencing).</span >
Windows (at least with Intel drivers) lets rendering tear if the application
is both fullscreen and has disabled Vsync. I think this is reasonable, as the
compositor shouldn't be involved then. Disabling Vsync kind of means that the
application and/or user doesn't care about tearing.
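
The KMS building block for that exists too; a hedged sketch of a tearing
flip (fd, crtc_id and fb_id are assumed to come from an already-configured
KMS setup):

#include <stdio.h>
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* DRM_MODE_PAGE_FLIP_ASYNC asks the kernel to flip immediately instead
 * of waiting for vblank, i.e. it may tear; not all drivers support it. */
int queue_tearing_flip(int fd, uint32_t crtc_id, uint32_t fb_id)
{
    int ret = drmModePageFlip(fd, crtc_id, fb_id,
                              DRM_MODE_PAGE_FLIP_ASYNC, NULL);
    if (ret)
        fprintf(stderr, "tearing flip failed or unsupported: %d\n", ret);
    return ret;
}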