<div dir="ltr">On 6 November 2013 11:24, Erik Faye-Lund <span dir="ltr"><<a href="mailto:kusmabite@gmail.com" target="_blank">kusmabite@gmail.com</a>></span> wrote:<br><div class="gmail_extra"><div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class=""><div class="h5">On Wed, Nov 6, 2013 at 7:49 PM, Paul Berry <<a href="mailto:stereotype441@gmail.com">stereotype441@gmail.com</a>> wrote:<br>
> On 6 November 2013 09:35, Erik Faye-Lund <<a href="mailto:kusmabite@gmail.com">kusmabite@gmail.com</a>> wrote:<br>
>><br>
>> Sorry for the late reply.<br>
>><br>
>> On Fri, Nov 1, 2013 at 4:14 PM, Tapani <<a href="mailto:tapani.palli@intel.com">tapani.palli@intel.com</a>> wrote:<br>
>> > On 11/01/2013 03:31 PM, Erik Faye-Lund wrote:<br>
>> >><br>
>> >><br>
>> >> Won't using the git-sha1 as a compatibility-criteria cause issues for<br>
>> >> developers with local changes? I'm not so worried about this for<br>
>> >> OES_get_program_binary itself, but once the shader-cache is in place<br>
>> >> it sounds like a potential source of difficult to track down<br>
>> >> misbehavior...<br>
>> >>>>><br>
>> >>>>><br>
>> >>>>> I agree it might be too aggressive a criterion, but it is hard to<br>
>> >>>>> come up with something better that is as simple.<br>
>> >>>><br>
>> >>>> That's not my objection. My objection is that this might give<br>
>> >>>> headaches for people with local modifications to the glsl-compiler.<br>
>> >>>> Local modifications do not affect the git-sha1.<br>
>> >>><br>
>> >>><br>
>> >>> For the automatic shader cache this headache could be helped a bit<br>
>> >>> with an environment variable or drirc setting that can be used<br>
>> >>> during development. On the other hand, an automatic cache must work<br>
>> >>> in a transparent way, so it should always be able to recover when it<br>
>> >>> fails; one should only see it as 'slower than usual' (since<br>
>> >>> recompilation/relinking is required) sort of behaviour.<br>
>> >>> The WIP of the automatic cache I sent some time earlier also marked<br>
>> >>> (renamed) these 'problematic' cached shaders so that they can be<br>
>> >>> detected on further runs and the cache can ignore them.<br>
>> >>><br>
>> >>> I agree that it might become problematic; on the other hand, it is<br>
>> >>> also easy to just wipe ~/.cache/mesa and disable the cache.<br>
>> >><br>
>> >> That won't help for programs that maintain their own (explicit)<br>
>> >> shader-cache, which was the intention of introducing binary shaders to<br>
>> >> OpenGL ES in the first place.<br>
>> ><br>
>> ><br>
>> > Ok, we are talking about the extension, I thought you referred to the<br>
>> > automatic caching. For the extension to work, we need at least more<br>
>> > Piglit tests to ensure that it does not break.<br>
>><br>
>> I was actually talking about both. But for the caching, it's<br>
>> probably more forgivable, as developers probably know they changed the<br>
>> compiler and can step around it by flushing the cache. Especially if<br>
>> the build time gets included, like Paul suggested.<br>
>><br>
>> > Of course every time you go and touch the<br>
>> > code, some functionality may break, be it this extension or something<br>
>> > else.<br>
>><br>
>> That's completely irrelevant here. The rejection mechanism isn't<br>
>> intended to catch bugs, but to detect intentional format changes. So<br>
>> let's not confuse the issue.<br>
>><br>
>> > I'm not sure if Chromium, Qt or other users expect the glProgramBinary<br>
>> > call to always succeed; hopefully not.<br>
>><br>
>> If they do, they're buggy. But there is a chance of that, yes. What<br>
>> I'm actually talking about is that they might get *told* that everything<br>
>> went well, and still get a broken shader. Or even a crash. Of course,<br>
>> this only applies when mesa is built with local modifications, but<br>
>> that happens a *lot* while debugging application issues. Perhaps bugs<br>
>> start to disappear, because applications take another non-buggy<br>
>> code-path? It'll probably only affect developers, and not happen in<br>
>> the wild. But I still don't think that's a good excuse.<br>
>><br>
>> However, by a strict reading of the spec, I don't even think we're<br>
>> allowed to break the shaders for just any reason. The wording of the<br>
>> spec is "An implementation may reject a program binary if it<br>
>> determines the program binary was produced by an incompatible or<br>
>> outdated version of the compiler". The way I read that, changes that<br>
>> don't modify the compiler aren't really allowed to reject previous<br>
>> shaders. While diverging from the spec on this *might* not have many<br>
>> real-world implications, at the very best your solution goes against<br>
>> the intention of this rejection-mechanism.<br>
><br>
><br>
> We have to regard any bug fix, feature improvement, or performance<br>
> optimization that's applied to the parser, ast, optimization passes, or<br>
> lowering passes as constituting a change to the compiler.<br>
<br>
</div></div>Not quite. Bug-fixes are fine; if the compiled shader was buggy,<br>
you'll have a buggy binary shader, and it should be loaded with those<br>
bugs intact. Feature improvements also do not matter: surely, a<br>
pre-compiled shader won't use newer features. The same goes for<br>
performance optimization; a program that pre-compiled a shader from an<br>
old compiler surely won't expect to get a performance boost from new<br>
optimization passes. If they do, they'll use source shaders instead, and<br>
pay the price of re-compiling. This is *exactly* what binary shaders<br>
are for.<br>
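A minimal sketch of the alternative Erik is arguing for (all names and values here are hypothetical, not actual Mesa code): gate rejection on an explicitly bumped format version, which only changes when a developer intentionally changes the serialized layout, rather than on the git-sha1, which changes on every commit whether relevant or not and misses local modifications entirely.<br><br>
```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>
#include <stdbool.h>

/* Bumped by hand whenever the on-disk binary layout changes
 * incompatibly -- an intentional format change, not just a rebuild. */
#define PROGRAM_BINARY_FORMAT_VERSION 3u

struct binary_header {
   uint32_t magic;            /* identifies the blob as ours */
   uint32_t format_version;   /* compared against the define above */
};

/* Accept a binary only if it carries our magic and the current
 * format version; anything else forces a fallback to source shaders. */
static bool
program_binary_is_compatible(const void *blob, size_t size)
{
   struct binary_header hdr;

   if (size < sizeof(hdr))
      return false;
   memcpy(&hdr, blob, sizeof(hdr));
   return hdr.magic == 0x4d455341u &&   /* "MESA" */
          hdr.format_version == PROGRAM_BINARY_FORMAT_VERSION;
}
```
With this scheme a bug fix or optimization pass that leaves the layout alone does not invalidate anything, which matches the spec wording about "an incompatible or outdated version of the compiler".<br>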
<div class="im"><br>
> Otherwise when<br>
> the user upgrades Mesa, any apps that use get_program_binary will fail to<br>
> get the advantage of the upgraded code.<br>
<br>
</div>Yes, as I said, that's a part of the contract.<br>
<div class="im"><br>
> As a practical matter, nearly every<br>
> Mesa release makes changes to one of those components, even point releases.<br>
> So allowing old program binaries in the event that the compiler hasn't<br>
> changed really wouldn't produce any end user benefit.<br>
<br>
</div>Even if *some* modification happened to the compiler, that's not a<br>
good reason to reject the binary shader IMO. Only *relevant* changes.<br>
And it's not unusual for distro maintainers to cherry-pick patches<br>
from time to time. And then there's bleeding-edge users.<br>
<div class="im"><br>
> And it would risk big<br>
> developer headaches if we ever make a change that affects compilation<br>
> without realizing it. IMHO the risks outweigh the benefits.<br>
<br>
</div>As someone who has gone down this path before: it's not as bad as you<br>
make it out to be. If you attack the problem from the right angle, that<br>
<div class="im">is.<br>
<br>
> And as far as whether we're strictly conforming to the spec, the spec is<br>
> vague here since it doesn't define what constitutes the "version of the<br>
> compiler" vs the "version of the driver". I think it's an entirely<br>
> defensible interpretation to consider the compiler version and the driver<br>
> version to be the same thing; so whenever the driver is upgraded, the client<br>
> application needs to be prepared for the possibility that the shader binary<br>
> might be rejected.<br>
<br>
</div>Now you're making terminology up. The spec doesn't talk about the<br>
driver version at all. And it's not unclear at all from the OpenGL<br>
spec what the task of the compiler is. So no, I don't agree that it's<br>
a defensible position in that regard. I'm not saying it isn't<br>
defensible in *any* regard, but that's not it.<br>
<div class="im"><br>
>> Storing halfway compiled code rather than fully compiled code seems<br>
>> quite sketchy to me, as it introduces the need for compatibility at a<br>
>> point where flexibility is needed. The result is a program-binary that<br>
>> 1) must still perform register allocation and scheduling when loading,<br>
>> so the time-saving is limited, and 2) fails to reload even for trivial<br>
>> bugfixes in unrelated code. To me, it sounds like it would be better<br>
>> to simply not have the extension exposed than this middle-ground. I'd<br>
>> hate to see applications having to do vendor detection for mesa to<br>
>> disable caching that turned out to be more of a burden than a gain.<br>
><br>
><br>
> A key piece of discussion that I feel like we're missing is the fact that on<br>
> i965 at least, the shader sometimes needs to be recompiled as a result of<br>
> changes to GL state. The get_program_binary spec requires these state-based<br>
> recompiles to work properly, even if the GL state machine is placed in a<br>
> state that hadn't been seen at the time the program binary was saved.<br>
<br>
</div>What kind of state are we talking about? I thought all GPU vendors had<br>
learned from NV's GeForce FX fiasco to stay away from non-trivially<br>
patchable state-dependent shaders...<br></blockquote><div><br></div><div>The GL state that triggers recompiles on i965 can be found in the following structs:<br><br>- brw_wm_prog_key (in src/mesa/drivers/dri/i965/brw_wm.h) for fragment shaders<br>
</div><div>- brw_vs_prog_key (in src/mesa/drivers/dri/i965/brw_vs.h) for vertex shaders<br></div><div>- brw_gs_prog_key (in src/mesa/drivers/dri/i965/brw_vec4_gs_visitor.h) for geometry shaders<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div class="im"><br>
> Therefore, the program binary format needs to include the IR, since that's<br>
> what we use as a starting point when we do state-based recompiles.<br>
<br>
</div>I think that's more an indication that i965 is a bad fit for such an<br>
extension, rather than that the IR must be included. In fact, when we<br>
defined the extension, discouraging state-dependent shaders was<br>
explicitly one of the goals.<br>
<br>
Fundamentally redefining what the extension does for *all*<br>
mesa-drivers sounds to me more like intel policy than what's best for<br>
everybody. If the binary format was a driver-specific blob container<br>
like I suggested, intel could *still* serialize the IR into it. But<br>
everyone else won't have to suffer from it.<br>
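The "driver-specific blob container" idea can be sketched roughly like this (an illustrative design under my own assumptions; every name here is hypothetical): core code only frames an opaque payload and tags it with the identity of the driver that produced it, while what goes inside -- IR, machine code, or both -- is entirely the driver's business.<br><br>
```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

struct blob_container {
   char     driver_id[16];   /* e.g. "i965": driver that wrote the blob */
   uint32_t payload_size;
   uint8_t  payload[];       /* opaque: core code never parses this */
};

/* Wrap a driver's private payload in the shared container format. */
static struct blob_container *
blob_container_create(const char *driver_id,
                      const void *payload, uint32_t payload_size)
{
   struct blob_container *c = calloc(1, sizeof(*c) + payload_size);

   if (!c)
      return NULL;
   strncpy(c->driver_id, driver_id, sizeof(c->driver_id) - 1);
   c->payload_size = payload_size;
   memcpy(c->payload, payload, payload_size);
   return c;
}

/* A driver accepts only containers it wrote itself; anything else is
 * rejected up front and the app falls back to source shaders. */
static const uint8_t *
blob_container_open(const struct blob_container *c, const char *driver_id)
{
   if (strncmp(c->driver_id, driver_id, sizeof(c->driver_id)) != 0)
      return NULL;
   return c->payload;
}
```
Under this split, i965 could still serialize core-mesa IR into its payload for state-based recompiles, while a driver that stores only machine code pays no cost for that choice.<br>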
<div class="im"><br>
> Given<br>
> that the IR format is defined in core mesa, it makes sense that the<br>
> serialization code for it should go in core mesa as well.<br>
><br>
> I agree with you that in order to implement the full intent of the<br>
> extension, this mechanism needs to also store the fully-compiled machine<br>
> code for the shader. I brought this up in my comments on patch 5/6, and it<br>
> sounds like Tapani is on board with doing that, but he has some more<br>
> research to do before he can implement that part of the feature.<br>
><br>
> If in the future we have a back end that doesn't need to do state-based<br>
> recompiles (or that can do state-based recompiles without needing access to<br>
> the IR), then we can always add a mechanism that allows the back end to opt<br>
> out of storing the IR in the shader binary. But since Tapani is targeting<br>
> i965, which needs the IR, I think it's reasonable to get that part of things<br>
> working first.<br>
><br>
<br>
</div>I guess I don't understand why these need to be two different things,<br>
rather than an intel-specific implementation of some generic<br>
mechanism.<br>
<br>
Anyway, Tapani asked me to elaborate why I thought it was a bad idea<br>
in the first place, and I did. Hopefully, I was clear enough.<br>
</blockquote></div><br></div></div>