<div dir="ltr"><br><div class="gmail_extra"><br><br><div class="gmail_quote">On Fri, Aug 15, 2014 at 12:18 AM, Pekka Paalanen <span dir="ltr"><<a href="mailto:ppaalanen@gmail.com" target="_blank">ppaalanen@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="">On Thu, 14 Aug 2014 11:31:27 -0700<br>
Jason Ekstrand <<a href="mailto:jason@jlekstrand.net">jason@jlekstrand.net</a>> wrote:<br>
<br>
> On Tue, Aug 12, 2014 at 1:34 PM, Jasper St. Pierre <<a href="mailto:jstpierre@mecheye.net">jstpierre@mecheye.net</a>><br>
> wrote:<br>
><br>
> > In the xdg-shell thread recently, Pekka had a lot of concerns of the kind<br>
> > "what happens when the client makes an unexpected or illegal request.<br>
> > Should it be an error?"<br>
> ><br>
><br>
> Yes and no. More on that in a line or two.<br>
><br>
><br>
> > I have an answer for this: Any behavior not defined is undefined behavior.<br>
> > If it's not explicitly mentioned in the protocol, it's undefined behavior.<br>
> ><br>
> > Undefined behavior means that anything can happen. The compositor can send<br>
> > you a fatal error. It can mess up your objects in undetectable ways. The<br>
> > surface contents you present might now be garbage. Obviously, compositors<br>
> > should try *very* hard to catch all edge cases and send protocol errors,<br>
> but it might be difficult or impractical to catch all cases.<br>
> ><br>
><br>
> I'm a little hesitant to agree with the above fully. I would rather reserve the term<br>
> "undefined behavior" to mean things that may not be what the client intends<br>
> but which are not fatal. That said, most things shouldn't be this kind of<br>
> undefined behavior; most client errors should be the kind that potentially<br>
> lead to death.<br>
<br>
</div>I'm fully with Jason here. I do want a category of behaviours that are<br>
not fatal, but also quite likely not to do what you expect or what is<br>
good for the user looking at the screen or using the program. The term<br>
"undefined behaviour" seems to fit well, no? And we have already used<br>
the term in the core spec, IIRC.<br>
<br>
One example of it is destroying wl_buffer while it is reserved by the<br>
compositor. It's not a fatal error, but can cause a glitch on screen.<br>
It's not too different from a client rendering badly, or rendering into<br>
a wl_buffer that is reserved by the compositor.<br>
<div><div class="h5"><br>
> I think the better thing to do would be to clarify the protocol to say how<br>
> it should be used. Any time a client does something that's outside of what<br>
> the protocol says, it's liable to get killed. We should be more vigilant<br>
> when writing protocols to specify how it should properly be used so that we<br>
> aren't left with strange ambiguity. There are a couple of reasons for this.<br>
><br>
> 1) It will make better protocols. If protocols are written as "this is<br>
> what you should do" it forces the writer to think about all of the valid<br>
> ways to use it. Most likely, those are the only ways of using it that will<br>
> be properly implemented in the compositor anyway, so clients doing anything<br>
> else are liable to run into problems.<br>
><br>
> 2) It's better for backwards-compatibility. It's easier to define a<br>
> certain behaviour in a later version if the result in earlier versions was<br>
> "don't do this, or I'll kill your client". If there were strange,<br>
> poorly-defined, behaviors before, we have to keep those working in future<br>
> versions.<br>
><br>
> 3) If there is something that should result in non-fatal undefined<br>
> behaviour, we want to identify and document it. This way we make sure that<br>
> compositors do something sensible and, at the very least, don't crash.<br>
><br>
> That said, I think it's better to clean up the protocol docs than to try to<br>
> make some blanket statement about undefined behaviour. Anywhere weston<br>
> currently kills the client without it being particularly well-documented, I<br>
> think we can feel free to put that in the "don't do this" category. But<br>
> I'd rather see us have those rules written down where they are relevant<br>
> than have another nebulous statement to help us interpret the other<br>
> nebulous statements.<br>
<br>
</div></div>Yeah, agreed.<br>
<br>
However, the protocol specifications are not only instructions on how to<br>
use it (from the client's perspective); there should also be instructions on<br>
how a compositor should behave when it matters. That is why I would<br>
encourage explicitly specifying that a protocol error is raised if a<br>
client does not do what is expected or does what is explicitly defined<br>
as an error (e.g. specifying a negative width when there is no sense<br>
in that).<br>
<br>
One reason is that error codes are interface-specific, and it is much<br>
better to raise a protocol error on the object involved, than just a<br>
wl_display message "You fail!", because we have an async protocol and<br>
it can be hard to track the error back to the request that caused it.<br>
<br>
Another reason is that testing apps between compositors would be a<br>
little more consistent. I don't expect everyone to test on every<br>
compositor out there, or even on Weston, before<br>
releasing an app.<br></blockquote><div><br></div><div>Yeah, it would be good if compositors tried to kill apps for bad behavior. We have to have those error-check paths anyway to keep the compositor from crashing. That said, I don't know that we always need to be more specific than something like XDG_SHELL_USAGE_ERROR. Specific things should have specific errors, but enumerating all possible errors with their own error enum is a pain. The big thing is that we shouldn't abuse the wl_display error enum like we have in the past; we should use the error system properly.<br>
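For what it's worth, the protocol XML already supports per-interface error enums, so a specific code and a catch-all can coexist. A hypothetical sketch (the entry names and values here are illustrative, not the real xdg-shell definition) might look like:

```xml
<interface name="xdg_shell" version="1">
  <!-- Hypothetical error enum; entries are illustrative only. -->
  <enum name="error">
    <entry name="role" value="0"
           summary="the wl_surface already has another role"/>
    <entry name="usage_error" value="1"
           summary="catch-all for misuse without a more specific code"/>
  </enum>
</interface>
```

A compositor would then post such an error on the object involved (e.g. via wl_resource_post_error() in libwayland-server), which ties the error to the right interface and object id instead of a generic wl_display "You fail!" message.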
</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class=""><br>
> > As a quick example, we have been discussing "surface roles" in the past<br>
> > week with xdg_surface. Unfortunately, Weston doesn't properly mark cursor<br>
> > or DND surfaces as having roles, and there's no easy way to check for it in<br>
> > the codebase.<br>
> ><br>
> > From my reading of the code, it seems what would happen is that the cursor<br>
> > would disappear as the desktop-shell code changes the surface's position,<br>
> > and on the next mouse move, the cursor would reappear as a wl_pointer.enter<br>
> > event is sent out over the new surface.<br>
> ><br>
><br>
> Deleting the cursor surface without wl_pointer.set_cursor(null) is<br>
> something that should probably count as "undefined behavior" in the sense<br>
> that it would be strange but not fatal. For instance, the compositor could<br>
> just hang on to the last buffer and keep using it. There are all sorts of<br>
> ways to gracefully handle that. However, the client shouldn't expect<br>
> anything in particular. It could, for instance, result in no cursor.<br>
><br>
><br>
> > Obviously, no valid program should ever try to reuse a cursor surface as<br>
> > an xdg_surface.<br>
> ><br>
><br>
> Yeah, feel free to kill the client for that. But we should add a line to<br>
> wl_pointer that says something like "Once a surface has been used in<br>
> set_cursor, it is considered a cursor surface and should never be used for<br>
> anything other than set_cursor".<br>
><br>
> Make sense?<br>
<br>
</div>Yes, we should definitely add that sentence, because right now there is<br>
no good point in the protocol that would actually take away the cursor<br>
role from a surface. I think we should do this for all cases which don't<br>
have an explicit, clear way for the client to remove the role.<br>
<br>
(Hmm, actually there is wl_pointer.leave event...)<br>
<br>
But that still leaves open the question: if a role has been removed, is<br>
it allowed to set a different role?<br>
<br>
As I've said, the (Weston) code is cleaner and more symmetric when you<br>
write it as if you could switch roles, but from the protocol point of<br>
view it is not useful.<br>
<br>
Therefore I think I am willing to agree that changing a wl_surface role<br>
is an error. We can just add a boolean to weston_surface for "has role<br>
ever been set", and check it as needed. Or maybe use<br>
weston_surface::configure != NULL for that, provided it does not lead to<br>
strange cases when the client is still committing on that surface after<br>
the role is gone.<br>
<br>
The reason changing roles can never be useful is that all role-specific state<br>
is lost when the role is lost. That leaves only the wl_surface state,<br>
and that state is dispensable: it is easy for a client to recreate the<br>
state on a new wl_surface without a role. Right?<br>
<br>
There is a slight chance that making this change retroactively will<br>
cause some apps to fail. The bug should be in toolkits in that case. Is<br>
this a serious concern?<br></blockquote><div><br></div><div>I think it's probably ok, but it's possible there are surface-recycling clients out there. I just looked at Weston and we don't actually prevent surface re-use. What we prevent is a surface being a cursor surface and another type of surface at the same time. I'm kind of a fan of changing it and seeing if anything breaks. If we have to, we can bump the wl_pointer version when we make the change.<br>
</div><div>--Jason<br></div></div></div></div>