[PATCH v1 weston 07/11] tests: Add a fadein test
derekf at osg.samsung.com
Wed Nov 26 08:49:08 PST 2014
On 26/11/14 02:43 AM, Pekka Paalanen wrote:
> On Tue, 25 Nov 2014 10:15:04 -0600
> Derek Foreman <derekf at osg.samsung.com> wrote:
>> On 25/11/14 04:11 AM, Pekka Paalanen wrote:
>>> On Mon, 24 Nov 2014 18:48:51 -0800
>>> Bryce Harrington <bryce at osg.samsung.com> wrote:
>>>> On Mon, Nov 24, 2014 at 01:19:46PM +0200, Pekka Paalanen wrote:
>>>>> On Wed, 19 Nov 2014 15:06:22 -0800
>>>>> Bryce Harrington <bryce at osg.samsung.com> wrote:
>>>>>> This also serves as a proof of concept of the screen capture
>>>>>> functionality and as a demo for snapshot-based rendering verification.
>>>>>> Signed-off-by: Bryce Harrington <bryce at osg.samsung.com>
>>>>>> Makefile.am | 7 +++++-
>>>>>> tests/fadein-test.c | 64 +++++++++++++++++++++++++++++++++++++++++++++++++++++
>>>>>> 2 files changed, 70 insertions(+), 1 deletion(-)
>>> Client driven repaint would probably need to be a new global in the
>>> protocol, another test extension. Exposing it would be conditional to
>>> both the test module, and the current backend supporting it.
>>> It is difficult to call module functions from another module, so you
>>> likely need some function pointers in struct weston_compositor. One
>>> module sets them, and if the other module finds them non-NULL, it knows
>>> the other one supports client driven repaint.
>>> But, because for starters it would be supported only by the headless
>>> backend, which is not used for anything but testing, it's likely fine
>>> to make the first implementation in the headless backend and always
>>> expose it directly from there. No need to hook it up in the test module
>>> for now.
>>> Once we see how it works for headless, we can plan how to let other
>>> backends support it, if needed.
>> I've been thinking about this a bit lately, here's some random musing. :)
>> We could wrap any clock_gettime usage within weston (if we only care
>> about the headless compositor that's one call) - if we're using a test
>> clock it would return the time as controlled by the client, otherwise
>> it'd just call clock_gettime. I need to look more into how the timerfd
>> stuff works, but I think the fds will need to be manually messed with on
>> clock change.
> Yes, controlling the current time is yet another request to add, it
> shouldn't come directly from the repaint completion.
> There are several categories of clock_gettime() calls:
> All calls like
> clock_gettime(output->compositor->presentation_clock, &ts);
> can be wrapped in a weston_compositor_read_presentation_clock() or similar.
> Then there are the crash-too-early/often checks in desktop-shell/ that
> I don't think we need to override.
Mmhm, good point, I don't think we actually need an automated test for that.
> And... that's all. Any other clock_gettime() calls in the compositor
> can likely be regarded as bugs.
> Timers I imagine we would not care about. Animations are not driven
> by timers. There are things like idle timeout, which for testing
> probably just want to be disabled, unless testing the idle timeout itself.
The only timer I was concerned about was the one in the headless backend
frame completion stuff - which I think will have to be replaced by
something that isn't actually a timer when we're controlling the clock.
So yeah, I think you're right and timers aren't going to need any attention.
>> The client would advance the clock by a specified number of
>> (nano?)seconds (monotonically, always increasing) via protocol.
> Let's make it absolutely clear what happens: always send absolute time.
> We even have some rare cases where weston actually tests for "monotonic"
> time going backwards to detect broken things, IIRC.
Ok, I'll do it that way. :)
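For sending absolute time over the wire, one option (an assumption here, simply following the convention the Presentation extension uses for its absolute timestamps) is to split the 64-bit seconds value into two 32-bit words:

```c
#include <stdint.h>
#include <time.h>

/* Hypothetical wire encoding for a "set absolute time" test request:
 * 64-bit seconds split into two 32-bit words, plus nanoseconds, in the
 * style of the Presentation extension's timestamps. */
struct test_timestamp {
	uint32_t tv_sec_hi;
	uint32_t tv_sec_lo;
	uint32_t tv_nsec;
};

static struct test_timestamp
timespec_to_proto(const struct timespec *ts)
{
	struct test_timestamp t = {
		.tv_sec_hi = (uint32_t)(((uint64_t)ts->tv_sec) >> 32),
		.tv_sec_lo = (uint32_t)(((uint64_t)ts->tv_sec) & 0xffffffffu),
		.tv_nsec = (uint32_t)ts->tv_nsec,
	};
	return t;
}
```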
>> headless renderer would need changes to fake a refresh rate - I don't
>> think we'd want every clock advance to trigger a repaint. I think we're
>> better off with client driving the clock, and repaint being indirectly
>> controlled. Right now it just waits 16ms after a repaint then repaints again.
> I'd suggest driving both independently from the test client. That way
> we have the most control, and the compositor side is simpler.
> It's up to the test to drive it right.
> But, when we are not using client driven clock and repaint, then the
> headless backend would indeed benefit from a timer to fake a refresh rate.
Right now it does have a 16ms timer to fake a refresh rate - do you see
any reason this needs to be configurable or more complicated than it is now?
For now I'm inclined to leave the un-client-driven repaint loop driven
by the simple 16ms timer.
>> In any event we do need to control the clock (ie: not just gate repaint)
>> to get deterministic snapshots of animations/presentation - if that's
>> really our end goal.
>> Catching the server fade-in would be problematic - the clock frobbing
>> would have to be turned on at some point, I'm not sure we want it to
>> default to on even for the headless compositor (I guess this could be a
>> command line parameter).
> Indeed, or any other way we configure weston.
>> I've wondered a bit about sharing this fake clock with clients -
>> wl_clock_gettime() or such, the first call would query the presentation
>> clock id, on every subsequent call it'd just call clock_gettime() on
>> that id if it's a normal clock. If it's the test clock it would query
>> the time from the server.
>> Is this too invasive? There's no weston client library, so I'm not
>> really sure exactly where this would have to be implemented. I think
>> jamming this into libwayland-client might be frowned upon.
>> Similar effects could be achieved with LD_PRELOAD and replacing
>> clock_gettime when running a test, but that's pretty nasty.
> I can't yet see why we would need such a get-clock API. We also don't
> have a place to put it, either.
Ok, will stick to clock control for test clients only.
>> If we're going to do anything for normal clients I guess it should be
>> sorted before the presentation extension is finalized, since it relies
>> on clients calling clock_gettime() directly.
> What we are planning here should be strictly for test clients.
> If we ever have desktop components that use clocks for special things
> that we would need to frob, let's look at that then. For now, just
> ignore it.
> All we are planning here is basically for non-realtime testing, e.g.
> testing in build-bots and such. We have had some talk about also doing
> automated realtime testing on real hardware, like does Weston keep on
> reaching the full refresh rate when playing a 1080p video on, say, RPi
> if it once did. Relying on a particular real hardware platform allows
> testing much more, including timing sensitive operations (hw video
> decode combined with compositing) and driver/hw operations (does this
> video surface actually go to a hardware overlay). So far these are just
> plans, but this kind of thing would never be part of the non-realtime
> test suite of 'make check'.
Very cool, except we run into a situation where make check can fail if a
cron job wakes up in the background. :D
Testing if something actually ended up in a plane would probably need
additional test protocol to query such things?
Anyway, I'm getting ahead of myself, that's an interesting problem for
another day. :)
> Just making a point of limiting the scope here, so we don't try to
> bite too much at once.
>> (as a complete tangent, I'm a little confused that the presentation
>> protocol is part of weston - does this mean that video players will
>> target specific compositors and not generically "wayland"?)
> Protocols usually get in Weston first when they are experimental, so we
> can have some wider testing on them. Most, including Presentation and
> xdg_shell, will move into the Wayland repository when they are deemed
> stable enough. Sub-surfaces already moved. The move is the final point
> where we can theoretically make compatibility-breaking changes.
Thanks for the explanation.
>>>>> That should make the fade-in test reliable, and it will help with
>>>>> Presentation testing too.
>>>>> The client driven repaint likely needs to be enabled per-test. If a
>>>>> test attempts to enable it, but the compositor still does not advertise
>>>>> the support(!) in protocol, the test should skip. This way real
>>>>> hardware backends can skip these tests automatically, as they cannot
>>>>> (or do not yet) support client driven repaint.
>>>>> How would that sound?
>>>> Sure, although I suspect all tests that care about the rendered output
>>>> will need to set this. Unless they care only about the initial frame.
>>> To the contrary, I would expect most tests to only care about the final
>>> stabilised image when all animations have ended (preferably never even
>>> started).
>>> I think that should be perfectly implementable without client driven
>>> repaint. Client driven repaint is only needed for timing sensitive
>>> tests, like for animations and testing Presentation extension.
>> Right - if we're not specifically testing animations we should probably
>> just disable them.
> Yeah. So first, let's do things that do not need any clock games.
Maybe our first tests should be for all output transforms - gives a good
way to validate the big transforms refactor...
I think just drawing two rectangles of different colors in appropriate
places, then using your suggested "check for solid color" functions
to verify they've moved appropriately, would do...
(why two regions? because if we do just one we can get a false positive if
the whole screen ends up that color by mistake - bad zoom)
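A sketch of that check, assuming a simple 32-bit pixel framebuffer. check_solid_color() is the hypothetical helper mentioned above, and fill_rect() just stands in for the client's rendering; neither is an actual Weston test helper:

```c
#include <stdbool.h>
#include <stdint.h>

/* Verify that a rectangular region of a framebuffer is entirely one
 * color.  stride is in pixels, not bytes. */
static bool
check_solid_color(const uint32_t *pixels, int stride,
		  int x, int y, int w, int h, uint32_t expected)
{
	for (int row = y; row < y + h; row++)
		for (int col = x; col < x + w; col++)
			if (pixels[row * stride + col] != expected)
				return false;
	return true;
}

/* Stand-in for the test client's rendering: paint a solid rectangle. */
static void
fill_rect(uint32_t *pixels, int stride,
	  int x, int y, int w, int h, uint32_t color)
{
	for (int row = y; row < y + h; row++)
		for (int col = x; col < x + w; col++)
			pixels[row * stride + col] = color;
}
```

A transform test would fill two such regions, apply the output transform, and then call check_solid_color() at the transformed coordinates.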
>> If we are testing animations it's possible we can write test apps that
>> just display the frame we're interested in for a really long time
>> anyway, which may be less direct but with care could produce valid results.
> Except the animations we usually would want to test are
>> Some aspects of presentation can be tested by playing the same video
>> frame for a very long time as well...
> No, not really, I think. Presentation is all about timings.
>> Is client driven repaint too much work for too little gain? I'm
>> inclined to think it's worthwhile, but don't want to go too far towards
>> implementation if the general consensus is no. :)
>> It gets a bit easier if we don't care at all about messing with the
>> clocks in non-test clients.
> Yes, let's restrict the clock games solely to test clients.
>>> You post damage to a surface and wait for the
>>> presentation_feedback.presented event. Then do the screenshot. Or
>>> actually you shouldn't even need to wait, because completing a
>>> screenshot likely requires one more repaint cycle, so shooting itself
>>> already guarantees a repaint.
>> Hmm, I'm not sure why the screenshot would require an extra repaint cycle?
> That's the way it's implemented. If you read the current frame on
> screen, it may already be outdated if the scenegraph state has changed.
> Your protocol stream just before the record_screenshot request
> may include state changes. There might be also some technical reasons I
> forget, but IIRC we read out the image just before pushing it to the
> display hw.
Ah, ok. I see this now in the screenshooter code. That's not how my
current (direct to file) implementation operates.
I'll follow screenshooter's example for the next revision.