Extending subsequent tests with dogtail tests?
markus.mohrhard at googlemail.com
Sun Feb 24 16:34:33 UTC 2019
Let me add a few comments based on many years of implementing testing
frameworks for LibreOffice and, most recently, working on the UI testing.
On Sat, Feb 23, 2019 at 6:24 PM Samuel Thibault <sthibault at hypra.fr> wrote:
> Noel Grandin, on Sat, Feb 23, 2019 at 12:19:19 +0200, wrote:
> > However, the current python UI test stuff talks directly to the vcl/
> > layer. But between the vcl/ widgets and an actual accessibility user
> > lie at least two major chunks of code - the generic accessibility/
> > stuff, and the system specific bridges.
> Yes, that's my concern with testing only at the vcl layer.
> That said, we could just as well make the tests work at both layers: run
> them alongside the uitests, and thus very frequently, and also run them
> periodically through the accessibility layer on a system which has it.
If I understand your idea correctly, you want to test two things that are
both required for accessibility to actually work for users: on one hand the
accessibility APIs, and on the other hand correct focus handling. You most
likely want to test them independently if possible, as I think you'll
discover that combining them leads to hard to diagnose problems.
Additionally, the goal of any test framework has to be to minimize the
number of random test failures and to require as little adaptation to
unrelated code changes as possible.
One thing I noticed after quickly looking at your proposed tests is that
you use UI strings in your tests, which I think you want to avoid as much
as possible. There are several problems that make UI strings a bad anchor
during testing: they change often (often not even by developers), and they
are localized. Especially the second point means that your tests suddenly
only work in en-US, which surely limits the usefulness of the tests - even
more so if you plan to generate test cases automatically, as it means that
the tests can only be generated under an en-US locale.
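To illustrate the localization problem with a toy sketch (this is not the
actual LibreOffice UITest API - the dictionaries and lookup helpers here
are invented for the example): a test that locates a widget by its
localized label breaks as soon as the UI language changes, while a lookup
by stable ID keeps working.

```python
# Toy model of a dialog: each widget carries a stable id and a
# localized label. Ids and labels here are purely illustrative.

def find_by_label(widgets, label):
    """Fragile lookup: depends on the current UI language."""
    return next((w for w in widgets if w["label"] == label), None)

def find_by_id(widgets, widget_id):
    """Stable lookup: independent of localization."""
    return next((w for w in widgets if w["id"] == widget_id), None)

dialog_en = [{"id": "ok", "label": "OK"}, {"id": "cancel", "label": "Cancel"}]
dialog_de = [{"id": "ok", "label": "OK"}, {"id": "cancel", "label": "Abbrechen"}]

# Label-based lookup only works against the en-US strings:
assert find_by_label(dialog_en, "Cancel") is not None
assert find_by_label(dialog_de, "Cancel") is None   # breaks under a de locale

# ID-based lookup works in both locales:
assert find_by_id(dialog_en, "cancel") is not None
assert find_by_id(dialog_de, "cancel") is not None
```

The same reasoning is why the UI testing framework addresses elements
through IDs rather than visible strings.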
On a slightly related note, I think we already have quite a few tests for
the accessibility UNO layer, but as that layer is full of bugs many of
these tests are disabled. It might be a good idea to work on these tests
before trying to implement more complex tests that depend on the lower
layers working correctly.

The focus handling can be easily integrated into the existing UI testing
infrastructure and might benefit there from some of the concepts that
should make the tests more stable (deadlock detection, addressing UI
elements through IDs instead of UI-visible strings, mostly working async
dialog and action handling). I think it makes sense to check how much of
the accessibility part cannot be covered with a combination of UI testing
and accessibility UNO API testing. It might even be possible to get
accessibility event interception into the UI testing, to check that events
are generated correctly during actions. I would need to look into how our
accessibility handling works in the VCL layer to be sure that this
actually works.
Regarding the dependencies, we surely don't want to include the source code
itself in LibreOffice's external dependency handling. At least initially,
the easiest approach might be something like an
--enable-accessibility-tests flag which can be checked in configure.ac,
where we make sure all required dependencies are around. If we ever decide
that the tests should run by default, we can still switch the default value
and make it possible for people to override the setting in their autogen.
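A sketch of what such a flag could look like in configure.ac (the option
name, the $PYTHON variable, and the dogtail module check are assumptions
for illustration; AC_ARG_ENABLE, AS_HELP_STRING and AS_IF are standard
autoconf macros):

```
AC_ARG_ENABLE([accessibility-tests],
    AS_HELP_STRING([--enable-accessibility-tests],
        [Run the accessibility tests, which need extra dependencies.]),
    [enable_accessibility_tests="$enableval"],
    [enable_accessibility_tests=no])

AS_IF([test "$enable_accessibility_tests" = yes], [
    dnl Verify the required python module is importable with the python
    dnl we build against (the module name here is an assumption).
    AS_IF([$PYTHON -c "import dogtail" >/dev/null 2>&1], [],
        [AC_MSG_ERROR([--enable-accessibility-tests requires dogtail])])
])
```

Flipping the fourth argument of AC_ARG_ENABLE to yes would later be enough
to enable the tests by default while keeping the opt-out.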
On a slightly related note, please make sure that you are developing
against python 3. I did not check whether you are actually using any
python2-only features, but at least "#!/usr/bin/python" will get you a
python 2 interpreter on most systems.
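For example, an explicit python3 shebang plus a cheap guard at the top of
each script fails fast instead of producing confusing python 2 errors
later (a minimal sketch):

```python
#!/usr/bin/env python3
# Fail fast if the script is accidentally run with python 2.
import sys

if sys.version_info.major < 3:
    raise SystemExit("this script requires python 3")

print("running under python %d.%d" % sys.version_info[:2])
```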