DesktopAutomatedTests


Please check the status of this specification in Launchpad before editing it. If it is Approved, contact the Assignee or another knowledgeable person before making changes.

Summary

Leverage the ATK framework to implement a set of automated test routines for the desktop.

Release Note

This release contains the beginnings of an automated desktop test suite for GNOME/GTK software that supports accessibility. The test suite can be used by users as well as developers to verify that the installed system works.

Rationale

At the moment, pretty much all quality assurance related testing in Ubuntu is done manually. There are wiki pages with checklists of steps that testers follow to verify that at least basic functionality works. This works, but requires a lot of people to do the work, and it needs to be done many times. Obviously, an automated solution would be better.

Testing all of Ubuntu involves many things. This blueprint is about testing desktop functionality of an installed system, including the live CD. It does not cover testing of the installer.

Use Cases

  • Aaron wants to upload a new version of the libgnomepanzerwagen package (a library that is used by many GUI programs), and wants to verify before the upload that it doesn't break anything. He installs the new version on his development machine, and runs the automated desktop tests for the entire distribution to see that everything still works.
  • Bertha works on Ubuntu QA and wants to check that the daily ISO builds work. She installs each of them in a virtual machine, and runs the desktop test suite to verify the results.
  • Charlie supplies her customers with working systems based on Ubuntu, but also using some additional software she supplies herself. A new version of Ubuntu has been released, and she wants to make sure her customers can upgrade safely. She runs the upgrade on a test system, configured identically to those of her customers, and uses the test suite before and after to verify that nothing breaks. In addition, she has added some tests of her own, to verify that her own software also survives the upgrade.
  • Diogenes wants to help Ubuntu development and reports a bug he has found. To show how the bug can be reproduced, he starts a recording tool, does whatever is necessary to reproduce the bug, and then attaches the recording to the bug report. Developers can then use this to verify that the bug exists, and that their fix actually fixes the bug.

Assumptions

The test suite is run as a non-privileged user on an installed system. The live CD is one such environment, but any installed Ubuntu (GNOME) system should suffice. The system may run on real hardware or in an emulator.
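
A runner could enforce the non-privileged-user assumption up front. A trivial, hypothetical guard (the function name is invented for illustration):

```python
# Hypothetical guard: the suite assumes a non-privileged user, so a
# runner could refuse to start when the effective UID is root's.
import os

def check_unprivileged(euid=None):
    """Return True if the (given or current) effective UID is not root."""
    if euid is None:
        euid = os.geteuid()
    return euid != 0

assert check_unprivileged(1000)      # ordinary user: ok
assert not check_unprivileged(0)     # root: refuse to run
```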

Design

At least initially, the approach is to use the accessibility layer of the GTK/GNOME library stack (the ATK layer) to implement this. ATK allows programs to operate other programs more or less as if the user were operating them. Several tools use this interface: LDTP, dogtail, and Accerciser.
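
All of these tools expose a running application as a tree of accessible objects that a script searches by name and role and then acts on. A stdlib-only sketch of that pattern (the `Widget` class and the sample tree are stand-ins invented for illustration; a real script would use dogtail's or LDTP's own APIs against live applications):

```python
# Sketch of the accessibility-driven testing pattern: find a widget in
# an accessible-object tree by name/role, then invoke an action on it.
# Widget and the sample tree are mocks, not a real ATK binding.
from dataclasses import dataclass, field

@dataclass
class Widget:
    name: str
    role: str
    children: list = field(default_factory=list)
    clicked: bool = False

    def find(self, name, role=None):
        """Depth-first search for a descendant by name (and optional role)."""
        for child in self.children:
            if child.name == name and (role is None or child.role == role):
                return child
            found = child.find(name, role)
            if found:
                return found
        return None

    def click(self):
        self.clicked = True

# A toy application tree, roughly the shape an accessibility tool exposes.
app = Widget("gedit", "application", [
    Widget("toolbar", "tool bar", [Widget("Open", "push button")]),
])

button = app.find("Open", role="push button")
button.click()
assert button.clicked
```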

Using ATK currently means that only the GTK-based Ubuntu flavors can be tested. KDE4 is said to be getting an accessibility layer as well, but it is not yet known to us whether that will be compatible with ATK.

Using an emulator such as Faumachine (http://www.faumachine.org) to implement this would be possible, but means that it will not be possible to run the tests on real hardware. That may or may not be a significant limitation.

Eventually there needs to be a way to write test scripts by hand, and to run the scripts. Ideally there will also be a recording tool that users can use to record scripts to show bugs.

The test cases should concentrate on integration and functional aspects. Detailed application specific tests should come from upstream developers, as part of the code. Running them, however, should be possible.

Implementation

  • Experiment with the various ATK-based tools, Faumachine, and other approaches to find out which one works best.
  • Pick one approach and implement the desktop tests in ["Testing/Cases"] with it. Skip the tests that require network access.
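
Skipping the tests that require network access could be done with a simple marker that the runner checks. A hypothetical sketch (the decorator, runner, and test names are invented for illustration):

```python
# Hypothetical sketch: tag desktop tests that need network access so a
# runner can skip them, e.g. in an offline live-CD session.
NETWORK_AVAILABLE = False   # assumption: a real runner would probe this

def needs_network(func):
    func.needs_network = True
    return func

def run(tests, network=NETWORK_AVAILABLE):
    outcome = {}
    for test in tests:
        if getattr(test, "needs_network", False) and not network:
            outcome[test.__name__] = "SKIP"
        else:
            test()
            outcome[test.__name__] = "PASS"
    return outcome

@needs_network
def test_update_manager():  # placeholder for a network-using case
    pass

def test_text_editor():     # placeholder for an offline case
    pass

outcome = run([test_update_manager, test_text_editor])
```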

Test/Demo Plan

Outstanding Issues

  • Programs are not usually designed to be tested this way. That results in recording tools working badly. Many widgets do not have names, for example.
  • Sharing tests with upstream developers and other distributions would be a good thing, both ways. Novell, for example, has test cases for Evolution and perhaps other programs as well. Novell uses LDTP. It may be necessary to work together with others to develop a culture for encouraging tests to be written, and for applications to be designed to be tested.
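
When widgets lack accessible names, a recorder can only identify them by role and position in the tree, which makes recordings fragile. A stdlib-only sketch of that fallback path (the widget dictionaries and the path format are invented for illustration):

```python
# Hypothetical sketch: build an identifying path for a widget so a
# recorder can refer to it even without an accessible name. Named
# widgets get a stable name segment; unnamed ones fall back to
# role + sibling index, which breaks as soon as the layout changes.
def widget_path(parents, widget):
    """parents: list of {role, name, index} dicts from root to target."""
    parts = []
    for node in parents + [widget]:
        if node["name"]:
            parts.append(node["name"])
        else:
            # fragile fallback: role plus index among siblings
            parts.append("%s[%d]" % (node["role"], node["index"]))
    return "/".join(parts)

window = {"role": "frame", "name": "gedit", "index": 0}
unnamed_button = {"role": "push button", "name": "", "index": 2}
path = widget_path([window], unnamed_button)
# path is "gedit/push button[2]"
```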

BoF agenda and discussion

QATeam/Specs/DesktopAutomatedTests (last edited 2008-08-06 16:40:41 by localhost)