UserManualTests

Objectives

Last cycle, it often happened that regressions in Unity were hardware-specific, to the point that the session could not even boot on some Intel graphics hardware, while everything worked fine on NVIDIA and nothing looked wrong.

Before releasing a new version of Unity in Ubuntu, we need test coverage for *what the user sees*, not just Unity's internal state (for instance, it can happen that the launcher "is shown" according to Unity's internal state, but a stacking issue prevents it from actually appearing on screen).

The distro is not upstream, so we want a different approach: cross-checking with user testing, which is the only way to ensure we have covered the basic workflows a user is expected to perform. These test cases have to be executed on a wide variety of hardware, and we cannot scale ourselves in that direction: involving many users who are technical enough to run the unstable release, but not technical enough to file bug reports, is a good way to reach that goal.

Consequently, we need an entertaining tool where you can complete all or just a subset of a manual test suite and be rewarded for it.

We also need to be able to analyze that data easily on the server side.

Idea

Requirements

Data to gather

Test format

Must-have

Nice-to-have / ideas

Possible file layouts:

Storage

WebUI

Must-have

Nice-to-have / ideas

Existing solutions

We looked into two existing solutions, Checkbox and Litmus, but did not consider them suited to our objectives:

Implementation Notes

Using results-tracker.ubuntu.com to store results looks like the best solution.

API between Interface and test cases manager

-> the next test to run will depend on the results of previous tests (failure, success, etc.)

testrun.failure(comment="")
testrun.success()
testrun.skip()
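A minimal Python sketch of what this API could look like. Only the method names failure()/success()/skip() come from this page; the TestRun class, the result values, and the dependency-resolution helper are assumptions made for illustration:

```python
# Hypothetical sketch of the interface <-> test case manager API.
# Only failure()/success()/skip() appear on this page; everything else
# (class name, result strings, dependency logic) is assumed.

class TestRun:
    """Records the outcome of one manual test case."""

    def __init__(self, test_id):
        self.test_id = test_id
        self.result = None      # "success", "failure", or "skip"
        self.comment = ""

    def failure(self, comment=""):
        self.result = "failure"
        self.comment = comment

    def success(self):
        self.result = "success"

    def skip(self):
        self.result = "skip"


def next_test(previous_runs, suite):
    """Pick the next test: the first unexecuted one whose prerequisites
    all succeeded, so the next test depends on previous results."""
    results = {run.test_id: run.result for run in previous_runs}
    for test_id, prerequisites in suite:
        if test_id in results:
            continue  # already executed
        if all(results.get(dep) == "success" for dep in prerequisites):
            return test_id
    return None  # nothing left that can run


# Usage: two tests, the second depends on the first succeeding.
launcher = TestRun("launcher-shows")
launcher.success()
suite = [("launcher-shows", []), ("launcher-autohide", ["launcher-shows"])]
print(next_test([launcher], suite))  # -> launcher-autohide
```

This keeps the interface (which only calls failure/success/skip) decoupled from the test case manager, which decides ordering from the accumulated results.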

UserManualTests (last edited 2011-11-22 11:46:01 by mne69-6-82-231-93-97)