Quantal

Introduction and Scope

This test strategy is meant to describe where the QA team stands at the moment and what the objectives are for the coming releases. We realize that getting to the level of excellence required by a project like Ubuntu is not easy, so we have put together a strategy that will get us there in a reasonable amount of time and that we can measure ourselves against. This is, by definition, a dynamic document that will need to be reassessed and refined as work progresses. The stakeholders of this document are Product Management, Engineering Management and the community as a whole, all of whom will play an instrumental role in the execution of this strategy. The owner of this strategy is the QA team.

Aims/Objectives

The main objective is to take Ubuntu to the next level in terms of quality and excellence. Good measures of success will be the number of requests for testing the QA team gets going forward and the trust of the whole community in our assessment of the quality of the coming releases. Also, because all our deliverables will be public, we shall lead by example in terms of quality assurance practices for the industry. It is understood that Ubuntu would benefit from comprehensive and systematic testing, and ideally this strategy will allow the community as a whole to benefit from the bugs found during our testing. We will also make sure good practices are understood and applied systematically by our community collaborators, so we will thoroughly review all contributions and help contributors learn good QA engineering practices. Main aims:

  • Make Ubuntu enterprise-ready and increase users' confidence in the coming releases
  • Improve the user experience
  • Improve performance
  • Reduce the amount of bugs that escape the development phases
  • Optimize the community efforts and offer the opportunity to learn industry standard testing to those individuals willing to help
  • Automate as many test cases as necessary and organize the manual test cases in a manageable way
  • Avoid duplication of efforts and make every helping hand count towards the end goal
  • Establish metrics for better quality assessment
  • (Please add anything that feels important)

Current situation

The Good

  • Jenkins, the test result reporting tool, is in place and fully functional
  • There are test cases that get the job done to some extent, and the community is able to use them
  • There is a lab almost fully in place for running the test cases
  • There is a defect report that enables us to have a realistic picture of the current bug situation

The Bad

  • Lack of a test case management system. At the moment all the test cases are kept in a website/repository that doesn't allow much flexibility or maintainability
  • Most current test cases tend to lack "Expected results", which makes them unreliable in terms of reporting results
  • Little automation
  • No measurable coverage
  • There are no metrics, so there is no way of telling how effective the current test cases are
  • Defect reporting is uneven and scattered; it is difficult to determine what has already been reported and what has not
  • Little interest at the moment from the community in testing activities, testing is perceived as not fun/not very useful
  • No well-defined role ownership/responsibilities
  • Lack of urgency regarding failed daily builds
  • The QA team is not sought after as an authoritative source on bugs found or test results produced

How to get there

Precise objectives

Theme

Let's make quality, or the lack thereof, visible to everyone.

See what we did for Precise.

Q-R Release

Theme

Establish a standard way of automating for Ubuntu and start adding test cases towards full coverage.

Ubuntu Automation Test Harness (UATH)

  • Objective

    • Have a test harness that makes creating end-to-end automated tests easy (a sketch of what such a test might look like follows the Actions list below)
    • Have a test harness that makes running automated tests easy for everyone

    Actions

    • Keep working on making Ubuntu Automation Test Harness stable and reliable
    • Add missing functionality (bare metal provisioning, provisioning from existing VM, QEMU without HW virtualisation)
    • Move all existing testing to UATH and deprecate all the existing scripts
    • Add the new test suites to the harness and make them available to everyone to use
    • Extend the provisioning plugins to the most common HW platforms used for testing by Ubuntu Engineering
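
As a rough illustration of the kind of end-to-end test the harness should make easy, here is a minimal sketch in Python. UATH's actual test-definition format is not specified in this document, so the unittest-based structure and the specific health checks below are assumptions for illustration, not UATH's API.

{{{#!python
#!/usr/bin/env python3
"""Sketch of a minimal end-to-end smoke test. This is NOT UATH's real
API (which this strategy does not specify); it only illustrates the
shape of a test the harness should be able to create and run easily."""

import subprocess
import unittest


class ProvisionedImageSmokeTest(unittest.TestCase):
    """Example checks that a freshly provisioned system is healthy."""

    def run_cmd(self, *argv):
        # Run a command on the system under test and capture its output.
        return subprocess.run(argv, capture_output=True, text=True)

    def test_no_broken_dpkg_packages(self):
        # Expected result: dpkg --audit prints nothing and exits 0,
        # i.e. no half-installed or broken packages on the image.
        result = self.run_cmd("dpkg", "--audit")
        self.assertEqual(result.returncode, 0)
        self.assertEqual(result.stdout.strip(), "",
                         "dpkg --audit reported broken packages")

    def test_apt_database_is_consistent(self):
        # Expected result: apt's own sanity check passes.
        result = self.run_cmd("apt-get", "check", "-qq")
        self.assertEqual(result.returncode, 0, result.stderr)


if __name__ == "__main__":
    unittest.main()
}}}

Note that each test states its expected result explicitly, which addresses the "Expected results" gap called out under "The Bad" above.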

Adding new test coverage

  • Objective

    • Have a comprehensive smoke testing suite that is reliable by the end of the cycle
    • Start building a comprehensive regression testing suite
    • Run the test cases on regular basis to catch regressions

    Actions

    • Determine which test cases should be in the smoke testing suite and add them as new test suites become available in UATH
    • Create a script to transform http://gb.archive.ubuntu.com/ubuntu/dists/precise/main/source/ into a consumable format so that we can work with it and prioritise the work (see the sketch after this list)
    • Determine an algorithm to prioritise the importance of packages in terms of getting tested
    • Create a test plan to validate that GNOME dependencies are satisfied and Ubuntu is not broken by compatibility breaks
    • Generate a list of all the bugs that have test cases associated, go through them, and decide which ones are worth automating and which are not
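
A possible shape for the transformation script mentioned above: fetch the Sources.gz index from the directory linked (standard Debian control stanzas), parse it into records, and emit a prioritised list. The SOURCES_URL constant and the priority() heuristic are placeholder assumptions, since the actual prioritisation algorithm is a separate action item still to be determined.

{{{#!python
#!/usr/bin/env python3
"""Sketch: turn the archive's Sources index into a prioritised work
list. The scoring below is a deliberate placeholder; the real
prioritisation algorithm is a separate action item."""

import gzip
import urllib.request

SOURCES_URL = ("http://gb.archive.ubuntu.com/ubuntu/dists/precise/"
               "main/source/Sources.gz")


def parse_sources(text):
    """Yield one dict of fields per source-package stanza."""
    stanza, key = {}, None
    for line in text.splitlines():
        if not line.strip():            # blank line ends a stanza
            if stanza:
                yield stanza
            stanza, key = {}, None
        elif line[0] in " \t" and key:  # continuation of previous field
            stanza[key] += " " + line.strip()
        elif ":" in line:
            key, _, value = line.partition(":")
            stanza[key] = value.strip()
    if stanza:
        yield stanza


def priority(pkg):
    """Placeholder: rank sources that build many binaries first."""
    return len(pkg.get("Binary", "").split(","))


with urllib.request.urlopen(SOURCES_URL) as resp:
    text = gzip.decompress(resp.read()).decode("utf-8", "replace")

for pkg in sorted(parse_sources(text), key=priority, reverse=True)[:20]:
    print(priority(pkg), pkg.get("Package"))
}}}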

Metrics

  • Objective

    • Track progress, be able to determine how long it will take us to get to 100% coverage, and help us prioritize the work

    Actions

    • Set up targets based on the agreed metrics for the coming releases
    • # of regressions found by our test suites (this is meant to measure the test cases' effectiveness; we could keep stats of defects found per test case, to know which test cases are the most effective at finding defects and be able to write more of those; see the sketch after this list)
    • # of new test cases
    • Start doing test escape analysis: find out how many bugs escaped our testing and why. Can we automate test cases for them? Why weren't the test cases there in the first place? How many could we have found? How many are insignificant?
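
To make the defects-found-per-test-case statistic concrete, a small sketch follows. The record format and the sample rows are invented for illustration; in practice the data would be assembled from Jenkins results and the bug tracker.

{{{#!python
#!/usr/bin/env python3
"""Sketch: rank test cases by how many defects they have found, so we
know which kinds of test cases to write more of. The input rows below
are made-up examples, not real results."""

from collections import Counter

# One (test_case, bug_id) pair per defect a test case caught.
defect_records = [
    ("smoke/boot", 1001),
    ("smoke/boot", 1017),
    ("regression/upgrade", 1003),
    ("smoke/login", 1008),
    ("smoke/boot", 1042),
]

defects_per_case = Counter(case for case, _ in defect_records)

# Most effective test cases first.
for case, found in defects_per_case.most_common():
    print(f"{case}: {found} defect(s) found")
}}}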

QA&Release, building bridges (or QA Release Process)

  • Objective

    • Discuss the problems, miscommunications and last-minute crises that we had during Precise and find ways to make releases smoother from the QA viewpoint.

    Actions

    • Discuss what went well, what went wrong, and how to get better going forward
    • Agree on what a broken build is
    • Discuss what feature freeze means and how QA and Release can work together to de-risk bug fixes

QA Community

Needs balloons' input:

  • Produce work packages that can be worked on by anyone in the community wishing to help; we should have different work-package sizes (1 day, 1 week, 1 month) to suit all needs
  • The community and developers are able to contribute new test cases and test code to the new test harness and all the contributions add up and are meaningful

18 months objectives

  • Start training the people in the community who want to contribute with testing so that we get consistent and reliable results (this can be in the form of IRC meetings, webinars/videos or sessions at UDS)
  • Start running static analysis tools on the code, so that the more obvious problems are found automatically and upfront (see the sketch after this list).
  • There is a comprehensive set of test cases that we can run on daily builds, enabling us to assess the quality of the builds
  • We can add test cases as new features are being developed (we have caught up with our backlog)
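
As one way to make the static analysis item actionable, the sketch below wraps an existing analyser and summarises its findings. cppcheck is used purely as an example of an available open-source tool; which analysers suit the Ubuntu codebase is still an open question.

{{{#!python
#!/usr/bin/env python3
"""Sketch: run a static analyser over a source tree and count its
findings. cppcheck is an example choice, not a decision made by this
strategy."""

import subprocess
import sys


def run_cppcheck(tree):
    # cppcheck prints its findings on stderr, one per line;
    # --quiet suppresses the progress output on stdout.
    result = subprocess.run(
        ["cppcheck", "--enable=warning", "--quiet", tree],
        capture_output=True, text=True)
    return [line for line in result.stderr.splitlines() if line.strip()]


if __name__ == "__main__":
    tree = sys.argv[1] if len(sys.argv) > 1 else "."
    findings = run_cppcheck(tree)
    for line in findings:
        print(line)
    print(f"{len(findings)} potential problem(s) found in {tree}")
}}}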

24 months objectives

  • Our testing is fully automated (i.e. automated as much as possible) and our overall coverage is close to or better than 80% functional, 50% conditional.
  • The amount of critical defects that escape our testing is very low
