QA

Upstream Kernel Bug Process and Relationship

The current workflow and relationship can be improved by:

  1. Work with the Ubuntu kernel team to provide an upstream vanilla kernel package that our users can easily install and test.
    • Build a package for the latest upstream kernel -rc candidate or final release
    • Build a package for the most recent upstream stable kernel
  2. Work on re-educating our bug reporters to test the upstream kernel and report their bugs upstream:
    1. Bug is triggered using the Ubuntu kernel
    2. The next step is to test the stable kernel package the Ubuntu kernel was based on; use cat /proc/version_signature to determine which one that is (see the sketch after this list)

      • If the bug does not exist with the stable kernel package, the issue lies in one or more Ubuntu-specific patches
      • If the bug exists with the stable kernel package, proceed to the next test
    3. Test the latest upstream kernel -rc candidate or final release package (whichever is the most recent)
      • If the bug does not exist in the upstream kernel, narrow down which patch to backport to the Ubuntu kernel
      • If the bug exists in the upstream kernel, report the bug upstream (create docs instructing users on the proper way to report the bug upstream; this may vary between reporting the bug to a mailing list and filing a report at bugzilla.kernel.org)
  3. Enable kerneloops by default and enable apport to detect and report bugs from oopses on the next boot
    • Add extra version info to identify the kernel as an Ubuntu kernel so that oops reports are more useful to upstream
    • Ensure that the hardware certification infrastructure takes proper advantage of the oops reports - this includes booting the system with a working kernel / in a working mode to let apport generate the bug report
  4. Work with upstream kernel bugzilla maintainers to update to bugzilla 3.0 and install the Launchpad plugin
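
As an illustration of step 2 above, a minimal sketch of reading /proc/version_signature to find the base kernel version, assuming output of the form "Ubuntu 2.6.27-11.27-generic" (the exact format may vary between releases):

    import re

    # /proc/version_signature identifies an Ubuntu kernel and the stable
    # kernel it was based on, e.g. "Ubuntu 2.6.27-11.27-generic".
    with open('/proc/version_signature') as f:
        signature = f.read().strip()

    match = re.match(r'Ubuntu (\d+\.\d+\.\d+)', signature)
    if match:
        print('Base upstream stable kernel: %s' % match.group(1))
    else:
        print('Not an Ubuntu kernel signature: %s' % signature)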

Elevate checkbox privileges with policykit

Checkbox currently runs completely as root, which is not appropriate default behaviour for most end-user cases. Privileges should be elevated only when there is a specific need, as defined in the test.

  • Checkbox in Ubuntu will run as a user by default
  • Individual tests will specify that they need elevated privileges
  • If elevated permissions are needed for any tests in a scheduled run, PolicyKit will be called and the user will be prompted for a password once at the start of the run (a sketch follows)
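
A minimal sketch of that flow, assuming a pkexec-style PolicyKit helper and an action configured with auth_admin_keep (so one successful authentication covers the whole run); the test objects and their needs_root attribute are hypothetical:

    import subprocess

    def run_scheduled(tests):
        # Prompt for authorization once, up front, if any test needs root.
        if any(t.needs_root for t in tests):
            subprocess.check_call(['pkexec', 'true'])
        for t in tests:
            # Only tests that declare a need for elevated privileges are
            # wrapped; everything else runs as the ordinary user.
            cmd = (['pkexec'] + t.command) if t.needs_root else t.command
            subprocess.call(cmd)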

Automate testing from -proposed

Uploads of certain packages to the -proposed archive should trigger a full test run on all certified hardware in Canonical's labs. We can currently do this manually but should add more automation.

  • We discussed which packages should trigger a test run and how it should be queued
  • Test results should be published in a publicly accessible location
  • In the future it should also be possible to trigger runs from the certification web interface
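
A hedged sketch of what the automation might look like with launchpadlib; the watch list and the queue_test_run() handler are hypothetical placeholders:

    from launchpadlib.launchpad import Launchpad

    WATCHED_PACKAGES = ['linux', 'xorg-server']   # hypothetical watch list

    def queue_test_run(source):
        # Placeholder: hand the package off to the lab's test queue.
        print('Would queue tests for %s %s' % (
            source.source_package_name, source.source_package_version))

    lp = Launchpad.login_anonymously('proposed-watcher', 'production')
    ubuntu = lp.distributions['ubuntu']
    series = ubuntu.getSeries(name_or_version='jaunty')

    for name in WATCHED_PACKAGES:
        for source in ubuntu.main_archive.getPublishedSources(
                source_name=name, distro_series=series,
                pocket='Proposed', status='Published'):
            queue_test_run(source)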

Managing needs-packaging bug reports

Currently, 'needs-packaging' bug reports are intermingled in the list of bugs without a package, which is problematic for both bug triagers and Ubuntu developers.

  • In the longer term we need to separate these requests out from the overall pool of bugs, such as in a separate LP project, but that is blocked on bug 80902.
  • In the short term we will improve the situation by:
  • Setting all needs-packaging bug reports to an importance of Wishlist
    • Write a launchpadlib or python-launchpad-bugs script that finds needs-packaging reports via the 'needs-packaging' tag and sets their importance to Wishlist. It should also add a comment noting that the bug was automatically set to an importance of Wishlist because it is tagged 'needs-packaging' (see the sketch after this list).
    • Notify developer mailing list(s) regarding what will happen to needs-packaging bugs, include comment added by script and criteria used.
    • Set up the script on cranberry
  • Generate a report of users_affected_count for needs-packaging bugs
    • Write a launchpadlib script, or a database query, that queries needs-packaging bug reports for users_affected_count and generates an HTML report. The report should include the bug number (with a link), the bug title, and users_affected_count (also covered in the sketch below).
    • Publicize the fact that the report exists to entice people to use the "This bug affects me too" 'button' in Launchpad for needs-packaging reports
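
A sketch of the two scripts described above, assuming launchpadlib access (the report file name is illustrative):

    from launchpadlib.launchpad import Launchpad

    COMMENT = ('This bug was automatically set to an importance of Wishlist '
               "because it is tagged 'needs-packaging'.")

    lp = Launchpad.login_with('needs-packaging-triage', 'production')
    ubuntu = lp.distributions['ubuntu']

    rows = []
    for task in ubuntu.searchTasks(tags=['needs-packaging']):
        if task.importance != 'Wishlist':
            task.importance = 'Wishlist'
            task.lp_save()
            task.bug.newMessage(content=COMMENT)
        rows.append((task.bug.id, task.bug.title,
                     task.bug.users_affected_count))

    # Report: bug number (linked), bug title, and users_affected_count.
    with open('needs-packaging-report.html', 'w') as report:
        report.write('<table>\n')
        for bug_id, title, affected in sorted(rows, key=lambda r: -r[2]):
            report.write('<tr><td><a href="https://launchpad.net/bugs/%d">'
                         '%d</a></td><td>%s</td><td>%d</td></tr>\n'
                         % (bug_id, bug_id, title, affected))
        report.write('</table>\n')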

Expand checkbox's test coverage

Checkbox currently has a limited amount of test coverage, due to its history of being the hardware testing tool. We would like to expand its test coverage by incorporating other existing test suites, such as the security team's qa-regression-testing tests and LDTP-based testing.

What tests are out there, and what is appropriate for different cases? Checkbox itself ships only a limited set of tests.

  • qa-regression-testing
    • in a bzr branch, not a package, at this point in time
    • it is mostly for server packages or command line applications
    • should be incorporated for testing proposed packages and development releases

Organization

  • granularity - be able to run specific tests individually (e.g. for a bug fix), then run the full test suite
  • Have a command line interface to checkbox to be able to run specific tests or suites of tests

Cherry-picking tests is expensive because it is like forking a project, so just start running all of the tests

  • If a test fails, that failure should be recorded and future failures of that test should be expected
  • create your baseline in the first run of the test suite: if 45 out of 50 tests passed, then on the next run you expect at least 45 passes; anything less than 45 is a problem (see the sketch below)
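
A minimal sketch of the baseline idea; the storage format (a plain text file) is a hypothetical choice:

    import os

    BASELINE_FILE = 'baseline-passes.txt'

    def check_run(passed, total):
        # The first run establishes the baseline pass count.
        if not os.path.exists(BASELINE_FILE):
            with open(BASELINE_FILE, 'w') as f:
                f.write(str(passed))
            print('Baseline set: %d/%d passed' % (passed, total))
            return True
        with open(BASELINE_FILE) as f:
            baseline = int(f.read())
        # Later runs must match or beat the baseline.
        if passed < baseline:
            print('Problem: %d passes, expected at least %d'
                  % (passed, baseline))
            return False
        print('OK: %d/%d passed (baseline %d)' % (passed, total, baseline))
        return True

    check_run(45, 50)   # establishes the 45-pass baseline
    check_run(43, 50)   # flagged: fewer passes than the baseline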

We should not write tests that have already been written; we should leverage upstream tests and trust that those tests are good

  • Extending them to cover more situations is ok, though, right?

Ubuntu QA Portal

The QA Team has started migrating various QA tracking pages to the new QA server. We should focus on moving any remaining pages over and write a central landing page to reference all of the information provided.

Jaunty Regression Management

A discussion about how we should improve the management of reported regressions to avoid releasing them in Jaunty

  • In the previous cycle we started tracking regressions with tags such as regression-potential and regression-release
  • The challenge for Jaunty is to ensure that regression-potential bugs are escalated and milestoned

Package status pages - Jaunty updates

A discussion on how we continue to improve the status pages for Jaunty.

HIGH

  • historical view of data, i.e. like Bryce's page (needs a change to the PHP code)
    • individual graphical view of statuses (possibly clickable from the graph headers)
  • look at upstream report to find packages to add - https://edge.launchpad.net/ubuntu/+upstreamreport

  • Reword "Most duplicates" to "Bug with most duplicates"

MEDIUM

LOW

  • Number of bugs carried over from release to release (missed milestone)
  • Change the styling of the links in the numbers: it is not obvious, and went unnoticed for a while. (Needs a change to the CSS style sheet.)
    • this will conflict with the rest of the qa.ubuntu.com site, so it would require a separate CSS style sheet
    • additionally, adding color changes will require further modification

WISHLIST

  • breakdowns by release
  • deltas for the increasing/decreasing number of duplicates and subscribers, to indicate the popularity of a given bug
    • possibly leverage the mailing list to gather this info
  • bug gravity
    • a mathematical calculation of a bug's impact using the number of subscribers, comments, me-toos, and duplicates (see the sketch after this list)
  • Deltas of bug stats for each milestone; overlaps with http://people.ubuntu.com/~ogasawara/weatherreport.html

  • deltas should be clickable links to list of bugs e.g. the new bugs in the past seven days
    • wait to use the "bug bag" functionality that will come with LP 3.0
  • vertical lines for releases and package uploads
    • still don't know how to do this in gnuplot
  • bug info section for release-targeted bugs
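
For the bug-gravity idea above, a minimal sketch; the weights are hypothetical, since the session only identified the inputs:

    # A possible "bug gravity" score: a weighted sum of the signals the
    # session listed. The weights below are invented and would need tuning.
    def bug_gravity(subscribers, comments, metoos, duplicates):
        return (2.0 * duplicates      # each duplicate is a strong signal
                + 1.5 * metoos        # "affects me too" marks
                + 1.0 * subscribers
                + 0.5 * comments)

    print(bug_gravity(subscribers=12, comments=30, metoos=8, duplicates=5))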

Regression tracker Jaunty improvements

The regression tracker is a page for tracking regressions in the Ubuntu release; it can be found at Regression Tracker. For Jaunty, the planned improvements to this page are:

Split into two components:

  • for developers, regressions to fix
    • use the Triaged status as a filter
  • for QA team, reports to help identify regressions
    • use New, Incomplete, Confirmed as a filter

For Developers:

  • Split out to per-tag pages (regression-release, etc.)
  • Filter out untriaged and low priority bugs
  • reports per team (foundations, desktop, etc.)

For QA:

  • queries to identify possible regressions.
  • possibly record results of triaging.

Triaging of bugs with patches attached

During the previous cycle a large number (~1900) of bugs with patches attached was identified, along with an unknown number of reports that have attachments not marked as patches. During the session we discussed how the bugsquad could help review these bugs and patches so that, in the short term, they can be included in Ubuntu or rejected. To improve this workflow we will:

* Create Bugs/Patches

  • include information on what qualifies as a patch
  • include a link to launchpad-greasemonkey-script that turns patches into stars
  • include screenshots regarding how to flag an attachment as a patch and unflag an attachment
  • e-mail Bug Squad regarding wiki page

* Add patch replies to Bugs/Responses.
* Add links to the report of bugs with flagged patches.
* Add links to the report of bugs with unflagged patches.
* Document the patch workflow

  • If a bug has a patch, common actions include:
    • subscribe the appropriate sponsoring team to the bug if they are not subscribed and if something needs uploading
    • forward the patch upstream because it is wishlist etc...
    • set to work on later because we are at the wrong time in the release cycle

* Create a list of patches reports

  • patches in sponsoring queue
  • patches flagged as later
  • patches needing forwarding
  • patches that have been forwarded
  • everything else

* Write scripts for flagging / unflagging patches and adding comments, and add them to ubuntu-qa-tools (a sketch follows).
* Modify the unfixed-bugs-with-patches report to exclude bugs that a sponsorship team is subscribed to.
* Update the patches reports to use the new Launchpad report formatting.
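
A hedged sketch of the flagging script, assuming launchpadlib (Launchpad distinguishes patches from plain attachments via the attachment's type field):

    from launchpadlib.launchpad import Launchpad

    def set_patch_flag(bug_number, attachment_title, is_patch, comment):
        # Flag (or unflag) the named attachment as a patch and explain why.
        lp = Launchpad.login_with('patch-flagger', 'production')
        bug = lp.bugs[bug_number]
        for attachment in bug.attachments:
            if attachment.title == attachment_title:
                attachment.type = 'Patch' if is_patch else 'Unspecified'
                attachment.lp_save()
                bug.newMessage(content=comment)
                return True
        return False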

GNOME desktop testing

Session to discuss the creation of a GNOME Desktop Testing framework, based on Ubuntu's.

  • The scope of testing is functional testing of the applications, so running them multiple times on different architectures is not worth it.
  • How do we get upstreams to start testing using this?
    • QA will be responsible for writing the first few cases
    • If an upstream project decides to use the GNOME/Ubuntu Desktop Testing project for their UI tests, then provide hooks to make it easier for them to run and update the tests.
    • Tests will live in the Ubuntu Desktop Testing project and be forwarded upstream if needed
  • Possibility of doing a sprint about desktop testing.
  • How to add new test cases to be automated?
    • Using http://testcases.qa.ubuntu.com (We need to add this information)

    • Add a tag to a bug in LP to mark a bug to be automated.
    • Replace the XML file to define test suites for better integration with
  • GNOME project documentation at http://live.gnome.org/DesktopTesting

  • Results:
  • Spec: https://wiki.ubuntu.com/QATeam/Specs/GnomeDesktopTesting

Improving test definition in Checkbox

We need to update the way tests are defined for Checkbox to cover new use cases.

Existing fields:

Required

  • name - utf-8
  • plugin (to be "type") - a-z\w*, re.I
  • description - multi line

Optional

  • command - multi line
  • depends - !?name([_]+name)*
  • requires (to be "run_if") - python-ish boolean exp/multi-exp
  • timeout - \d+
  • optional - true|false|0|1
  • architecture - i386|amd64
  • category - desktop|server
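
A minimal sketch of a definition using the fields above (the values are illustrative, not taken from an actual suite file):

    name: network_ping
    plugin: shell
    description: Check that the default gateway answers pings.
    command: ping -c 4 $(ip route | awk '/default/ { print $3 }')
    depends: network_detect
    requires: package.name == 'iputils-ping'
    timeout: 60
    optional: false
    architecture: i386|amd64
    category: desktop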

Actions:

  • Create a way for checkbox to be forced to run all the tests regardless of depends
  • Have ERROR result statuses

Filing bugs from Checkbox

When performing tests, users should be encouraged to file bugs immediately to a) ensure the bug is filed and b) attach relevant information to the bug. This discussion covered how we should handle this from a user experience perspective.

Questions:

  • How does this relate to apport? Do we use apport to file the bug or simply use the same LP calls?
    • user should have the ability to indicate that they have already filed a bug report
    • will use apport and its data gathering (per-package hooks)
    • report each bug individually when it happens in the community version
    • identify, via a tag, that the bug report was filed using checkbox
    • when there is a failure, there are 3 options:
      1. I want to file a bug
      2. I know there is already a bug about this
      3. I don't want to file a bug
  • How do we note that a bug is a regression?
    • tag the bug with a regression- tag when it is being filed (ideally the tag will be dynamic, depending on the release and package version being tested)
    • store a hash of the test, to identify whether it has changed, when recording results (pass / fail)
    • prompt to report bug should have a timeout in case of long running tests such as the X tests

Can apport queue up bug reports for filing later, or report bugs from the console, i.e. in a server environment?
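
On the apport question: apport already supports per-package hooks that gather extra data, and checkbox could reuse that mechanism when filing. A hedged sketch of such a hook; the CheckboxTest field name and log path are hypothetical:

    import os
    from apport.hookutils import attach_file_if_exists

    def add_info(report):
        # Record which checkbox test was running when the failure occurred.
        report['CheckboxTest'] = os.environ.get('CHECKBOX_TEST', 'unknown')
        # Tag the report so triagers can see it came from checkbox.
        report['Tags'] = 'checkbox'
        attach_file_if_exists(report, '/var/log/checkbox.log')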

QA Bugs

A session led by LP Bugs developer Graham Binns and LP lead Christian Reis covering possible future improvements to Launchpad to improve the workflow of the QA team.

ISO tracker improvements for Jaunty

The ISO tracker is serving us well, but could do with some improvements. In this session we gathered the various todo items.

  • Move to our own box and reinstall with LP integration
    • Close kernel/server/xorg.qa.ubuntu.com; henrik to follow up with IS regarding the possibility of moving it to cranberry
    • Discuss integration of testcases.qa.ubuntu.com with the ISO tracker (parsing the wiki)
      • Get ACLs set up on the testcases wiki [ACLs are already set up. Admins are henrik, schwuk, sbeattie and apulido. ACLs can be applied where required.]
  • Implement a drop-down menu for milestone on the reporting page, then add a specific tag to LP
    • Bugs that are marked as duplicates show up in the reporting page - bug 220378
    • Invalid bug reports show up in the reporting page - bug 220379
    • Add a me-too count (?) for bugs in the reporting page
  • Get an additional status listing what's not completed
    • Parse cdimage/release, generating the releases file out of it, then use it in "Add a build set"
    • Knowing who's doing what at a certain time (bug 291066?)
    • Being able to submit more than one result
      • Used by the automated testing to add results to the tracker; helps track people doing the same test on different hardware
  • Hardware database integration - statistics for HoF and Weather report

Hardware Reporting with Checkbox

Although checkbox currently collects hardware information and submits it to Launchpad, there is no facility for users to view the collected data.

  • The goal is getting more users to report their hardware profile to Launchpad
  • Provide the user with a view of their hardware which may encourage them to upload their hardware
  • Lots of upstreams produce compatibility lists. Why isn't this information used in the distribution?
    • cups for printing - sane for scanners
  • Checkbox could get information about hardware compatibility by grabbing information from an upstream database, sane for example, and present that information to the user.
  • Checkbox could only present information about hardware for which tests exist.
    • other pieces of hardware would be greyed out since no test exists
      • should we tell the reporter that tests are missing? (e.g. "You have 2 devices that have no available tests" [How to help] (add device tests))
  • we would query the hardware database to find hardware needing tests (based on quantities of users that have that hardware)

Streamlining the SRU validation process

Proposed updates to the stable release need to be verified to ensure we don't introduce any regressions and that the fix actually works. Currently the process is not flowing as smoothly as it could. This can be improved with a better SRU validation workflow, including adhering more closely to existing rules. Workflow changes are going to be introduced during the Jaunty cycle:

The Ubuntu SRU Team should be contacted to ensure packages don't get uploaded to -proposed without a TEST CASE, which is a requirement in the SRU Procedure.

If a package was uploaded and does not have a detailed description on how to perform the verification, the bug should be:

  • marked with the tag needs-testcase
  • add a comment saying that a TEST CASE is needed in order for the SRU Verification team to provide feedback on the package
  • contact the Ubuntu SRU Team and assign the bug to the bug fixer to provide the required TEST CASE (a sketch follows)
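
A minimal launchpadlib sketch of that handling; the bug numbers are assumed to come from the -proposed review:

    from launchpadlib.launchpad import Launchpad

    def mark_needs_testcase(bug_number):
        lp = Launchpad.login_with('sru-verification', 'production')
        bug = lp.bugs[bug_number]
        if 'TEST CASE' not in (bug.description or ''):
            # Tag the bug and leave the comment the workflow calls for.
            bug.tags = bug.tags + ['needs-testcase']
            bug.lp_save()
            bug.newMessage(content='A TEST CASE is needed in order for the '
                                   'SRU verification team to provide '
                                   'feedback on this package.')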

For the bugs requiring specific hardware in order to perform the verification, a HARDWARE DESCRIPTION section should be added to the bug description.

A template to improve the process will be created and presented to the SRU Team, to be used for the approval of new proposed SRU packages.

Test framework coordination

Discussion (not related to any spec) about coordinating teams that use the same tools to tackle the same problem

  • Teams in Canonical doing testing:
    • desktop experience team
    • oem qa team
    • online services QA/Operations
    • platform qa team
    • launchpad
  • Tools being used:
    • apport
    • desktop-testing (ldtp)
    • checkbox
  • Actions:
    • Create a resource for identifying tools being used for test automation
    • Test writing sprint(?)
    • Maybe a (bi)weekly meeting to coordinate efforts
    • Talk more about automated testing on public channel (ubuntu-devel) to drum up interest
    • Testing (or test writing?) jams to motivate the community
    • test writers on the HoF(?)

Test case wiki migration

Session to discuss the migration of the available test cases at https://wiki.ubuntu.com/Testing/Cases to the new testcases specific wiki at http://testcases.qa.ubuntu.com

  • Syntax of testcases
    • We need a better way to add attached documents or additional files to the test case.
    • https://testcases.qa.ubuntu.com/Applications/Application would be the path in the wiki

    • We could have a macro that pulls the correct screenshots for a test case depending on the distribution: i.e. we have testcases.qa.ubuntu.com/Applications/Gedit, but when called from /Xubuntu/Applications/Gedit it will contain Xubuntu screenshots if available.
  • Syntax of the track number
    • Right now it is u-ff-001 (u for Ubuntu, ff for the app (in this case Firefox), and a three-digit test case number)
    • Change that to ff-001 (2 letters for the application and 3 digits for the test case number)
    • If it is a test case for a Launchpad bug then the test case number will be lp-[bugnumber] (see the sketch after this list)
  • Tracking results
    • Chris maintains a table with results and then uses a Python script to parse it and convert it if necessary.
    • [Chris] We need to use testcases.qa.ubuntu.com to track the results of tests. Maybe with macros. (in an ISO tracker manner)
    • ISO tracker and testcases.qa.ubuntu.com communication
  • Dave to set up an LP project to file bugs and have the code in bzr
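
For the track-number scheme above, a small sketch of a validator (two lowercase letters plus a three-digit number, or lp-<bugnumber>):

    import re

    TRACK_RE = re.compile(r'^(?:[a-z]{2}-\d{3}|lp-\d+)$')

    for track in ('ff-001', 'lp-220378', 'u-ff-001'):
        # u-ff-001 is valid under the old scheme but not the new one.
        print(track, 'valid' if TRACK_RE.match(track) else 'invalid')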
