Activities

The U+1 Team is looking for new members. Only basic skills are needed for most tasks. This is an opportunity to join a friendly and talented community, learn fast, and play an active part in Ubuntu's future. See the Join U+1 page to learn more.

Activities Planner and Development Release Schedule

  • Period: Yakkety Yak (16.10) Development Cycle (May 2016 to October 2016).

  • Start date: Release date of Xenial Xerus (May 2016).

  • Weeks: Start on a Thursday.

  • Aim: To identify testing opportunities that fit in with the development release process.

  • QA Cadence weeks run from Saturday to Saturday.

  • Information on this page is drawn from various sources freely available to the public.

Yakkety Yak Release Schedule

Methods and Means

ISO Image Testing - Cadence Testing

  • Every 2 weeks, as part of our testing cadence, we download copies of the latest daily ISO images, burn them to CDs (or load them into VMs) and test them. This brings to light many issues that might have been missed by other early adopters and developers, especially in the CD builds and installers. A sketch of fetching a daily image follows this list.
  • If you are interested in this kind of testing, see the ISO information page for instructions on getting started with ISO testing and directions for using the test tracker.

  • The ISO Testing Walkthrough is a good place to start.

  • Also ensure you've joined the Ubuntu QA mailing list so that you know when the testing weeks are occurring.
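For illustration, here is a minimal sketch of fetching and checking a daily image with zsync, which re-downloads only the parts of the ISO that changed since your last copy. The image name and URL are examples; check cdimage.ubuntu.com for the flavour and architecture you are testing:

sudo apt-get install zsync

zsync http://cdimage.ubuntu.com/daily-live/current/yakkety-desktop-amd64.iso.zsync

md5sum yakkety-desktop-amd64.iso

Compare the checksum with the MD5SUMS file published alongside the image before you start testing.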


ARM Testing

  • Additionally, if you have the hardware for it, we are actively helping push Ubuntu forward onto ARM architectures. Check out the ARM pages for more information on testing Ubuntu ARM images, including testing on a PandaBoard.


Daily Smoke Testing

  • Between milestones, it is also good (but not release critical) to test the daily ISOs and as many applications as possible. This kind of testing can be done in a virtual machine or on spare hardware; a minimal QEMU sketch follows.
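For example, a downloaded daily ISO can be booted in QEMU without burning anything. A minimal sketch, assuming QEMU is installed and using an illustrative image name (drop -enable-kvm on hardware without KVM support):

qemu-system-x86_64 -enable-kvm -m 2048 -cdrom yakkety-desktop-amd64.iso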


Stable Release Update (SRU) Testing

  • All stable release updates are first uploaded to the Proposed repository before they are released live. An important principle behind Ubuntu is that we should never make a working system worse; stable release updates need to be extensively tested to make sure that there are no regressions.

  • If you are interested in this kind of testing, there is a public Launchpad SRU Verification team that you can join, as well as a fantastic SRU wiki page with helpful tools for finding bugs to work on. A sketch of installing a candidate from the -proposed pocket follows this list.
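As a minimal sketch of one common approach (the release name xenial and the name PACKAGE are placeholders for your own release and the package under verification): enable the -proposed pocket, refresh the package lists, and install the candidate from it:

sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu/ xenial-proposed restricted main multiverse universe"

sudo apt-get update

sudo apt-get install -t xenial-proposed PACKAGE

After testing, follow up with your results on the bug report that tracks the update.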


Feature Testing

  • One of the objectives each cycle is to provide testing for all the major features in the release. We organize periodic calls for testing to focus on new features, such as a new video driver or a big change to an important tool like Software Center.
  • If this kind of testing interests you, please join the Ubuntu QA mailing list. You will receive periodic emails about calls for testing a specific feature and how to participate.


General Testing

  • Ubuntu is more than its default installation and basic tasks - it's an entire repository of software and possible configurations. We need to test as much of it as possible, and some things aren't well covered by established test cases and procedures. General testing is as simple as attempting to use the development release and reporting whatever problems you run into as a bug.
  • If you are interested in this kind of testing, install the development version of Ubuntu, and report and follow up on any bugs you encounter (see the ubuntu-bug example below). The mailing list and weekly QA meetings are another great place to discuss any bugs found that you feel may be of a more critical nature or have a widespread effect.
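Reporting is easiest with the ubuntu-bug tool, which collects the relevant package information automatically. The package name here is only an example:

ubuntu-bug nautilus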

Application Testing

  • Application testing is the manual testing of specific things (test cases) in applications. Regression tests are specific tests for potential breakages from one release to another (they're also relevant for SRU testing, above).
  • If you are interested in this kind of testing, head to the Packages QA Tracker, run through the application testcases and report your results. NOTE: The application testcases are currently being migrated during the Quantal cycle.

  • Now Testing
    • Week of 15th November - 12.10 kernel 3.5.0.18 on 12.04 userspace.

Install Quantal kernel

Prerequisites: Make sure you are running the latest version of Precise, and that all your packages are up to date.

1) Add the X team's PPA

sudo add-apt-repository ppa:ubuntu-x-swat/q-lts-backport

2) Update apt and install the new kernel

sudo apt-get update && sudo apt-get install linux-image-generic-lts-quantal linux-headers-generic-lts-quantal

3) Restart your computer to boot into the new kernel
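After rebooting, you can confirm that the new kernel is running; a version string beginning with 3.5 indicates the Quantal kernel:

uname -r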

Uninstall Quantal kernel

You may remove the 3.5 kernel by using ppa-purge.

1) Install ppa-purge

sudo apt-get install ppa-purge

2) Remove the ppa

sudo ppa-purge ppa:ubuntu-x-swat/q-lts-backport

3) ppa-purge will find the installed packages and offer to downgrade them. Say yes, and ppa-purge will remove the upgraded versions and reinstall the versions from the archive.

4) Remove the meta packages

sudo apt-get remove --purge linux-image-generic-lts-quantal linux-headers-generic-lts-quantal

5) Remove the kernel itself. While running the new kernel, enter the following command

uname -r

This returns a version string, like '3.5.0-8-generic'. Use the version number (3.5.0-8) to replace the word KERNEL below:

sudo apt-get remove --purge linux-image-KERNEL-generic linux-headers-KERNEL linux-headers-KERNEL-generic

sudo apt-get autoremove

E.g., for '3.5.0-8':

sudo apt-get remove --purge linux-image-3.5.0-8-generic linux-headers-3.5.0-8 linux-headers-3.5.0-8-generic

Notes based upon experience

During periods of transition from testing one kernel to testing the next, you may get the error message: "Unable to locate linux-image-generic-lts-quantal." The answer is to wait a few days and then run the update command again. It also takes the QA tracker a few days to be updated to allow reporting for that new kernel image.

Bug reporting instructions


Automated Testing

  • Automated testing is the conversion of large numbers of test cases into simple scripts, which can often be run in bulk with a single command. UTAH is the main way of automating testcases for Ubuntu. In addition, the program Checkbox also allows automated test cases to be run and recorded.

  • If you are interested in this kind of testing, start by running Checkbox. It is now installed by default; you can find it in the Dash by searching for System Testing (a terminal sketch follows this list). If you wish to go further and write UTAH test cases, you can visit the Automated Testing page for more information.
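If you prefer a terminal, Checkbox also ships desktop and command-line frontends. The frontend names vary by release, so the commands below are assumptions to verify against your own installation (checkbox-qt was the Qt frontend in the Quantal era; checkbox-cli runs in text mode):

checkbox-qt

checkbox-cli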

Autopilot Unity Testing

The Unity team has built Autopilot as a testing tool for Unity. However, Autopilot has broader applications beyond Unity, helping us do automated testing on a grander scale. Source: http://qa.ubuntu.com/ (see the blog post of 20th November 2012).

  • To install Autopilot

sudo apt-get install python-autopilot unity-autopilot

  • Caution: Autopilot tests should be run in a Guest session. Any data saved during a guest session is lost when we log out of that session. To prevent the loss of test results:

1) Do not log out of the test session.

2) Instead, use Switch User to return to your normal user account.

3) Use the gksudo command to load Gedit or Nautilus.

4) Browse to the /tmp folder and look for the guest folder; that is where you will find any documents saved while using the guest session. The guest folder will have a name similar to guest-v4GnNa, with a different suffix each time you run a guest session.

5) Copy the data into a new document and save it. Now you will not lose the data when you log out of the guest session.
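Alternatively, a minimal sketch of copying the guest session's files out from your normal account, assuming a guest folder exists under /tmp and ~/autopilot-logs is a destination of your choosing:

mkdir -p ~/autopilot-logs

sudo cp -r /tmp/guest-*/ ~/autopilot-logs/

sudo chown -R $USER: ~/autopilot-logs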

  • Autopilot Unity Tests

This command will list all the Unity tests available.

autopilot list unity

There are at present 461 Unity tests. We can run them one at a time. For example:

autopilot run unity.tests.test_dash.DashRevealTests.test_alt_f4_close_dash

Or, we can run all the tests at the same time with this command:

autopilot run unity

Caution: It will take a very long time to run all 461 Unity tests.

I have grouped the Unity tests into batches and added a command argument that causes Autopilot to save the results into a log file. These files will be found in the /tmp/guest-###### folder. The log files are named in the 'computer-name_date_time.log' format, so each log file has a slightly different name that records the day and time the Unity batch test was run. Autopilot will give the log file name when the test begins to run.

  • Autopilot Unity batch test commands

Copy and paste each command into a terminal. The greater the number of tests in a batch, the longer the batch will take to run.

  • Batches with 1 test

autopilot run -o . unity.tests.test_dash.CategoryHeaderTests

autopilot run -o . unity.tests.test_dash.DashDBusIfaceTests

autopilot run -o . unity.tests.test_dash.DashLensBarTests

autopilot run -o . unity.tests.test_dash.DashSearchInputTests

autopilot run -o . unity.tests.test_dash.DashVisualTests

autopilot run -o . unity.tests.test_home_lens

autopilot run -o . unity.tests.test_ibus.IBusTestsAnthy

autopilot run -o . unity.tests.test_ibus.IBusTestsHangul

autopilot run -o . unity.tests.test_ibus.IBusTestsPinyin

  • Batches with 2 tests

autopilot run -o . unity.tests.test_dash.DashBorderTests

autopilot run -o . unity.tests.test_dash.DashCrossMonitorsTests

autopilot run -o . unity.tests.test_dash.DashKeyboardFocusTests

autopilot run -o . unity.tests.test_hud.HudAlternativeKeybindingTests

autopilot run -o . unity.tests.test_hud.HudCrossMonitorsTests

autopilot run -o . unity.tests.test_hud.HudLauncherInteractionsTests

autopilot run -o . unity.tests.test_ibus.IBusTestsAnthyIgnore

autopilot run -o . unity.tests.test_ibus.IBusTestsPinyinIgnore

autopilot run -o . unity.tests.test_switcher.SwitcherDetailsTests

autopilot run -o . unity.tests.xim.test_gcin.GcinTestHangul

  • Batches with 3 tests

autopilot run -o . unity.tests.test_hud.HudLockedLauncherInteractionsTests

autopilot run -o . unity.tests.test_panel.PanelIndicatorEntryTests

autopilot run -o . unity.tests.test_spread.SpreadTests

autopilot run -o . unity.tests.test_switcher.SwitcherWindowsManagementTests

  • Batches with 4 tests

autopilot run -o . unity.tests.launcher.test_shortcut

autopilot run -o . unity.tests.launcher.test_visual

autopilot run -o . unity.tests.test_dash.DashLensResultsTests

autopilot run -o . unity.tests.test_dash.PreviewInvocationTests

autopilot run -o . unity.tests.test_ibus.IBusActivationTests

autopilot run -o . unity.tests.test_panel.PanelGrabAreaTests

autopilot run -o . unity.tests.test_showdesktop.ShowDesktopTests

autopilot run -o . unity.tests.test_switcher.SwitcherDetailsModeTests

autopilot run -o . unity.tests.test_switcher.SwitcherWorkspaceTests

autopilot run -o . unity.tests.test_unity_logging.UnityLoggingTests

  • Batches with 5 tests

autopilot run -o . unity.tests.launcher.test_capture

autopilot run -o . unity.tests.test_dash.DashMultiKeyTests

autopilot run -o . unity.tests.test_panel.PanelKeyNavigationTests

autopilot run -o . unity.tests.test_shortcut_hint.ShortcutHintTests

  • Batches with 6 tests

autopilot run -o . unity.tests.test_command_lens

autopilot run -o . unity.tests.test_dash.DashClipboardTests

autopilot run -o . unity.tests.test_shopping_lens.ShoppingLensTests

  • Batches with 7 tests

autopilot run -o . unity.tests.test_dash.PreviewNavigateTests

autopilot run -o . unity.tests.test_shortcut_hint.ShortcutHintInteractionsTests

  • Batches with 8 tests

autopilot run -o . unity.tests.launcher.test_reveal

autopilot run -o . unity.tests.test_panel.PanelCrossMonitorsTests

autopilot run -o . unity.tests.test_panel.PanelTitleTests

  • Batches with 9 tests

autopilot run -o . unity.tests.test_dash.DashKeyNavTests

autopilot run -o . unity.tests.test_panel.PanelHoverTests

autopilot run -o . unity.tests.test_quicklist.QuicklistActionTests

  • Batches with 10 tests

autopilot run -o . unity.tests.test_hud.HudVisualTests

autopilot run -o . unity.tests.test_panel.PanelMenuTests

  • Batches with 12 tests

autopilot run -o . unity.tests.launcher.test_icon_behavior

autopilot run -o . unity.tests.test_switcher.SwitcherTests

  • Batches with 13 tests

autopilot run -o . unity.tests.launcher.test_switcher

autopilot run -o . unity.tests.test_quicklist.QuicklistKeyNavigationTests

  • Batches with 14 tests

autopilot run -o . unity.tests.test_dash.DashRevealTests

  • Batches with 21 tests

autopilot run -o . unity.tests.launcher.test_keynav

  • Batches with 28 tests

autopilot run -o . unity.tests.test_hud.HudBehaviorTests

  • Batches with 30 tests

autopilot run -o . unity.tests.test_panel.PanelWindowButtonsTests

Some of these Unity batch tests finish very quickly because they only have one, two or three tests to run. Other batch tests will take a lot longer because they have 6 to 10 or more tests to run. The return of the terminal command prompt will tell you that the test has finished.

Tests that detect failures will take longer to finish than tests that do not fail. The failure of a test is not an indication of a fault in the hardware, or even of a bug in Unity. At this stage of development the failure might be a result of a fault in the coding of the test itself.

  • Reporting Failure

Reports should be posted here: http://packages.qa.ubuntu.com/qatracker/milestones/246/builds/28425/testcases.

Others who are already pasting results are using http://paste.ubuntu.com/ or http://pastebin.com/ or some other type of paste bin to get their results to the QA Team. We can do the same by pasting the contents of each log file as we run the tests; a pastebinit sketch follows below.
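One convenient way to do this from a terminal is pastebinit, which uploads a file to paste.ubuntu.com and prints the resulting URL. The log file name below is illustrative:

sudo apt-get install pastebinit

pastebinit unity-batch-test.log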

See here for examples of this method: http://packages.qa.ubuntu.com/qatracker/milestones/246/builds/28425/testcases/1466/results


Laptop Testing

  • Laptop Testing is the manual testing of specific things (test cases) mainly related to laptop hardware, using milestone releases of the development version (alphas, betas and release candidates). The goal is to get Ubuntu working well on as many different makes and models of laptop as possible, which means finding out which hardware works straight off the install CD and which hardware needs configuring or is poorly supported.

If you are interested in this kind of testing, head to the laptop testing Wiki page.