ARMUIandTestHeads

Summary

The goal is to enable the easy creation of test images containing UI/Test heads and benchmarks for use by the hardware vendors and UI developers.

This entails deciding which UI/Test heads and benchmarks will be made available, ensuring they are in the Maverick archive, and providing a hassle-free way of creating test images containing any combination of versions of the UI and HW enablement components.

We also want to provide the ability to automatically execute benchmarks and record the results.
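
As an illustration of what such automatic execution could look like, here is a minimal sketch in Python. The benchmark command lines, flags and result format are assumptions made for this example, not an interface defined by this specification:

    #!/usr/bin/env python3
    """Illustrative benchmark runner: run each suite, record results as JSON."""
    import json
    import subprocess
    import time

    # Hypothetical command lines; the real ones would come from whichever
    # suites end up on the image (gtkperf, phoronix, webkit tests, ...).
    BENCHMARKS = {
        "gtkperf": ["gtkperf", "-a"],
        "glmark2": ["glmark2", "-b", "build"],
    }

    def run_all(output_path="results.json"):
        results = {}
        for name, cmd in BENCHMARKS.items():
            start = time.time()
            proc = subprocess.run(cmd, capture_output=True, text=True)
            results[name] = {
                "elapsed_seconds": round(time.time() - start, 2),
                "exit_status": proc.returncode,
            }
        with open(output_path, "w") as f:
            json.dump(results, f, indent=2)
        return results

    if __name__ == "__main__":
        print(run_all())

Recording wall-clock time and exit status is deliberately simplistic; real suites report their own scores, which a runner would parse per suite.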

Release Note

Is this needed?

Rationale

The quality of the final user experience depends on the quality of HW enablement components (provided by hardware/driver vendors), User Interface components (provided by UI and toolkit developers) and of course the integration of the two parts.

Both HW vendors and UI developers need a hassle-free way to test and benchmark their components independently, as well as their integration with the other parts.

By providing an easy way to build images containing benchmarks and any combination of stable and latest releases of components from both parties, we make it convenient to track progress, identify and solve issues as early as possible, and showcase the platform.

User stories

Vendor engineers want to try the ubuntu-on-arm platform on their hardware/drivers. Using the provided tools they create an image containing a UI/Test head combined with HW enablement components from either the main archive or other sources (e.g. private PPAs). They use this on their hardware and have a usable Linux system with a UI which they can use as a showcase.
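
Purely as an illustration of what the provided tools could accept, the sketch below composes apt sources and a package list for such an image. The tool interface, the PPA address and all package names are hypothetical:

    # Illustrative only: compose inputs for a hypothetical image-build tool.
    STABLE_UI_PACKAGES = ["ubuntu-netbook", "ui-benchmarks"]  # placeholder names

    def image_config(hw_ppa=None, ui_packages=STABLE_UI_PACKAGES):
        """Return (apt sources, packages) for one test image."""
        sources = ["deb http://ports.ubuntu.com/ubuntu-ports maverick main universe"]
        if hw_ppa:
            # e.g. a vendor's private PPA carrying the HW enablement bits
            sources.append("deb https://private-ppa.example.net/%s maverick main" % hw_ppa)
        # placeholder HW enablement package alongside the chosen UI head
        return sources, list(ui_packages) + ["vendor-hw-enablement"]

    sources, packages = image_config(hw_ppa="vendor/hw-enablement")
    print("\n".join(sources))
    print(" ".join(packages))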

The engineers now want to check how well their hardware/drivers perform with regard to the user experience. They run the provided benchmark suite and get an overall view of the performance of the system. They publish the benchmark results so that they are available to other interested parties within the project (e.g. UI developers).
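
Publishing could be as simple as posting the recorded JSON to a shared results service; a sketch, with the endpoint URL invented for illustration:

    # Illustrative only: upload a results file to a hypothetical results service.
    import urllib.request

    def publish(results_path, url="http://results.example.org/submit"):
        with open(results_path, "rb") as f:
            data = f.read()
        req = urllib.request.Request(
            url, data=data, headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return resp.status  # e.g. 200 on success

    # publish("results.json")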

The vendor engineers want to try a new version of their HW enablement components. They create a new image containing the updated version while keeping a stable version of the UI components and run the benchmarks again. They compare the new results with older ones to spot any regressions and to verify improvements.
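
The comparison step could be mechanised; a minimal sketch, assuming the JSON result format from the runner sketch above and an arbitrarily chosen 10% threshold:

    # Illustrative only: flag benchmarks that got slower between two runs.
    import json

    def find_regressions(old_path, new_path, threshold=0.10):
        with open(old_path) as f:
            old = json.load(f)
        with open(new_path) as f:
            new = json.load(f)
        regressions = {}
        for name, result in new.items():
            if name not in old:
                continue  # new benchmark, nothing to compare against
            before = old[name]["elapsed_seconds"]
            after = result["elapsed_seconds"]
            if before > 0 and (after - before) / before > threshold:
                regressions[name] = (before, after)
        return regressions

    # e.g. find_regressions("results-v1.json", "results-v2.json")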

The vendor engineers want to try the latest version of the UI components. They create an image containing the latest/unstable version of the UI and run benchmarks. They find that there are serious problems with the latest UI components and report the issue. They can still try out the platform by building images using the stable/working versions of the UI components.

The UI/toolkit developers are notified of the issue with their latest version and use the published benchmark results to pinpoint the problem. They (hopefully) identify and fix the cause of the issue and update their components in the archive.

Assumptions

Design

You can have subsections that better describe specific parts of the issue.

Implementation

This section should describe a plan of action (the "how") to implement the changes discussed. Could include subsections like:

UI Changes

Should cover changes required to the UI, or specific UI that is required to implement this.

Code Changes

Code changes should include an overview of what needs to change, and in some cases even the specific details.

Migration

Include:

  • data migration, if any
  • redirects from old URLs to new ones, if any
  • how users will be pointed to the new way of doing things, if necessary.

Test/Demo Plan

It's important that we are able to test new features, and demonstrate them to users. Use this section to describe a short plan that anybody can follow that demonstrates the feature is working. This can then be used during testing, and to show off after release. Please add an entry to http://testcases.qa.ubuntu.com/Coverage/NewFeatures for tracking test coverage.

This need not be added or completed until the specification is nearing beta.

Unresolved issues

This should highlight any issues that should be addressed in further specifications, not problems with the specification itself, since any specification with problems cannot be approved.

BoF agenda and discussion

Goal

Run the top (UI) layer of various mobile platforms on ubuntu-on-arm.

Benefits

  • Showcase the ubuntu-on-arm platform.
  • Test the ubuntu-on-arm platform (what's (not) working, what's missing).
  • Make it easy for vendors to try out and benchmark the platform.

  • Benchmarks:
    • gtkperf
    • phoronix
    • webkit
  • Choice of UI test heads and feasibility:
    • Android
    • Chromium OS
    • Limo
    • Meego
    • ubuntu netbook
    • phone profile
  • Feasibility:
    • Some test heads require GLES that we don't currently provide.
    • Can't ship OpenGL for all vendors in a single image (filename clashes); need to allow building a custom private image with private OpenGL bits to test the platform.
  • Vendor needs:
    • Need a way to do comparative benchmarking between machines running these various stacks
    • Need both automated and human benchmarking.
    • What do the hardware vendors actually want to get from Ubuntu on ARM in order to do their testing?
    • Easy way to combine test head images with rest of the stack.
    • Stable UI test heads (with ability to get new versions easily).
    • Minimal UI test head for basic performance tests (e.g. Qt or ubuntu netbook for 2D).
    • Easy way for vendor to submit performance results.
  • Tracking
    • May be able to extend or reuse part of the ISO tracker to record tests performed against test heads.
    • Close cooperation between Unity/Clutter and driver teams (sharing bugs).
  • How to get performance information:
    • Add instrumentation to Unity, to easily track performance (see the timing sketch after this list).
    • Get performance information from Clutter (high-level user experience information).
    • Low level instrumentation at driver level.
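
None of the instrumentation points above are specified yet; purely as an illustration of the kind of high-level signal they could emit, here is a frame-time tracker sketched in Python (real instrumentation in Unity or Clutter would live in their own code; all names here are invented):

    # Illustrative only: collect frame times and summarise them, the kind of
    # data UI and driver teams could share when chasing a regression.
    import time

    class FrameTimer:
        def __init__(self):
            self.samples = []
            self._last = None

        def frame(self):
            """Record the time since the previous call as one frame."""
            now = time.time()
            if self._last is not None:
                self.samples.append(now - self._last)
            self._last = now

        def summary(self):
            if not self.samples:
                return {}
            ordered = sorted(self.samples)
            return {
                "frames": len(ordered),
                "mean_ms": 1000 * sum(ordered) / len(ordered),
                "p95_ms": 1000 * ordered[int(0.95 * (len(ordered) - 1))],
            }

    timer = FrameTimer()
    for _ in range(100):
        timer.frame()
        time.sleep(0.016)  # stand-in for rendering one frame
    print(timer.summary())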

Actions

  • Minimal images with basic 2D and 3D benchmarks, JavaScript, web rendering.
  • Full test head with Unity.
  • Chromium OS test head if there is a demand for it.
  • Document the action plan and send it to SoC vendors for feedback.


CategorySpec
