ARMUIandTestHeads


Summary

The goal is to enable the easy creation of test images containing UI/Test heads and benchmarks for use by the hardware vendors and UI developers.

This entails deciding which UI/Test heads and benchmarks will be made available, ensuring they are in the Maverick archive and providing a hassle-free way of creating test images containing any combination of versions of UI and HW enablement components.

We also want to provide the ability to automatically execute benchmarks and record the results.

Rationale

The quality of the final user experience depends on the quality of HW enablement components (provided by hardware/driver vendors), User Interface components (provided by UI and toolkit developers) and of course the integration of the two parts.

Both HW vendors and UI developers need a hassle-free way to test and benchmark their own components independently, and also to test how they integrate with the other party's components.

By providing an easy way to build images containing benchmarks and any combination of stable and latest releases of components from both parties, we make it convenient to track progress, identify and solve issues as early as possible, and showcase the platform.

User stories

  • Vendor engineers want to try the ubuntu-on-arm platform on their hardware/drivers. Using the provided tools they create an image containing a UI/Test head combined with HW enablement components from either the main archive or other sources (eg private PPAs). They use this on their hardware and get a usable Linux system with a UI which they can use as a showcase.
  • The engineers now want to check how well their hardware/drivers perform (with regard to the user experience). They run the provided benchmark suite and get an overall view of the performance of the system. They publish the results of the benchmarks so that they are available to other interested parties from within the project (eg UI developers).
  • The vendor engineers want to try a new version of their HW enablement components. They create a new image containing the updated version while keeping a stable version of the UI components and run the benchmarks again. They compare the new results with older ones to spot any regressions and to verify improvements.
  • The vendor engineers want to try the latest version of the UI components. They create an image containing the latest/unstable version of the UI and run benchmarks. They find that there are serious problems with the latest UI components and report the issue. They can still try out the platform by building images using the stable/working versions of the UI components.
  • The UI/toolkit developers are notified of the issue with their latest version and get the published benchmarks to spot the problem. They (hopefully) identify and fix the cause of the issue and update their components in the archive.

Design

  • The UI/Test heads and benchmarks should test the platform at both a low (driver, library) and a high (user-experience) level. The vendors need the low level tests so that they can easily pin-point regressions in specific components. The vendors need the high-level tests so that they can test the effect of their components on the overall user-experience.
  • The results of the benchmarks should be accompanied by enough versioning information so that vendors/developers can perform meaningful performance comparisons. It would be useful if the results were depicted graphically, so that improvements or regressions could be easily spotted.
  • It should be easy to combine stable and latest versions from both HW enablement components and UI components when creating an image.

Benchmarks

  • Decide which benchmarks to include and make sure a benchmark suite is available in the archive.
  • Work with the QA infrastructure team to create or reuse some framework so that the benchmark results can be saved, displayed and optionally uploaded to a central server.

UI/Test heads

  • Create metapackages/tasks for various UI/Test heads (minimal, Netbook, Chromium OS) in the archive.
  • Work with infrastructure team to find an easy way for the vendors to create images containing arbitrary versions of HW and UI components (not just the latest).
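
As one possible mechanism (not a decision of this spec; the actual image-building workflow is to be worked out with the infrastructure team), specific component versions could be selected with apt pinning when the image is assembled. The hypothetical Python helper below writes an apt preferences file for a given set of package/version pairs; the package names and versions shown are placeholders.

{{{
#!/usr/bin/env python
# Hypothetical sketch: pin specific component versions via apt preferences.
# The real image-building tooling may use a different mechanism entirely.


def pin_stanza(package, version, priority=1001):
    """Return one apt_preferences(5) stanza pinning `package` to `version`.

    A priority above 1000 lets apt install the pinned version even if it is
    older than the candidate version in the archive."""
    return "Package: %s\nPin: version %s\nPin-Priority: %d\n" % (
        package, version, priority)


def write_preferences(pins, path="/etc/apt/preferences.d/test-image-pins"):
    """Write a pinning stanza for every (package, version) pair in `pins`."""
    with open(path, "w") as prefs:
        prefs.write("\n".join(pin_stanza(p, v) for p, v in pins.items()))


if __name__ == "__main__":
    # Placeholder package names and versions -- purely illustrative.
    write_preferences({
        "unity": "0.2.8-0ubuntu1",
        "xserver-xorg-video-omap3": "0.1.1-1",
    }, path="test-image-pins")
}}}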

Implementation

Benchmarks

  • Benchmark list
    • 2D
      • x11perf: simple X11 tests (most are of limited relevance to real-world apps)
        • Make sure it is packaged and working
      • gtkperf: simple GTK+ tests
        • Make sure it is packaged and working
      • render_bench: XRENDER benchmarks
        • Make sure it is packaged and working
      • qgears2: Qt vector drawing benchmarks (using various backends: Image, Render, OpenGL)
        • Make sure it is packaged and working
      • cairo-trace, cairo-perf-trace: allow recording and playback of cairo traces. See: http://cairographics.org/FAQ/#profiling

        • Make sure they are packaged and working
        • Create sample traces so vendors can automatically benchmark against them.
    • 3D
      • glxgears: Just for starters
        • Port to OpenGL ES 2.0 (should be easy)
        • Make sure it is packaged and working
      • Nehegles: http://maemo.org/packages/view/nehegles/

        • Select some of these and turn them into benchmarks.
        • Make sure they are packaged and working.
      • Clutter benchmarks
        • Make sure a Clutter OpenGL ES 2.0 package is available and working.
        • Make sure Clutter benchmarks are packaged and working.
        • Investigate if/how we can get and play back Clutter traces, so we can get performance information for common use cases.
    • Web browsing/JavaScript
  • Benchmark reporting (working with QA infrastructure)
    • Create and package a tool (or reuse an existing one, eg the Phoronix Test Suite) that can run a series of benchmarks and create a report, including detailed package versioning information (a rough sketch follows this list).
    • Investigate the possibility of uploading reports to a central server.
    • Graphical tools for viewing a series of benchmark reports.
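
To make the reporting idea above more concrete, here is a rough Python sketch of what such a tool might do: run a set of benchmark commands, capture their output and wall-clock time, record the installed versions of relevant packages via dpkg-query, and write everything to a JSON report. The benchmark invocations, tracked package list and report format are all assumptions for illustration, not decisions of this spec.

{{{
#!/usr/bin/env python
# Hypothetical benchmark report runner -- a sketch, not the final QA tool.
import json
import subprocess
import time

# Placeholder benchmark invocations; the actual suite and options are still
# to be decided (the gtkperf/cairo-perf-trace arguments below are assumed).
BENCHMARKS = {
    "x11perf-aa10text": ["x11perf", "-aa10text"],
    "gtkperf": ["gtkperf", "-a"],
    "cairo-trace-playback": ["cairo-perf-trace", "sample.trace"],
}

# Packages whose versions should accompany the results (illustrative list).
TRACKED_PACKAGES = ["xserver-xorg-core", "libcairo2", "libgtk2.0-0",
                    "libclutter-1.0-0"]


def dpkg_version(package):
    """Return the installed version of `package` via dpkg-query, or None."""
    proc = subprocess.Popen(["dpkg-query", "-W", "-f=${Version}", package],
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, _ = proc.communicate()
    if proc.returncode != 0:
        return None
    return out.decode("utf-8", "replace").strip()


def run_benchmark(name, command):
    """Run one benchmark command, capturing wall-clock time and raw output."""
    start = time.time()
    proc = subprocess.Popen(command, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)
    output, _ = proc.communicate()
    return {
        "name": name,
        "command": command,
        "returncode": proc.returncode,
        "elapsed_seconds": round(time.time() - start, 2),
        "raw_output": output.decode("utf-8", "replace"),
    }


def main():
    report = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "package_versions": dict((p, dpkg_version(p))
                                 for p in TRACKED_PACKAGES),
        "results": [run_benchmark(n, cmd) for n, cmd in BENCHMARKS.items()],
    }
    with open("benchmark-report.json", "w") as report_file:
        json.dump(report, report_file, indent=2)


if __name__ == "__main__":
    main()
}}}

A report of this shape, carrying the versioning information alongside the raw results, is the kind of input a graphical viewer or a central results server could later consume.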

UI/Test heads

  • UI/Test heads:
    • minimal UI/Test head containing a lightweight window manager, a WebKit-based browser, GTK+, Qt and GStreamer
      • Make sure it is packaged (eg as a task) and working.
    • Ubuntu Netbook UI/Test head with a Unity-based UI (needs OpenGL ES support)
      • Make sure it is packaged (eg as a task) and working.

For other Specs

  • Create a tool (or enhance existing ones) to allow the creation of images containing arbitrary versions of HW and UI components
    • See arm-m-image-building
  • Chromium OS test head (if there is demand)
    • Make sure it is packaged (eg as a task) and working.
    • Requires a new spec.

Test/Demo Plan

  • Use the specified tools/procedures to build images and run sample benchmarks on our own boards.

Unresolved issues

BoF agenda and discussion

Goal

Run the top (UI) layer of various mobile platforms on ubuntu-on-arm.

Benefits

  • Showcase the ubuntu-on-arm platform
  • Test the ubuntu-on-arm platform (what's (not) working, what's missing)
  • Make it easy for vendors to try out and benchmark the platform.

  • gtkperf
  • phoronix
  • webkit
  • Choice of UI test heads and feasibility:
    • Android
    • Chromium OS
    • Limo
    • Meego
    • ubuntu netbook
    • phone profile
  • Feasibility:
    • Some test heads require GLES that we don't currently provide.
    • Can't ship OpenGL for all vendors in a single image (filename clashes); need to allow building a custom private image with private OpenGL bits to test the platform.
  • Vendor needs:
    • Need a way to do comparative benchmarking between machines running these various stacks
    • Need both automated and human benchmarking.
    • What do the hardware vendors actually need from Ubuntu on ARM in order to do their testing?
    • Easy way to combine test head images with rest of the stack.
    • Stable UI test heads (with ability to get new versions easily).
    • Minimal UI test head for basic performance tests (eg Qt or Ubuntu Netbook for 2D).
    • Easy way for vendor to submit performance results.
  • Tracking
    • May be able to extend or reuse part of the ISO tracker to record tests performed against test heads.
    • Close cooperation between Unity/Clutter and driver teams (sharing bugs).
  • How to get performance information:
    • Add instrumentation to Unity, to easily track performance.
    • Get performance information from Clutter (high-level user experience information).
    • Low level instrumentation at driver level.

Actions

  • Minimal images with basic 2D and 3D benchmarks, JavaScript, web rendering.
  • Full test head with Unity.
  • Chromium OS test head if there is a demand for it.
  • Document the action plan and send it to SoC vendors for feedback.


CategorySpec