Testing

=== Power Consumption ===

We have 7 tests; each exercises the system in a particular way, and each
test produces 5 results (see the sketch after this list):

 . average current drawn in mA (milliamps)
 . maximum current drawn in mA
 . minimum current drawn in mA
 . standard deviation (in mA)
 . test duration in seconds
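
These five results could be collected into a simple record. Here is a
minimal Python sketch; the field names are illustrative assumptions, not
the harness's actual data layout:

{{{#!python
from dataclasses import dataclass

# Hypothetical record for one test's results; the field names are
# illustrative, not the harness's actual data layout.
@dataclass
class PowerResult:
    average_ma: float  # average current drawn, in mA
    maximum_ma: float  # maximum current drawn, in mA
    minimum_ma: float  # minimum current drawn, in mA
    stddev_ma: float   # standard deviation of the samples, in mA
    duration_s: float  # test duration, in seconds
}}}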

Each test runs either for a specified number of power measurements
(which I call "samples") or until a task is completed. Some tests, such
as the idle-system test, run for a fixed time; for those we are just
interested in the average current drawn over that time. Other tests,
such as the I/O activity test, run until a specific amount of data has
been copied; this lets us see whether we are improving performance at
the cost of power consumed. For example, a job may run faster *and*
draw the same current per second, which means the total energy consumed
is lower. Or a job may run faster but draw enough extra current that
the total energy consumed is higher.
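
To make that trade-off concrete, here is a small hypothetical Python
example. The numbers are invented, and it assumes a roughly constant
supply voltage, so that average current multiplied by duration is
proportional to the total energy consumed:

{{{#!python
def total_charge_mas(average_ma, duration_s):
    """Charge drawn in mA-seconds; at a fixed supply voltage this is
    proportional to the total energy consumed."""
    return average_ma * duration_s

# Invented numbers: the second run draws more current per second but
# finishes sooner, so its total energy consumption is lower.
baseline = total_charge_mas(average_ma=520.0, duration_s=120.0)  # 62400 mA*s
patched = total_charge_mas(average_ma=560.0, duration_s=100.0)   # 56000 mA*s
print("baseline:", baseline, "patched:", patched)
}}}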

Each set of results from a test is based on taking multiple samples
from a high-precision Fluke multimeter and then calculating the
average, minimum, maximum and standard deviation of the samples
gathered. The standard deviation gives us an idea of how variable the
sampled data is, and hence how much confidence we can place in the
results. For example, a test may produce a poor average result, but if
we can see that the standard deviation is high, we know the data for
that test is not very reliable.
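
A sketch of that calculation in Python, using the standard library's
statistics module (this illustrates the reduction described above, not
the harness's actual code):

{{{#!python
import statistics

def summarise(samples_ma, duration_s):
    """Reduce a list of current samples (in mA) to the five reported
    results."""
    return {
        "average_ma": statistics.mean(samples_ma),
        "maximum_ma": max(samples_ma),
        "minimum_ma": min(samples_ma),
        "stddev_ma": statistics.stdev(samples_ma),  # sample standard deviation
        "duration_s": duration_s,
    }

# One noisy outlier inflates the standard deviation, flagging the
# average for this run as less trustworthy.
print(summarise([498.0, 503.5, 501.2, 497.8, 560.1], duration_s=60.0))
}}}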

So we are mainly interested in the average measurement, as this shows
the average current drawn while running a test. However, it is also
useful to track the minimum and maximum values to see what the lower
and upper bounds are, and the standard deviation, which indicates how
reliable the results are for that test. Finally, we have the duration
of the test, which lets us see whether a test is slowing down or
speeding up from one run to the next.
