Revision 13 as of 2012-11-13 18:25:14


Power Consumption

We have 7 tests; each exercises the system in a particular way, and each test produces 5 results:

  • average current drawn in mA (milli Amps)
  • maximum current drawn in mA
  • minimum current drawn in mA
  • standard deviation (in mA)
  • test duration in seconds

Each test runs either for a specified number of power measurements (which I call "samples") or until a task is completed. For some tests, such as an idle system run for a fixed time, we are interested only in the average current drawn over that time. Other tests, such as I/O activity, run until a specific amount of data has been copied, which lets us see whether we are improving performance at the cost of power consumed. For example, a job may run faster while drawing the same average power, in which case the total energy consumed is less. Alternatively, a job may run faster but draw enough extra power that the total energy consumed is actually greater.
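The faster-but-hungrier trade-off above comes down to simple arithmetic: total energy is average power multiplied by duration. Here is a small sketch of that comparison; the supply voltage, current figures and durations are made-up illustrative values, not measurements from our rig.

```python
# Hypothetical figures to illustrate the trade-off: a faster run can
# consume less total energy even while drawing more current on average.
VOLTAGE = 12.0  # assumed constant supply voltage in volts


def total_energy_joules(avg_current_ma, duration_s, voltage=VOLTAGE):
    """Energy (J) = average power (W) x time (s); power = V x I."""
    return voltage * (avg_current_ma / 1000.0) * duration_s


baseline = total_energy_joules(avg_current_ma=500, duration_s=120)  # 720 J
tuned = total_energy_joules(avg_current_ma=550, duration_s=100)     # 660 J

# The tuned run draws 10% more current but finishes 17% sooner,
# so overall it consumes less energy than the baseline run.
print(baseline, tuned)
```

This is why tracking test duration alongside average current matters: neither number alone tells you whether a change saves energy.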

Each set of results from a test is based on taking multiple samples from a high-precision Fluke multimeter and then calculating the average, minimum, maximum and standard deviation of the samples gathered. The standard deviation gives us an idea of how variable the sampled data is, and hence how confident we can be in the accuracy of the results. For example, a test may produce a suspicious average, but if the standard deviation is high we know the data for that test is too noisy to be trusted.
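Reducing one test's raw samples to the five reported figures can be sketched as follows; the sample readings and the duration are invented for illustration, not real meter output.

```python
import statistics

# Made-up current samples (mA) as they might arrive from the multimeter.
samples_ma = [498.2, 501.7, 499.9, 503.4, 497.6, 500.1]
duration_s = 120.0  # hypothetical test duration

result = {
    "average_ma": statistics.mean(samples_ma),
    "max_ma": max(samples_ma),
    "min_ma": min(samples_ma),
    # Sample (n-1) standard deviation of the gathered readings.
    "stddev_ma": statistics.stdev(samples_ma),
    "duration_s": duration_s,
}

# A stddev that is large relative to the mean flags a noisy run whose
# average should be treated with suspicion.
print(result)
```

In practice you would gather hundreds of samples per test; the reduction step is the same regardless of sample count.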

So, we are mainly interested in the average measurement, as this shows the typical current drawn while running a test. However, it is also useful to track the minimum and maximum values to see the lower and upper bounds, and the standard deviation, which indicates how reliable the results are for that test. Finally, the test duration lets us see whether a test is getting slower or faster each time we run it.