ErrorTracker

Differences between revisions 2 and 68 (spanning 66 versions)
Revision 2 as of 2011-07-05 10:31:57
Size: 3276
Editor: mpt
Comment: rationale and cases to sketch
Revision 68 as of 2012-06-14 13:54:26
Size: 26913
Editor: mpt
Comment: + "Accessing previous reports"
Deletions are marked like this. Additions are marked like this.
Line 1: Line 1:
## page was renamed from CrashTracker

<<TableOfContents()>>
Line 3: Line 7:
To help Ubuntu reach a standard of quality similar to competing operating systems, developers should spend less time asking for information on individual bug reports, and more time fixing those bugs that affect users most often.

To determine which bugs those are, we should collect crash reports from as many people as possible, before and after release. This means ''not'' requiring them to sign in to any Web site, enter any text, submit hundreds of megabytes of data, receive e-mail, or do anything more complicated than clicking a button. An automated system should then analyze which problems are caused by the same bug. If developers need more information about a particular kind of crash, they should be able to configure the system to automatically retrieve that information when the problem next occurs.
To help Ubuntu reach a standard of quality similar to competing operating systems, developers need to know the answers to two questions:

 1. '''How reliable is Ubuntu right now?''' (Compared with yesterday, compared with the previous version, or compared with what it would be if everyone had installed every update.)

 2. '''What’s the best thing I can do right now to help improve its quality?'''

We can better answer both of those questions if we collect '''all the information we need''', for as '''many types of problems''' as we can, from '''a large representative sample''' of people. This means not requiring people to sign in to any Web site, enter any text, submit hundreds of megabytes of data, receive e-mail, or do anything more complicated than clicking a button. It means collecting problem reports both before and after release. And it means analyzing and bucketing problems automatically, with developers able to configure the system to automatically retrieve more information about a particular kind of problem when it next occurs.

Statistics collected by Microsoft show that a bug reported by their Windows Error Reporting system “is 4.5 to 5.1 times more likely to be fixed than a bug reported directly by a human”, that fixing the right 1 percent of bugs addresses 50 percent of customer issues, and that fixing 20 percent of bugs addresses 80 percent of customer issues.
Line 9: Line 17:
=== Prior art ===

Windows Error Reporting is perhaps the most advanced crash reporting system. As described in [[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.148.716&rep=rep1&type=pdf|K Glerum, K Kinshumann, S Greenberg, et al.: “Debugging in the (very) large: Ten years of implementation and experience”]] (PDF), it uses progressive data collection where developers can request more than the “minidump” if necessary to understand particular problems, and automatically notifies users if a software update fixes their problem. Hardware vendors can [[http://msdn.microsoft.com/en-us/windows/hardware/gg487440|see crash reports specific to their hardware]].

{{attachment:windows-app-progress.gif}} {{attachment:windows-app.gif}} {{attachment:windows-os.png}}

Mac OS X has a Crash``Reporter system that submits crash data to Apple. As described in [[http://developer.apple.com/library/mac/technotes/tn2004/tn2123.html|Technical Note TN2123]], “There is currently no way for third party developers to access the reports submitted via CrashReporter”.
The client interface for the error tracker also serves a purpose which is less important for developers, but ''more'' important for end users: '''explaining why something weird just happened'''. In previous Ubuntu release versions, when most programs crashed there was no explanation of why the window had disappeared.

== Client design ==

<<Anchor(settings)>>
=== Privacy settings ===

The “Security & Privacy” panel in Ubuntu 12.10 and later should contain a “Diagnostics” tab for error and metrics collection, expanding on the tab in Ubuntu 12.04.

{{attachment:settings-privacy-diagnostics.png}}

In Ubuntu 11.10 and earlier, a standalone “Privacy” window should be backported containing equivalent controls for just the error collection.

{{attachment:settings-privacy-old-versions.png}}

In both cases, the “People using this computer can…” and following controls should be insensitive whenever you have not unlocked them as an administrator.

In a new Ubuntu installation (or an upgrade to a version that introduces these settings), “Send error reports to Canonical” should be checked by default. But “Send a report automatically if a problem prevents login” and “Send occasional system information to Canonical”, when present, should be unchecked by default.

<<Anchor(error)>>
=== When there is an error ===

When there is an error that prevents login, and “Send a report automatically if a problem prevents login” is checked, the error should be sent automatically.

As soon as possible after any other type of error occurs, an alert should appear with text and buttons depending on the situation. The Esc and Enter keys should ''not'' do anything in these alerts, because you may have been just about to press one of those in the program that has the problem.

|| ||<v>'''You are an admin, or error reporting is allowed'''||<v>'''Your admin has blocked error reporting'''||<v>'''Implemented in Ubuntu'''||
||<^><<Anchor(os-crash)>>'''An OS package crashes''' for the first time this version<<BR>>,,Test case: sudo pkill -SEGV zeitgeist,,||<^>{{attachment:os-error-reportable.png}}||<^>{{attachment:os-error-unreportable.png}}||12.04||
||<^>'''An OS package crashes''' a subsequent time||<^>{{attachment:os-error-reportable-subsequent.png}}||<^>{{attachment:os-error-unreportable-subsequent.png}}||12.04||
||<-3 style="border:none;">“Ignore future problems of this type” means ignore future crashes of the same version of the same package.||
||<^><<Anchor(thread)>>'''An application thread crashes''' for the first time this version||<^>{{attachment:app-thread-reportable.png}}||<(|2>(no alert shown)||<(|2>(in 12.04, shows “closed unexpectedly” error instead?)||
||<^>'''An application thread crashes''' a subsequent time||<^>{{attachment:app-thread-reportable-subsequent.png}}||
||<-3 style="border:none">For most other error types, the alert shouldn’t offer to be silent next time — because it still needs to appear to explain what’s happened, and (in the application hang case) to let you stop/relaunch the application:||
||<^><<Anchor(app-requested)>>'''An application has a developer-specified error'''||<^>{{attachment:app-requested-reportable.png}}||<^>{{attachment:app-requested-unreportable.png}}||
||<^><<Anchor(app-hang)>>'''An application hangs'''<<BR>>,,Test case: eog & sleep 5 && pkill -STOP eog && sleep 20 && pkill -CONT eog,,||<^>{{attachment:app-hang-reportable.png}}||<^>{{attachment:app-hang-unreportable.png}}||(targeted for 12.10)||
||<^><<Anchor(app-crash)>>'''An application crashes'''<<BR>>,,Test case: eog & pkill -SEGV eog,,||<^>{{attachment:app-crash-reportable.png}}||<^>{{attachment:app-crash-unreportable.png}}||12.04||
||<^><<Anchor(kernel-oops)>>'''Ubuntu restarts after a kernel oops'''||<^>{{attachment:kernel-oops-reportable.png}}||<^>{{attachment:kernel-oops-unreportable.png}}||(targeted for 12.04 SRU)||
||<^><<Anchor(package-error)>>'''A package fails to install or update'''||<^>{{attachment:package-error-reportable.png}}||<^>{{attachment:package-error-unreportable.png}}||(targeted for 12.04 SRU)||
||<^><<Anchor(debconf)>><<Anchor(debconf-string)>>'''A Debconf “string” prompt'''||<^>{{attachment:debconf-string-reportable.png}}||<^>{{attachment:debconf-string-unreportable.png}}||(targeted for 12.10)||
||<^><<Anchor(debconf-boolean)>>'''A Debconf “boolean” prompt'''||<^>{{attachment:debconf-boolean-reportable.png}}||<^>{{attachment:debconf-boolean-unreportable.png}}||(targeted for 12.10)||
||<^><<Anchor(debconf-select)>>'''A Debconf “select” prompt'''||<^>{{attachment:debconf-select-reportable.png}}||<^>{{attachment:debconf-select-unreportable.png}}||(targeted for 12.10)||
||<^><<Anchor(debconf-multiselect)>>'''A Debconf “multiselect” prompt'''||<^>{{attachment:debconf-multiselect-reportable.png}}||<^>{{attachment:debconf-multiselect-unreportable.png}}||(targeted for 12.10)||
||<^><<Anchor(debconf-note)>>'''A Debconf “note” prompt'''||<^>{{attachment:debconf-note-reportable.png}}||<^>{{attachment:debconf-note-unreportable.png}}||(targeted for 12.10)||
||<^><<Anchor(debconf-text)>>'''A Debconf “text” prompt'''||<^>{{attachment:debconf-text-reportable.png}}||<^>{{attachment:debconf-text-unreportable.png}}||(targeted for 12.10)||
||<^><<Anchor(debconf-password)>>'''A Debconf “password” prompt'''||<^>{{attachment:debconf-password-reportable.png}}||<^>{{attachment:debconf-password-unreportable.png}}||(targeted for 12.10)||
||<-3 style="border:none">But with non-application software crashing, we can’t tell programmatically whether it’s something you need to care about or not. So if you aren’t going to report the errors, we might as well let you ignore future errors:||
||<^><<Anchor(non-app-crash)>>'''Third-party non-application software crashes''' for the first time this version<<BR>>,,Test case: sh -c 'kill -SEGV $$',,||<^>{{attachment:nas-crash-reportable.png}}||<(|2>(no alert shown)||12.04||
||<^>'''Third-party non-application software crashes''' a subsequent time||<^>{{attachment:nas-crash-reportable-subsequent.png}}||12.04||
||<-3 style="border:none">For all cases where the “Send an error report to help fix this problem” checkbox is present, its state should persist across errors and across Ubuntu sessions.||
||<^ style="border:none"><<Anchor(details)>>If you choose “Show Details”, it should change to “Hide Details” while a text field containing the error report appears below the primary text.<<BR>><<BR>>If necessary, a spinner and the text “Collecting information…” should appear centered inside the text field while the information is collected (other than the process name and version, which should appear instantly), pausing whenever the collection system is waiting for you to answer any questions.||{{attachment:app-crash-reportable-details.png}}||
||<-3 style="border:none">If you choose to send an error report, the alert should disappear immediately. Data should be collected (if it hasn’t been already), and reports should be sent in the background, with ''no'' progress or success/failure feedback. If you are not connected to the Internet at the time, reports should be queued. Any queued reports should be sent when you next agree to send an error report while online.||
||<^ style="border:none">If you are using a pre-release version of Ubuntu, and the error report matches an existing Launchpad bug report, a further alert box should appear explaining its status and letting you open the bug report.||{{attachment:bug-report.png}}<<BR>>,,Enter = “OK”,,|| ||(targeted for 12.10)||

'''''Future work:''' Ensure that if there is a delay in displaying a crash, we adjust the text of the dialog to reflect this; for example, if X crashes and the user has to log in again or reboot the computer before the alert can be shown.''

'''''Future work:''' If a software update is known to fix the problem, replace the primary alert with [[SoftwareUpdates#alert|the software update alert]] (or progress window, depending on the update policy), with customized primary text. Or point them at a web page (not a wiki page!) with details if a workaround exists, but no fix is available yet.''

'''''Future work:''' Automate the communication with the user to facilitate things like leak detection in subsequent runs, without requiring additional interaction. Our current process requires us to ask people who are subscribed to the bug to try a specially-instrumented build, with a traditionally very long feedback loop between the developer and the bug subscribers. We should make it entirely automatic: just wait for the next user who sees the bug to click one "yes, I'd like to help make this product better" button.''

<<Anchor(multiple)>>
=== When there are multiple simultaneous errors ===

To guard against the case where multiple errors of the same type cause a flood of alert boxes, there should be '''aggregate alert boxes''' for the two most likely cases, internal errors and application crashes.

If an alert box for a single error is open and unfocused, when another error of the same type happens, that alert box should morph into the aggregate version.

||<^>'''Multiple OS packages crash'''||<^>{{attachment:os-error-reportable-multiple.png}}|| ||(?)||
||<^>'''Multiple applications crash'''||<^>{{attachment:app-crash-reportable-multiple.png}}|| ||(?)||

In these cases, the “Show Details” box should show details of all the errors, with a separator between them.

<<Anchor(metrics)>>
=== Invitation for metrics collection ===

The ''first'' time only that an administrator responds to an error alert, a second alert should appear inviting them to opt in to metrics collection. (The “Esc” key should activate “Don’t Send” in this alert, but the “Enter” key should not do anything.)

{{attachment:privacy-settings-alert.png}}

The “Privacy…” button should open System Settings to the Privacy panel. Choosing “Send” should be equivalent to checking “Send occasional system information to Canonical” in the Privacy settings.

== Client implementation ==

The apport client will write a .upload file alongside a .crash file to indicate that the crash should be sent to the crash database. A small C daemon (currently "whoopsie", previously "reporterd") will set up an inotify watch on the /var/crash directory, and any time one of these .upload files appears, it will upload the corresponding .crash file. It will do this if and only if there is an active Internet connection, as determined by watching the NetworkManager DBus API for connectivity events; otherwise it will add the report to a queue for later processing.
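
The daemon itself is written in C, but the watch-and-upload flow can be sketched in Python. The sketch below is illustrative only: it uses the pyinotify module, and is_online() is a placeholder for the NetworkManager D-Bus connectivity check described above.

{{{#!python
# Sketch of the whoopsie watch-and-upload flow (the real daemon is C).
import os
import pyinotify

CRASH_DIR = "/var/crash"
queue = []                      # .crash files waiting for connectivity


def is_online():
    # Placeholder: the real daemon watches the NetworkManager D-Bus API
    # for connectivity-state changes rather than polling.
    return True


def upload(crash_file):
    # Placeholder for the HTTP submission described further below.
    print("uploading", crash_file)


class UploadMarkerHandler(pyinotify.ProcessEvent):
    def process_IN_CLOSE_WRITE(self, event):
        if not event.pathname.endswith(".upload"):
            return
        crash_file = event.pathname[:-len(".upload")] + ".crash"
        if not os.path.exists(crash_file):
            return
        if is_online():
            upload(crash_file)
        else:
            queue.append(crash_file)   # retried when connectivity returns


wm = pyinotify.WatchManager()
wm.add_watch(CRASH_DIR, pyinotify.IN_CLOSE_WRITE)
pyinotify.Notifier(wm, UploadMarkerHandler()).loop()
}}}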

We will ensure NetworkManager brings up the interfaces as early as possible, to enable us to file crash reports during boot.

This needs to be a daemon, rather than another path of the apport client code, to account for there not being an Internet connection at the time of the crash and for crashes during boot, when we cannot assume the user will get back to a known-good state to file the report.

The canonical example here is the scenario posed in Microsoft’s Windows Error Reporting paper, where a piece of malware was causing the core desktop application (explorer.exe) to crash. They were still able to receive crash reports, as their client software still submitted reports very early on in the boot process.

The apport crash file will be parsed into an intermediate data structure (currently a GHashTable), with the core dump stripped out, and then converted into BSON to be transmitted in an HTTP POST operation. The server will reply with a UUID for subsequent operations and, optionally, a command for further action. Initially, this will just be a command to upload the core dump.

A new field is being added to the apport crash file, StacktraceAddressSignature. The server will check for this field, and if it already has a retraced core dump generated from the same signature, it will reply with just the UUID of the crash report entry in the database, indicating that a core dump need not be submitted.

If, however, the server does reply with a request to upload the core dump, it will be sent as zlib compressed data in an HTTP POST operation.

The URLs for posting will be of the form:
 - http://crashes.ubuntu.com/submit
 - http://crashes.ubuntu.com/550e8400-e29b-41d4-a716-446655440000/submit-core
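
The submission flow can be sketched in Python as follows. This is illustrative only: the reply format (a UUID, optionally followed by a CORE token) and the use of the requests and pymongo bson modules are assumptions rather than a description of the actual client code.

{{{#!python
# Sketch of the two-step submission described above.
import zlib

import bson                      # pymongo's BSON codec (assumed here)
import requests                  # assumed HTTP library, for brevity
from apport.report import Report

SUBMIT_URL = "http://crashes.ubuntu.com/submit"
CORE_URL = "http://crashes.ubuntu.com/%s/submit-core"


def submit(crash_path):
    report = Report()
    with open(crash_path, "rb") as f:
        report.load(f)

    data = dict(report)
    core = data.pop("CoreDump", None)        # the core dump is stripped out

    # First POST: everything except the core dump, encoded as BSON.
    reply = requests.post(SUBMIT_URL, data=bson.BSON.encode(data)).text.split()
    uuid = reply[0]

    # Second POST, only if asked for: the zlib-compressed core dump. If the
    # server already has a retrace for this StacktraceAddressSignature, it
    # replies with the UUID alone and the core is never sent.
    if core is not None and "CORE" in reply[1:]:
        requests.post(CORE_URL % uuid, data=zlib.compress(core))
    return uuid
}}}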

Crash reports will be cleaned up after 14 days, since the system may never be connected to the Internet and unsent reports should not accumulate indefinitely.

If the reporter daemon crashes, it will write a crash file like any other application. Its upstart job will have the respawn flag set, with a respawn limit put in place so that it does not respawn indefinitely.

If the reporter daemon moves to using apport-unpack to process the crash files, it should gracefully handle -ENOSPC.

Crash reports for applications that are not part of packages in the Ubuntu archive will also be handled. These will not be retraced, but they will be collected for statistical analysis. This removes the "the problem cannot be reported" dialog in Apport.

We will add an Origin and possibly a Site field to the apport reports, using the python-apt candidate.origins interface. This will allow us to answer questions like what percentage of crashes are coming from PPAs. More importantly, it will let us focus reports on packages from a particular PPA, like the unity-testing one.
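
For example, the candidate origin of a package can be read with python-apt roughly as follows (the package name is only an example):

{{{#!python
# Sketch: reading Origin/Site information for the apport report fields above.
import apt

cache = apt.Cache()
candidate = cache["unity"].candidate
for origin in candidate.origins:
    # origin.origin is e.g. "Ubuntu" for the archive, or an LP-PPA-... value
    # for a PPA; origin.site is the host the package would be fetched from.
    print(origin.origin, origin.site, origin.archive, origin.component)
}}}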

== Accessing previous reports ==

Choosing “Show Previous Reports” in [[#settings|the settings interface]] should open a Web page listing those reports.

{{attachment:previous-reports.png}}

To avoid end users getting lost in developer material, the page should have no global navigation.

To avoid privacy problems, it should be impossible to share the URL of the page. ''How?''

Error reports should be listed in the order they were received, newest first, defaulting to the newest 50. The date received should link to the individual report.

If there are from 1 to 50 reports, the batch count should read only “Showing all {number}”, and there should be no batch navigation.

{{attachment:previous-reports-1-batch.png}}

If there are no reports at all, there should be no batch count, navigation, or table — just an explanatory sentence.

{{attachment:previous-reports-none.png}}
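
A minimal sketch of the batch-count logic, independent of any web framework; the wording for the multi-batch and empty cases is an assumption, since only the single-batch wording is specified above:

{{{#!python
# Sketch of the batch heading and navigation decision for the reports page.
BATCH_SIZE = 50


def batch_heading(total, offset=0):
    """Return (heading text, whether batch navigation is needed)."""
    if total == 0:
        return "You have not sent any error reports.", False   # illustrative wording
    if total <= BATCH_SIZE:
        return "Showing all %d" % total, False
    first = offset + 1
    last = min(offset + BATCH_SIZE, total)
    return "Showing %d-%d of %d" % (first, last, total), True   # illustrative wording
}}}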

== Server design ==
[[/ServerArchitecture]] has additional details.

We will use Robert Collins’ oops-repository as the foundation for our crash database. It has been suggested that this can meet Launchpad’s crash reporting requirements, scaling to a high volume of reports (e.g. 1M/day). We will also use the OOPS dictionary format for our crashes.

This will make it easier to integrate with Launchpad’s longer-term plan to offer this as a service for all projects. Launchpad’s offering may be implemented as one big Cassandra cluster in a multi-tenant fashion, or on a per-project basis, feeding to an API.

oops-repository will also provide the API for interacting with the database. This will include operations to post a new crash and potentially ask for more information, upload additional information (such as the core dump), get the full data for a crash out (a privileged operation), and update an existing crash report (a partially privileged operation) with the retraced data.
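
Purely as an illustration of that operation set, a thin client-side wrapper might take the following shape. These names are hypothetical and are not oops-repository's actual API:

{{{#!python
# Hypothetical shape only; not oops-repository's real interface.
class CrashDatabaseClient:

    def submit(self, report_dict):
        """Post a new crash; return (uuid, list of extra items the server wants)."""

    def attach(self, uuid, name, data):
        """Upload additional information, such as the core dump."""

    def get_report(self, uuid):
        """Fetch the full data for a crash (a privileged operation)."""

    def add_retrace(self, uuid, retraced_report):
        """Update an existing crash with retraced data (partially privileged)."""
}}}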

We will build a small Django web user interface for management functions on top of this API. The initial implementation will not allow regular developers to access the crash data, as we will not have time in this cycle to address the security concerns around this. Canonical IS will be the interim arbiter of who is able to access this system, inclusive of at least the release manager.

We will also evaluate Mozilla’s Socorro, to see if it requires less work to meet our longer-term needs, but this will be done as time allows.

=== Retracing ===

When a new core dump is submitted to the crash database, it will be written to a SAN and the UUID will be added to a RabbitMQ queue for the matching architecture. The queue will also be written to Cassandra, in case the RabbitMQ service fails.

Retracing daemons for each architecture will pull UUIDs off their respective RabbitMQ queues, get the core dump for the UUID from Cassandra, then feed it through apport-retrace.

When a complete trace is generated, it will be added as a row in the crash column family for the relevant UUID. It will also be added to an index column family where the key is the crash signature (StacktraceAddressSignature) and the value is the UUID in the crash column family. In the future, we may expand this to a more complex bucketing algorithm, as necessary.
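
A rough Python sketch of one unit of retracer work follows. The functions passed in are placeholders for the RabbitMQ consume and the Cassandra reads and writes described above, and the apport-retrace invocation is simplified (no sandbox or cache options shown):

{{{#!python
# Sketch of a per-architecture retracer processing one queued crash.
import os
import subprocess
import tempfile


def retrace_one(uuid, signature, get_report_with_core, store_trace, store_index):
    workdir = tempfile.mkdtemp(prefix="retrace-")
    crash_path = os.path.join(workdir, uuid + ".crash")
    output_path = os.path.join(workdir, uuid + ".retraced")

    # Placeholder: rebuild an apport report file containing the core dump
    # fetched from Cassandra for this UUID.
    with open(crash_path, "wb") as f:
        f.write(get_report_with_core(uuid))

    # "-o" writes the retraced report to a new file instead of modifying
    # the original.
    subprocess.check_call(["apport-retrace", "-o", output_path, crash_path])

    with open(output_path, "rb") as f:
        retraced = f.read()

    # One row in the crash column family, plus an index row keyed on the
    # StacktraceAddressSignature so that duplicates can be bucketed together.
    store_trace(uuid, retraced)
    store_index(signature, uuid)
}}}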

The retracing daemon systems will each keep a large cache of the debug symbol packages.

== Future work ==

Upstart has inotify job support on its roadmap. If this is implemented, it may allow us to move from an always-running daemon to something spawned by upstart itself as needed.

The system could be designed either with one single central server instance, to which all error collecting tools for all projects submit data, or could be distributed to separate server instances for each project. There are pros and cons to each approach and it's unclear which is best. Having multiple servers provides flexibility, which could be particularly important for private project use cases, and might make it easier to roll out project-specific customizations or configurations.

Eventually, retracing will be moved entirely into the crash database and provided as a web service for Launchpad to consume. This will remove the need for submitting core dumps to Launchpad at all.

Launchpad will be mined for bugs that share the same signature as crashes in the database. These will be linked into the crash. Once this is in place, oops-repository will be modified to provide an "update available that fixes this issue" response when the respective bug is closed by an upload.

We will investigate using Datastax's Brisk/Enterprise with Pig or Hive to query over existing crash reports.

== Hardware information ==

Upon first successful connection to the Internet, the system will send a basic hardware profile, keyed on a SHA512 of the system UUID and a SHA512 of the DMI tables themselves.
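
A sketch of deriving those two keys follows. Reading the raw tables via dmidecode is an assumption here (any source of the DMI blob would do), and both sources require root:

{{{#!python
# Sketch: the two SHA512 hardware-profile keys described above.
import hashlib
import os
import subprocess
import tempfile


def system_uuid_key():
    with open("/sys/class/dmi/id/product_uuid", "rb") as f:
        return hashlib.sha512(f.read().strip()).hexdigest()


def dmi_tables_key():
    # Dump the raw DMI tables with dmidecode and hash the result.
    tmpdir = tempfile.mkdtemp()
    dump = os.path.join(tmpdir, "dmi.bin")
    subprocess.check_call(["dmidecode", "--dump-bin", dump])
    with open(dump, "rb") as f:
        return hashlib.sha512(f.read()).hexdigest()
}}}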

This information will be submitted to one of the existing hardware databases. Queries will be possible across the crash database and hardware database. For example, it may be desirable to know what the top compiz crashes are for a particular piece of graphics hardware.

== Constant measurement ==

We will follow the “if it moves, measure it” principle from Etsy, and will employ the Twisted port of their popular StatsD daemon for collecting metrics.

Some examples of data points we may want to capture:
 - How long does it take to submit a crash?
 - How long does it take to retrace a crash?
 - The queue size of the retracer architecture pools.
 - The number of rows in each ColumnFamily.
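
As a sketch, such data points can be emitted over the plain StatsD UDP protocol. The production system would use the Twisted StatsD port; the host, port, and metric names below are illustrative:

{{{#!python
# Sketch: sending timings and gauges using the StatsD wire format over UDP.
import socket

STATSD_ADDR = ("statsd.example.internal", 8125)   # illustrative host and port
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)


def timing(name, milliseconds):
    sock.sendto(("%s:%d|ms" % (name, milliseconds)).encode(), STATSD_ADDR)


def gauge(name, value):
    sock.sendto(("%s:%d|g" % (name, value)).encode(), STATSD_ADDR)


# e.g. after a submission or a retrace has finished:
timing("errors.submit_time", 180)
timing("errors.retrace_time", 42000)
gauge("errors.retrace_queue.amd64", 57)
}}}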

As many Canonical projects are moving from Tuolumne to Graphite, we will follow suit and graph these statistics in Graphite.

== Performance testing ==

A variety of performance tests will be constructed to validate the architecture of this service. We will answer questions like, “how long does it take to bring up 400 large core dumps and map/reduce over them?”

We will optimize for latency. We will ask Canonical IS’ load testing expert to review this system.

== General testing ==

We will have a complete set of unit tests for every part of this system, as well as system tests, using the Canonicloud to bring up test copies of the components.

We will maintain a staging server, as Ubuntu One and Launchpad do.

== Deployment ==

Core dump reporting will not be enabled when the service is first deployed, so that the scalability of the overall system can be tested first.

A fractional deployment strategy will be crafted, using a time-based, random, or machine fingerprint key to determine whether the reporting system should begin submitting crash reports.
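
One possible gating function for such a fractional rollout, keyed on a machine fingerprint, is sketched below; the fingerprint source and the percentage are illustrative:

{{{#!python
# Sketch: deterministic fractional enrolment by hashed machine fingerprint.
import hashlib


def should_submit(machine_fingerprint, rollout_percent):
    """Enable reporting on roughly rollout_percent% of machines."""
    digest = hashlib.sha512(machine_fingerprint.encode()).hexdigest()
    return int(digest, 16) % 100 < rollout_percent


# e.g. should_submit(system_uuid, 10) enables reporting on about 10% of machines.
}}}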

Once the system is running effectively on a released version of Ubuntu, the client will be backported to the previous version of Ubuntu. If that undertaking is successful, it will then be backported to the previous LTS.

== Developer client ==

The data will be reported on via the http://daisy.ubuntu.com backend server.

The developer client program presents the data stored in the backend server, so that package maintainers, upstream developers, and other interested technical folk can interact with it. This should include:

 * Graphs
 * Tables
 * Detail views of particular error instances
 * Querying "Which crash reports are related to this bug?"
 * Statistics
    * "Top Changers" for spotting issues early
    * "Rate of crashes per user"

The backend provides a strong API for retrieving data of interest. This permits ad hoc queries, custom analysis, and other uses beyond the client program's capabilities (such as automated scripting).

== Requirements ==

 * Must be attentive to privacy issues [Need to elaborate further on this]

 * A collection of files is gathered client-side and inserted into the crash database record.

 * Processed versions of files (i.e. retracer output) can be added subsequently.

 * Some files must be kept private (i.e. core dumps)

 * Traces from multiple crash reports are algorithmically compared to find exact-dupes and likely-dupes (see the sketch after this list).

 * Crash reports can be grouped by package, by distro release, or by both.

 * Statistics are generated to show number of [exact|exact+likely] dupes for each type of crash. Statistics can be provided by package, by distro release, by date range, or a combination.

 * Bug report(s) can be associated with a given set of crashes.

 * The user should have some way to check back on the status of their crash report; for example, a report ID provided at filing time that they can load via a web page later on to see statistics and/or any associated bug number.

 * For X and kernel crashes (at least), these reports need to be indexable by hardware. That is, we want to be able to answer both "how prevalent are GPU hangs on Intel hardware?" and "on what hardware does this GPU hang appear?". Probably either DMI data or PCIIDs or both are needed for this.
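
The sketch below illustrates one possible comparison, not the tracker's actual algorithm: reports are exact duplicates when their StacktraceAddressSignature matches, and likely duplicates when the top frames of their retraced stack traces match.

{{{#!python
# Sketch: classifying a pair of reports (given as dictionaries of apport fields).
def top_functions(stacktrace, depth=5):
    """Pull the first few function names out of a gdb-style stack trace."""
    frames = []
    for line in stacktrace.splitlines():
        if line.startswith("#") and " in " in line:
            frames.append(line.split(" in ")[1].split()[0])
        if len(frames) >= depth:
            break
    return tuple(frames)


def compare(report_a, report_b):
    sig_a = report_a.get("StacktraceAddressSignature")
    sig_b = report_b.get("StacktraceAddressSignature")
    if sig_a and sig_a == sig_b:
        return "exact-dupe"
    frames_a = top_functions(report_a.get("Stacktrace", ""))
    frames_b = top_functions(report_b.get("Stacktrace", ""))
    if frames_a and frames_a == frames_b:
        return "likely-dupe"
    return "distinct"
}}}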

Types of errors to handle:
 * Actual C-style crashes, with core.
 * Unhandled exceptions, such as those raised by Python and similar runtimes
 * Kernel oops and panics
 * Intel GPU dump output
 * dmesg & Xorg.0.log, triggered by GPU hangs

== Prior art ==

Windows Error Reporting is probably the most advanced crash reporting system. As described in “[[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.148.716&rep=rep1&type=pdf|Debugging in the (very) large: Ten years of implementation and experience]]” (PDF), it uses progressive data collection, where developers can request more than the “minidump” if necessary to understand particular problems. It also automatically notifies users if a software update fixes their problem. And hardware vendors can [[http://msdn.microsoft.com/en-us/windows/hardware/gg487440|see crash reports specific to their hardware]].

{{attachment:windows-app-progress.png}} {{attachment:windows-app.png}} {{attachment:windows-os.png}}

Mac OS X has a Crash``Reporter system that submits crash data to Apple.
Line 19: Line 275:
As a result, some Mac software developers have created their own crash tracking systems, such as [[http://unexpectedlyquit.com/|Adobe’s]] and [[http://www.flickr.com/photos/jfpoole/143205824/|Adium’s]].

Mozilla uses [[https://wiki.mozilla.org/Breakpad/Design|Breakpad]] to collect and submit minidumps on the client side, and [[https://wiki.mozilla.org/Socorro|Socorro]] to analyze and present them on the server side. Anyone can access crash data at [[https://crash-stats.mozilla.com/|crash-stats.mozilla.com]].

== Client design ==

||<tablestyle="width:100%"> ||'''The problem can be reported''' ||'''Your admin has blocked problem reporting'''||
||'''Part of the OS crashes'''||{{attachment:os-crash-reportable.png}}||{{attachment:os-crash-unreportable.png}} ||
||'''An application crashes'''||
||'''An application hangs''' ||
||'''Report is submitted''' ||
 As described in [[http://developer.apple.com/library/mac/technotes/tn2004/tn2123.html|Technical Note TN2123]], “There is currently no way for third party developers to access the reports submitted via Crash``Reporter”. As a result, some Mac software developers have created their own crash tracking systems, such as [[http://unexpectedlyquit.com/|Adobe’s]] and [[http://www.flickr.com/photos/jfpoole/143205824/|Adium’s]].

Mozilla uses [[https://wiki.mozilla.org/Breakpad/Design|Breakpad]] to collect and submit minidumps on the client side, and [[https://wiki.mozilla.org/Socorro|Socorro]] to analyze and present them on the server side. Anyone can access crash data at [[https://crash-stats.mozilla.com/|crash-stats.mozilla.com]]. Laura Thomson has written some [[http://blog.mozilla.com/webdev/author/lthomsonmozillacom/|blog]] [[http://blog.mozilla.com/webdev/2010/05/19/socorro-mozilla-crash-reports/|posts]] about it.

Android uses Google Feedback: http://android-developers.blogspot.com/2010/05/google-feedback-for-android.html

There is a Google project for cross-platform crash dump capturing.

There is a Django project called 'Sentry' for web server error analysis (which reportedly has a Cassandra backend).

The Launchpad SOA has an active discussion around their requirements at https://dev.launchpad.net/LEP/OopsDisplay. They plan to split out various crash report tools from Launchpad into reusable python modules. It is unknown at this point if they'll be generic enough to fit Ubuntu's needs. Launchpad suspects this could fulfill needs for: Ubuntu One, Landscape, Canonical ISD (SSO etc.), Ubuntu; possibly also Drizzle, OpenERP, OpenStack.

[[/Debugging]]
