- A stacktrace address signature is a signature generated from the addresses of a stack trace (bzr:apport/apport/report.py:1215). Because this isn't as precise as a full crash signature, which is generated by retracing the core dump with all the needed debug symbols installed, we will have multiple stacktrace address signatures mapping to the same crash signature. We can then request a full core dump if and only if we haven't already received one for the submitted stacktrace address signature. This saves the vast majority of users from having to upload a massive file.
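As a rough illustration, an address signature might be built as below. The real implementation lives at the report.py reference above and normalizes raw addresses against the mapped executable and libraries; the function name and frame format here are assumptions, not apport's API.

```python
# Simplified sketch of a stacktrace address signature (illustrative only).
# Assumes each frame has already been normalized to a (library, offset) pair.
def stacktrace_address_signature(exec_path, signal_num, frames):
    """frames: list of (library_path, offset) tuples from the stack trace."""
    normalized = ['%s+%x' % (lib, offset) for lib, offset in frames]
    return '%s:%s:%s' % (exec_path, signal_num, ':'.join(normalized))

sig = stacktrace_address_signature(
    '/usr/bin/foo', 11,
    [('/usr/lib/libbar.so.1', 0x1234), ('/usr/bin/foo', 0x56)])
```

Two builds of the same program will generally produce different offsets, which is why several of these signatures can correspond to one eventual crash signature.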
- A regular crash signature is simply the concatenation of the executable path, the signal number, and the top few lines of the stack trace (bzr:apport/apport/report.py:1128).
e.g. #0 0x0f in foo () at /foo.c:1 ...
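The regular crash signature could be sketched as follows; this is illustrative, not apport's actual code, and the frame depth is an assumption.

```python
# Sketch of a regular crash signature: executable path, signal number,
# and the top few symbolic frames, concatenated (illustrative only).
def crash_signature(exec_path, signal_num, stack_functions, depth=5):
    top = stack_functions[:depth]
    return '%s:%s:%s' % (exec_path, signal_num, ':'.join(top))

sig = crash_signature('/foo', 11, ['foo', 'bar', 'main'])
# '/foo:11:foo:bar:main'
```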
The UserOOPS CF is for an eventual feature whereby you can remove your data from the crash database, given a pairing of your system UUID and a UUID generated and stored in /etc/default/whoopsie.
The DayOOPS CF comes from lp:oops-repository, but is not yet used here.
A typical run-through of the diagram may occur as follows:
- A crash comes in and is determined to be either a binary or a Python crash (server acceptance).
If it is a binary crash, a lookup is done in the Indexes ColumnFamily, using the stacktrace address signature as the column name in the crash_signature_for_stacktrace_address_signature row. If this signature has already been retraced for a previous crash, the Stacktrace CF will contain the fully retraced report. We then add the OOPS ID to the bucket queue.
If the above lookup returns NULL, we don't yet have a retraced report for this signature. Unless the Indexes CF 'retracing' row already contains the signature as a column (meaning we have received this core dump and are just waiting for it to be retraced), a full core dump is requested. In both cases we add the OOPS ID to the AwaitingRetrace CF, as we may not get a core dump from this user.
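The acceptance-time decision above can be sketched in Python, with plain dicts standing in for the Indexes and AwaitingRetrace ColumnFamilies; all names here are illustrative, not the actual daisy/whoopsie code.

```python
# Dicts standing in for Cassandra ColumnFamilies (illustrative only).
indexes = {
    # stacktrace address signature -> crash signature, once retraced
    'crash_signature_for_stacktrace_address_signature': {},
    # signatures whose core dumps we hold but have not retraced yet
    'retracing': {},
}
awaiting_retrace = {}   # address signature -> set of OOPS IDs
bucket_queue = []       # OOPS IDs ready for bucketing

def handle_binary_crash(oops_id, addr_sig):
    """Return 'request-core' if we need a core dump from this client."""
    if addr_sig in indexes['crash_signature_for_stacktrace_address_signature']:
        # Already retraced from a previous crash: straight to the bucket queue.
        bucket_queue.append(oops_id)
        return None
    # No retraced report yet; remember this OOPS in case the core never arrives.
    awaiting_retrace.setdefault(addr_sig, set()).add(oops_id)
    if addr_sig in indexes['retracing']:
        # A core dump for this signature is already waiting to be retraced.
        return None
    indexes['retracing'][addr_sig] = ''
    return 'request-core'
```

Only the first reporter of a given address signature is asked for a core dump; everyone else just lands in AwaitingRetrace.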
When the reporting daemon (whoopsie) sends the full core dump on behalf of the user, it is pushed onto the queue for its architecture and eventually processed. Processing first pulls the rest of the crash data back down from the OOPS ColumnFamily, then runs the now fully-formed report through apport-retrace. The fully retraced report is then written to the Stacktrace ColumnFamily, row-keyed on the stacktrace address signature.
The newly generated crash signature is also written to the Indexes CF, as the column value for the stacktrace address signature column name.
The stacktrace address signature is removed from the Indexes CF's 'retracing' row, indicating that we're no longer waiting for this signature to be retraced. Subsequent reports with this signature can be written straight to the bucket queue.
Finally, the OOPS ID (uuid1) for this crash is pushed onto the bucket queue. All the OOPS IDs with the same stacktrace address signature are also pulled from the AwaitingRetrace CF and pushed onto the bucket queue.
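Continuing the dict-based sketch, a successful retrace might be finalized roughly as follows; the function and parameter names are illustrative.

```python
# Sketch of finishing a successful retrace (illustrative only).
def retrace_succeeded(oops_id, addr_sig, retraced_report, crash_sig,
                      stacktrace_cf, indexes, awaiting_retrace, bucket_queue):
    # Store the fully retraced report, row-keyed on the address signature.
    stacktrace_cf[addr_sig] = retraced_report
    # Record the crash signature as the column value for this address signature.
    indexes['crash_signature_for_stacktrace_address_signature'][addr_sig] = crash_sig
    # We are no longer waiting on a core dump for this signature.
    indexes['retracing'].pop(addr_sig, None)
    # Bucket this OOPS, plus every OOPS that was waiting on this signature.
    bucket_queue.append(oops_id)
    bucket_queue.extend(sorted(awaiting_retrace.pop(addr_sig, set())))
```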
If retracing fails, the stacktrace address signature is removed from the 'retracing' row in the Indexes CF, indicating that we need another core file to process. This is one example of where we can use ConsistencyLevel.ONE: it does not matter if the code accepting crashes only learns after a short delay that it needs to accept another core file.
Any crashes that come through while we have the same signature in the Indexes CF's 'retracing' row will be written to the AwaitingRetrace CF, with the stacktrace address signature as the row key and the OOPS ID as the column name (with a NULL value).
- The bucket queue is processed by what is currently a small Python program that takes the report, generates a full crash signature, and adds the OOPS ID of that report to the respective bucket.
Obviously, a single issue may produce multiple stack traces, and likewise, many issues may share a single stack trace. This bucketing algorithm will therefore grow more comprehensive over time, concatenating other fields onto the row key as needed (or setting them to empty once they no longer serve as a differentiating factor).
If we didn't use a separate bucketing step, even for the relatively simple Python crashes, I fear we'd spend too much time in the crash acceptance code, reassembling the report from the BSON data to generate a crash signature.
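A minimal sketch of the bucketing step, again with a dict standing in for the bucket CF; the names and the choice of extra fields are assumptions for illustration.

```python
# Buckets keyed on the crash signature; OOPS IDs become column names
# in the bucket's row (illustrative only).
buckets = {}  # bucket row key -> set of OOPS IDs

def bucket(oops_id, crash_sig, extra_fields=()):
    # Additional differentiating fields can be concatenated onto the row key.
    key = ':'.join((crash_sig,) + tuple(extra_fields))
    buckets.setdefault(key, set()).add(oops_id)
    return key
```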
As each bucket is modified, the counter column values in the DayBucketCounts CF are incremented. This CF is then consumed by the report web interface.
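The per-day counting could look roughly like this, with a plain dict modeling Cassandra's counter columns and an assumed YYYYMMDD day key; all names are illustrative.

```python
import datetime

# day string -> {bucket row key: count}; stands in for the
# DayBucketCounts CF's counter columns (illustrative only).
day_bucket_counts = {}

def increment_day_count(bucket_key, when=None):
    # Bump the counter for this bucket under today's row.
    day = (when or datetime.date.today()).strftime('%Y%m%d')
    row = day_bucket_counts.setdefault(day, {})
    row[bucket_key] = row.get(bucket_key, 0) + 1

increment_day_count('/usr/bin/foo:11:foo:bar', datetime.date(2012, 2, 29))
```

The report web interface then only has to read one row per day to render the top crashes for that day.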