KernelBisection

Before Starting to Commit Bisect Your Kernel Regression

If you were asked to bisect your kernel due to a regression, were referred to this article, and you are willing to do so, thank you for your efforts! You are taking the best route to get your bug resolved as soon as possible.

Some Background Info

When you perform a bisect, you want to determine which patch is causing a regression. Bisecting is simply a process of elimination, and it is the most efficient way to find the offending patch. For example, let's say you have 5 patches. You know that after applying the first patch everything works, because you tested it thoroughly. You then apply patches 2, 3, 4, and 5, test again, and discover that something broke along the way. Rather than going back and applying the patches one by one and testing after each, you can perform a bisect. You start by choosing a point in the middle and testing there. Say we choose patch 3: we apply patches 1, 2, and 3 and test, and find that everything still works. That's great, because it eliminates patches 1, 2, and 3 as culprits and lets us focus on patches 4 and 5. We now know everything works through patch 3 but fails as of patch 5. We next pick patch 4, which is in the middle, and test again. This time the test fails, so we know the regression was introduced by patch 4. We can try reverting patch 4 to confirm our suspicion. That is the basic process of a kernel bisection.
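In git terms, that same process of elimination looks roughly like this (a minimal sketch; the patch-1 and patch-5 names are made-up tags standing in for the known good and known bad points):

git bisect start
git bisect good patch-1        # last point known to work (hypothetical tag)
git bisect bad patch-5         # first point known to fail (hypothetical tag)
# git checks out a commit near the middle (patch 3 in the story above);
# build it, test it, and report the result:
git bisect good                # still works, so the culprit is patch 4 or 5
# git now checks out patch 4; build, test, and report again:
git bisect bad                 # fails, so patch 4 is the first bad commit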

The way to minimize the gap between the last good and first bad kernels, and so minimize the time spent commit bisecting, is to first narrow your regression down to a specific kernel version. A bisect can be performed against different trees. For an issue with the current development kernel, the bisect will most likely be performed against Linus' tree: git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git . For stable kernels, however, a bisect is usually performed against the linux-stable tree: git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git . In some cases, you may need to perform a bisect against an Ubuntu tree, such as ubuntu-quantal.

Upstream Kernel Bisect

All of the upstream kernels are published at: http://kernel.ubuntu.com/~kernel-ppa/mainline/ . The first step in the bisect process is to find the last "Good" kernel and the first "Bad" kernel. That is done by downloading, installing, and testing kernels from this page.
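Each directory on that page holds the .deb packages for one kernel build. Installing one for testing usually comes down to something like the following sketch (the <version> and <arch> placeholders stand for the actual directory and file names listed on the page for the build and architecture you are testing):

# Download the shared headers, the per-flavour headers, and the kernel image
wget http://kernel.ubuntu.com/~kernel-ppa/mainline/<version>/linux-headers-<version>_all.deb
wget http://kernel.ubuntu.com/~kernel-ppa/mainline/<version>/linux-headers-<version>-generic_<arch>.deb
wget http://kernel.ubuntu.com/~kernel-ppa/mainline/<version>/linux-image-<version>-generic_<arch>.deb
# Install them together and reboot into the new kernel to test
sudo dpkg -i linux-headers-<version>*.deb linux-image-<version>*.deb
sudo reboot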

Ubuntu Kernel Bisect

All of the Ubuntu Linux kernels are available for download from: https://launchpad.net/ubuntu/+source/linux . For example, let us pretend you had a regression within the Precise kernel. The published kernels in Precise may be found at https://launchpad.net/ubuntu/precise/+source/linux .

As you will notice under the "Releases in Ubuntu" section, a series of kernels are listed vertically:

3.0.0-12.20

3.1.0-1.1

3.1.0-2.2

3.1.0-2.3

3.2.0-1.1

...

Similar to testing the upstream kernel, you want to identify the last "Good" kernel and first "Bad" kernel.

Instructions on installing these kernels may be found at https://wiki.ubuntu.com/Kernel/MainlineBuilds . Once this is done, please continue reading the information below on bisecting kernel commits within a given release.

How to bisect a sequence of commits to find the bad one

The problem

You have made a release, and something broke. There are hundreds of patches committed since the previous tested release. How do you identify the bad one?

Required knowledge and tools

The rest of this page assumes that you know how to fetch a kernel from the Ubuntu git repository, and build it, and that you have basic git skills. If you can't do that yet, try starting with this wiki page.

This example

The commands on this page come from a real-life example. In January of 2011, a kernel which was published to the -proposed pocket caused Radeon graphics to break for a number of users. Typing the commands as shown on this page will recreate the steps taken to find the bad commit in that release. The entire history of testing the bisected kernels for that regression appears in the bug.

What is bisection?

It's a successive splitting of a series of commits in order to locate the single one that caused a failure.

For more information, see the git help:

git bisect --help

Getting set up

You need to have a bug reproducer, or have a cooperative tester in the community. If you can't reliably determine whether the bug exists in a given kernel, bisection will not give meaningful results.

This process goes a lot faster if you can quickly build kernels and quickly have them tested. Using a fast build machine and having good communications with the testers will speed things up.

Check out your tree and get ready

If you want to follow along with the example, use the commands exactly as shown:

git clone git://kernel.ubuntu.com/ubuntu/ubuntu-maverick.git
cd ubuntu-maverick
git checkout -b mybisect origin/master

This creates a local copy of the maverick repository, and then creates a local branch named mybisect for your tests.

Full list of git repos.

Take a look first to see what you can learn

The version which works is tagged Ubuntu-2.6.35-24.42. The version which has the problem is tagged Ubuntu-2.6.35-25.43.

First, let's take a quick look at the changes between the two:

git log --oneline Ubuntu-2.6.35-24.42..Ubuntu-2.6.35-25.43

Now, how many commits are in there?

git log --oneline Ubuntu-2.6.35-24.42..Ubuntu-2.6.35-25.43 | wc

It reports 325 lines, but two of those commits are the startnewrelease and final changelog changes, so there are 323 real commits, and the bad one is among them.
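As an aside, if all you want is the number, a reasonably recent git can print the commit count directly:

git rev-list --count Ubuntu-2.6.35-24.42..Ubuntu-2.6.35-25.43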

Sometimes you can easily find the problem if it's in a subsystem that only has changes from a few patches. In this example, it's Radeon hardware that is affected, so try looking at the commits to the radeon driver:

git log --oneline Ubuntu-2.6.35-24.42..Ubuntu-2.6.35-25.43 drivers/gpu/drm/radeon/

That still shows eleven commits. Reverting each of those and testing will take longer than bisecting the entire set of changes, so we'll go ahead and do the bisection.

Determine the known good and known bad commits

In the Maverick case, these are the release tags: Ubuntu-2.6.35-24.42 is the known good version, and Ubuntu-2.6.35-25.43 is the known bad one.

Start the bisection

Start a bisection by using the command "git bisect start <bad> <good>":

git bisect start Ubuntu-2.6.35-25.43 Ubuntu-2.6.35-24.42

which results in this:

Bisecting: 162 revisions left to test after this (roughly 7 steps)
[dae1e6305dba4ff1e8574b3b6eb42613d409b460] olpc_battery: Fix endian neutral breakage for s16 values

This tells you that git has chosen the commit "olpc_battery: . . ." as the midpoint for the first bisection step and has checked out your tree so that it is now the top commit. Git is also telling you that there are roughly seven bisection steps left.

Give this test a version number

Before you build this kernel for testing, you have to give it a version number. This is done by editing the debian.master/changelog file.

The top of that file now appears like this:

linux (2.6.35-25.43) UNRELEASED; urgency=low

  CHANGELOG: Do not edit directly. Autogenerated at release.
  CHANGELOG: Use the printchanges target to see the current changes.
  CHANGELOG: Use the insertchanges target to create the final log.

 --  Tim Gardner <tim.gardner@canonical.com>  Mon, 06 Dec 2010 10:45:38 -0700

The top line of that file has the version in it. Choose a version that:

  • is clearly a test
  • will be superseded by later kernels
  • has meaning to you in your bisection testing

I use my initials, plus an incrementing number, plus an indicator of the Launchpad bug associated with the problem. Thus, my first test version is:

2.6.35-25.44~spc01LP703553

The '~' is a special versioning trick: 2.6.35-25.44~spc01LP703553 sorts below the real 2.6.35-25.44 (so this kernel will be superseded and replaced by 2.6.35-25.44 or any later version), yet it still sorts above the .43 kernel the testers already have. Using this versioning makes sure that if a user tests our kernel, they won't keep it around after the next update comes along.
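You can verify that ordering with dpkg before settling on a version (a quick sanity check, nothing more):

# Confirm the test version sorts above the bad .43 kernel but below the future .44
dpkg --compare-versions 2.6.35-25.44~spc01LP703553 gt 2.6.35-25.43 && echo "newer than -25.43"
dpkg --compare-versions 2.6.35-25.44~spc01LP703553 lt 2.6.35-25.44 && echo "older than -25.44"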

You also need to change UNRELEASED to the target release, maverick, or it will not be accepted for your PPA build.

Edit the changelog and replace the entire text in the earlier box with this:

linux (2.6.35-25.44~spc01LP703553) maverick; urgency=low

  Test build for bisection of a Radeon regression

 --  Steve Conklin <sconklin@canonical.com>  Mon, 24 Jan 2011 22:45:38 -0600

Do not commit the change you just made to the changelog into your local git repo. There's no need and it makes it harder to build subsequent tests.
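When you are ready to mark the test result and move on to the next bisection point, just throw the local changelog edit away, for example:

# Discard the uncommitted version edit so git can check out the next bisection point cleanly
git checkout -- debian.master/changelog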

Now build the kernel. You can use a PPA, but it will probably take a lot longer to build.
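For a local build, the usual Ubuntu kernel packaging targets are enough; something along these lines (run from the top of the tree) should leave the .deb packages one directory level up:

# Install the build dependencies once per machine
sudo apt-get build-dep linux-image-$(uname -r)
# Build the header and generic-flavour image packages
fakeroot debian/rules clean
fakeroot debian/rules binary-headers binary-generic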

Getting test results

Place the kernel packages where your testers can get to them and let them know they are there. The Launchpad bug is a good place to track all of your testing. You can review the bug used for the example again.

Using the test results

When you have the test results, run git bisect again and say whether the test was good or bad. In this example, the first test was bad, so we do the following:

git bisect bad

And git responds with:

Bisecting: 80 revisions left to test after this (roughly 6 steps)
[1829af44f4fe8600d6c9cde5fcb7a1345b201eaf] r6040: Fix multicast filter some more

Now edit the changelog with a new version and build the next test.

Repeat until the bad commit is eventually identified.

At any time, you can use the command

git bisect log

to review all the work that's taken place.

Bisecting the Upstream kernel

The previous example covered bisecting an Ubuntu kernel. Now you may be wondering how to bisect and build an upstream kernel. This is where you can make use of the mainline build scripts, which are available from the kteam-tools repository: http://kernel.ubuntu.com/git/ubuntu/kteam-tools.git . As an example, let's say testing of the mainline kernels has shown the regression was introduced somewhere between v3.2-rc1 and v3.2-rc2. Here are the steps you can perform to bisect and build a test kernel:

Log in to a machine that you've configured to build kernels and set up your environment.

Clone the appropriate tree (Linus' tree for development kernels or the linux-stable tree for stable kernels):

git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git linux-stable

Clone the kteam-tools repository:

git clone http://kernel.ubuntu.com/git/ubuntu/kteam-tools.git kteam-tools

Change into the kernel tree directory:

cd linux-stable

Add a remote for the Ubuntu release you are building your kernel for. This allows you to get all of the Debian-specific bits (debian.master, for example). In this example it will be precise:

git remote add precise git://kernel.ubuntu.com/ubuntu/ubuntu-precise.git

Start Upstream Bisect

git bisect start
git bisect good v3.2-rc1
git bisect bad v3.2-rc2

The bisect will then print out something like the following:

Bisecting: 161 revisions left to test after this (roughly 7 steps)
[fe10e6f4b24ef8ca12cb4d2368deb4861ab1861b] Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound

Basically, this is telling you that commit fe10e6f4b24ef8ca12cb4d2368deb4861ab1861b is approximately midway between v3.2-rc1 and v3.2-rc2 and is a good candidate for testing. You now want to build a kernel up through that commit.

To do this, you can use the mainline-build-one script, which can be found at kteam-tools/mainline-build/mainline-build-one .

Build Upstream Test Kernel

The next step is to run the mainline-build-one script. This script will build an upstream kernel that can be installed and run on an Ubuntu system. Run the mainline-build-one script as follows (assuming you've added kteam-tools/mainline-build to your PATH):

mainline-build-one fe10e6f4b24ef8ca12cb4d2368deb4861ab1861b precise

This will generate a bunch of .debs one directory level above. One of the debs will be something like:

linux-image-3.2.0-0302rc1gfe10e6f-generic_3.2.0-0302rc1gfe10e6f.201112010256_amd64.deb

This is the deb you will want to test and see if the bug exists or not.
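Install it on the system that reproduces the bug and reboot into it, for example:

# Install the test kernel and reboot into it
sudo dpkg -i linux-image-3.2.0-0302rc1gfe10e6f-generic_3.2.0-0302rc1gfe10e6f.201112010256_amd64.deb
sudo reboot
# After rebooting, confirm you are actually running the test kernel before checking for the bug
uname -r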

Update Bisect With Test Results

Depending on your test results, you'll mark this commit as "good" or "bad", i.e.:

cd linux-stable

If the testing was good (i.e., no issues), do the following:

git bisect good fe10e6f4b24ef8ca12cb4d2368deb4861ab1861b

Otherwise, if the testing was bad, do the following:

git bisect bad fe10e6f4b24ef8ca12cb4d2368deb4861ab1861b

That'll then spit out the next commit to test. Eventually you'll narrow it down and the bisect will tell you which was the first bad commit.

Once the first bad commit is identified, you can then try reverting that one commit and see if that fixes the bug.
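A minimal sketch of that confirmation step (the <first-bad-commit> placeholder is whatever sha1 the bisect reported):

# Leave bisect mode and return to where you started
git bisect reset
# Branch from the known bad point and revert the suspect commit
git checkout -b test-revert v3.2-rc2
git revert <first-bad-commit>
# Rebuild and test this kernel as before; if the bug is gone, the bisect result is confirmed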

Notes

Bisecting across non-linear tags is usually a bad idea. The following will tell you whether or not two given tags are linear:

git rev-list <newer-tag> | \
grep $(git log --pretty=oneline -1 <older-tag> | cut -d' ' -f1)

If that command outputs a sha1 then the tags are linear, otherwise they are not.

Shortcut: If you can determine a set of commits within the larger range that are already known to be good or bad, you can reduce the number of iterations required. This can make sense when, for example, earlier testing of intermediate kernels (such as mainline or point-release builds) has already ruled parts of the range in or out; feeding those results to "git bisect good" and "git bisect bad" up front narrows the range before you build anything.

The output of the command "git bisect log" can be saved and later run as a shell script to return you to exactly where you were. So if you have to use your repo for something else while you are waiting for test results, you can recover your last state.
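For example, using git's own replay command rather than running the log as a script (the log file name here is arbitrary):

# Save the bisect state before using the repo for something else
git bisect log > ../bisect-lp703553.log
git bisect reset
# ...later, pick up exactly where you left off
git bisect replay ../bisect-lp703553.log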