KernelBisection

If you were asked to bisect your kernel due to a kernel regression, were referred to this article, and you are willing to do so, thank you for your efforts! You are taking the best route to get your bug resolved as soon as possible.

What is a bisect?

A forward bisection, traditionally just called a bisect, works by repeatedly testing the midpoint between a known good software release and a known bad one released afterwards. One keeps testing successive midpoints, narrowing the range each time, until one identifies the last good release followed consecutively by the first bad one. Ideally, if one could already narrow a larger range down to a smaller set of known good and known bad versions, fewer bisect iterations would be required; a bisect is handy precisely when that is not possible.

Performing a bisect is faster than testing every version between the initial known good version and the known bad one. For example, if your known good release was 1.0 and the bad one was 1.10, the worst case for linear testing would be trying 9 releases (1.1, 1.2, 1.3, ..., 1.9) before finding that 1.9 was the last known good version. Bisecting, the same worst case requires testing only 4 releases (1.5, 1.7, 1.8, and 1.9).

What is a reverse bisect?

A reverse bisect is the process of finding the midpoint between a known bad software version release, and a known good one released afterwards. One continues finding the successive midpoint until one identifies the last bad software version release, followed consecutively in version by the first good one.

How do I bisect an Ubuntu kernel bug?

For example, let us say you started with a fully updated Maverick 32-bit (32-bit is also known as i386) install. Then, instead of upgrading, you did a clean install of Quantal and found what you think may be a Linux kernel bug, and a regression at that. If you are unsure of which release this regression first appeared in, then, in order to rule out a userspace issue and to identify the specific regression, the next step is bisecting Ubuntu releases.

Bisecting Ubuntu releases

Hence, one typically wants to narrow down the first Ubuntu release after Maverick in which this problem began. The releases in question are:

Ubuntu 13.04 Raring Ringtail
Ubuntu 12.10 Quantal Quetzal
Ubuntu 12.04 Precise Pangolin
Ubuntu 11.10 Oneiric Ocelot
Ubuntu 11.04 Natty Narwhal
Ubuntu 10.10 Maverick Meerkat

The midpoint release between Maverick and Quantal is Ubuntu 11.10 Oneiric Ocelot. One may download releases from http://releases.ubuntu.com/. If the bug is reproducible in Oneiric, then one would want to test Ubuntu 11.04 Natty Narwhal. If this is reproducible in Natty, then one knows that the regression happened going from Maverick to Natty. The next step is bisecting Ubuntu kernel versions.
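
Whichever release you end up testing, it helps to record exactly what was running when the bug did or did not reproduce; for example:

lsb_release -a
uname -r
cat /proc/version_signature

The /proc/version_signature file is specific to Ubuntu kernels and shows both the Ubuntu and the corresponding mainline version.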

Bisecting Ubuntu kernel versions

Continuing the prior example, the next step would be to find the last good kernel version, followed consecutively in version by the first bad one. So, assuming the Maverick kernel was kept updated, as per https://launchpad.net/ubuntu/maverick/+source/linux , the last Maverick kernel version published for upgrade was 2.6.35-32.67. As one may notice under the Releases in Ubuntu section, a series of kernels are listed vertically:

2.6.35-32.67
2.6.35-32.68
2.6.38-10.44


Note: one may use the same link with a different release name to bisect other Ubuntu kernels:
https://launchpad.net/ubuntu/lucid/+source/linux
https://launchpad.net/ubuntu/maverick/+source/linux
https://launchpad.net/ubuntu/natty/+source/linux
https://launchpad.net/ubuntu/oneiric/+source/linux
https://launchpad.net/ubuntu/precise/+source/linux
https://launchpad.net/ubuntu/quantal/+source/linux
https://launchpad.net/ubuntu/raring/+source/linux
https://launchpad.net/ubuntu/saucy/+source/linux
https://launchpad.net/ubuntu/trusty/+source/linux

Next, consulting https://launchpad.net/ubuntu/natty/+source/linux, and assuming the kernel bug in Natty was found immediately upon install, the first Natty kernel version published for release was 2.6.38-8.42. Since 2.6.38-8.42 came before 2.6.38-10.44, the next step is to install kernel 2.6.35-32.68 (the midpoint release between 2.6.35-32.67 and 2.6.38-8.42). Consulting https://launchpad.net/ubuntu/+source/linux/2.6.35-32.68, the Ubuntu 2.6.35-32.68 kernel files are found by clicking i386 under Builds. Once at the resulting page, https://launchpad.net/~canonical-kernel-team/+archive/ppa/+build/3322948, one will see under:

Built files
Files resulting from this build:

the following files to install:

linux-headers-2.6.35-32-generic-pae_2.6.35-32.68_i386.deb (793.3 KiB)
linux-headers-2.6.35-32_2.6.35-32.68_all.deb (9.9 MiB)
linux-image-2.6.35-32-generic-pae_2.6.35-32.68_i386.deb (32.5 MiB)
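
Once downloaded, these can typically be installed together with dpkg followed by a reboot; a minimal sketch, assuming the files were saved to the current directory (fuller instructions are linked below):

sudo dpkg -i linux-headers-2.6.35-32_2.6.35-32.68_all.deb \
    linux-headers-2.6.35-32-generic-pae_2.6.35-32.68_i386.deb \
    linux-image-2.6.35-32-generic-pae_2.6.35-32.68_i386.deb
sudo reboot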

Instructions on installing kernels may be found at https://wiki.ubuntu.com/Kernel/MainlineBuilds#Installing_Mainline_Kernels . If the bug is reproducible, one now knows that the Ubuntu kernel bug was introduced between 2.6.35-32.67 and 2.6.35-32.68. The next step is commit bisecting Ubuntu kernel versions.

Commit bisecting Ubuntu kernel versions

Required knowledge and tools

The rest of this page assumes that you know how to fetch a kernel from the Ubuntu git repository and build it, and that you have basic git skills. If you can't do that yet, try starting with this wiki page. In addition, one may want to become familiar with git bisect by running the following in a terminal:

git bisect --help

This example

The commands in the example on this page use a real life example. In January of 2011, a kernel which was published to the -proposed pocket caused Radeon graphics to break for a number of users. Typing the commands as shown on this page will recreate the steps taken to find the bad commit in that release. The entire history of testing the bisected kernels for that regression appears in the bug.

Getting set up

You need to have a bug reproducer, or have a cooperative tester in the community. If you can't reliably determine whether the bug exists in a given kernel, bisection will not give meaningful results.

This process goes a lot faster if you can quickly build kernels and have them tested. Using a fast build machine and having good communications with the testers will speed things up.

Check out your tree and get ready

If you want to follow along with the example, use the commands exactly as shown:

git clone git://kernel.ubuntu.com/ubuntu/ubuntu-maverick.git
cd ubuntu-maverick
git checkout -b mybisect origin/master

This creates a local copy of the maverick repository, and then creates a local branch named mybisect for your tests.

Full list of git repos.

Take a look first to see what you can learn

The version which works is tagged Ubuntu-2.6.35-24.42. The version which has the problem is tagged Ubuntu-2.6.35-25.43
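
Before going any further, you can confirm that both of those tags exist in your local clone by listing them:

git tag -l 'Ubuntu-2.6.35-2*'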

First, let's take a quick look at the changes between the two:

git log --oneline Ubuntu-2.6.35-24.42..Ubuntu-2.6.35-25.43

Now, how many commits are in there?

git log --oneline Ubuntu-2.6.35-24.42..Ubuntu-2.6.35-25.43 | wc -l

It reports 325, but two of those are the start-new-release and final changelog commits, so there are 323 commits, and the bad one is among them.

Sometimes you can easily find the problem if it's in a subsystem that only has changes from a few patches. In this example, it's Radeon hardware that is affected, so try looking at the commits to the radeon driver:

git log --oneline Ubuntu-2.6.35-24.42..Ubuntu-2.6.35-25.43 drivers/gpu/drm/radeon/

That still shows eleven commits. Reverting each of those and testing will take longer than bisecting the entire set of changes, so we'll go ahead and do the bisection.
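
Had there been only one or two suspect commits, testing a revert of each directly could have been quicker than bisecting; a sketch, where <commit-id> stands for one of the SHAs printed by the command above:

git revert --no-edit <commit-id>

You would then build and test the result in the same way as the bisected kernels below.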

Determine the known good and known bad commits

In the Maverick case we have:
Ubuntu-2.6.35-25.43 - <BAD>
Ubuntu-2.6.35-24.42 - <GOOD>

Start the bisection

Start a bisection by using the command git bisect start <BAD> <GOOD>:

git bisect start Ubuntu-2.6.35-25.43 Ubuntu-2.6.35-24.42

which results in this:

Bisecting: 162 revisions left to test after this (roughly 7 steps)
[dae1e6305dba4ff1e8574b3b6eb42613d409b460] olpc_battery: Fix endian neutral breakage for s16 values

This tells you that git has chosen the commit "olpc_battery: . . ." as the midpoint for the first bisection, and has checked it out so that it is now the top commit of your tree. Git is also telling you that there are about seven bisection steps left.

Give this test a version number

Before you build this kernel for testing, you have to give it a version number. This is done by editing the debian.master/changelog file.

The top of that file now appears like this:

linux (2.6.35-25.43) UNRELEASED; urgency=low

  CHANGELOG: Do not edit directly. Autogenerated at release.
  CHANGELOG: Use the printchanges target to see the curent changes.
  CHANGELOG: Use the insertchanges target to create the final log.

 --  Tim Gardner <tim.gardner@canonical.com>  Mon, 06 Dec 2010 10:45:38 -0700

The top line of that file has the version in it. Choose a version that:

  • is clearly a test
  • will be superseded by later kernels
  • has meaning to you in your bisection testing

I use my initials, plus an incrementing number, plus an indicator of the launchpad bug associated with the problem - thus, my first test version is:

2.6.35-25.44~spc01LP703553

The '~' is a special versioning trick: it means that this kernel will be superseded and replaced by any later official version, including 2.6.35-25.44 itself, yet it still sorts higher than the released .43. Using this versioning makes sure that if a user tests our kernel they won't keep it around after the next update comes along.
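
You can check this ordering yourself with dpkg; a quick sanity check:

dpkg --compare-versions "2.6.35-25.44~spc01LP703553" gt "2.6.35-25.43" && echo "newer than the released .43"
dpkg --compare-versions "2.6.35-25.44~spc01LP703553" lt "2.6.35-25.44" && echo "will be superseded by .44"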

You also need to change the UNRELEASED to the maverick pocket, or it will not be accepted for your PPA build.

Edit the changelog and replace the entire text in the earlier box with this:

linux (2.6.35-25.44~spc01LP703553) maverick; urgency=low

  Test build for bisection of a Radeon regression

 --  Steve Conklin <sconklin@canonical.com>  Mon, 24 Jan 2011 22:45:38 -0600

Do not commit the change you just made to the changelog into your local git repo. There's no need and it makes it harder to build subsequent tests.

Now build the kernel. You can use a PPA, but it will probably take a lot longer to build.
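
If you build locally instead, a typical invocation for an Ubuntu kernel tree looks something like the following (a sketch; the build guide linked under "Required knowledge and tools" has the authoritative steps, and the skipabi/skipmodule settings simply bypass ABI checks that would otherwise fail for a test build):

fakeroot debian/rules clean
skipabi=true skipmodule=true fakeroot debian/rules binary-headers binary-generic

Adjust the flavour target (for example, binary-generic-pae) to match the flavour your testers are running.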

Getting test results

Place the kernel package where your testers can get to it. Let them know it's there. The Launchpad bug is a good place to track all of your testing. You can review the bug used for the example again.

Using the test results

When you have the test results, run git bisect again and tell it whether the test was good or bad. In this example, the first test was bad, so we do the following:

git bisect bad

And git responds with:

Bisecting: 80 revisions left to test after this (roughly 6 steps)
[1829af44f4fe8600d6c9cde5fcb7a1345b201eaf] r6040: Fix multicast filter some more

Now edit the changelog with a new version and build the next test. Repeat until the bad commit is eventually identified.

At any time, you can use the command:

git bisect log

to review all the work that's taken place.
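
Once the bisection is finished, or if you need to abandon it, return your tree to its original state with:

git bisect reset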

FAQ

Map Ubuntu kernel to Mainline kernel for mainline bisection

Note: if you have either of the two issues below, and assuming the issue is not due to a downstream patch or configuration change, one would want to switch from commit bisecting the Ubuntu kernel to commit bisecting the mainline kernel. As the Ubuntu kernel and mainline kernel have differing version schemes, one would want to use the Ubuntu to Mainline kernel version mapping page. With this in mind, the mapped version may not correspond to an upstream tag one could use directly for bisection. For example, Ubuntu kernel 3.10.0-6.17 maps to mainline 3.10.3, but when one tries to bisect against that tag, one gets:

fatal: Needed a single revision
Bad rev input: v3.10.3

Hence, one could simply use an adjacent tag that is valid, such as v3.10-rc7.
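
To see which nearby tags actually exist in the tree you are bisecting, one can list them; for example:

git tag -l 'v3.10*'

In Linus' tree this lists v3.10 and its release candidates, but not stable point releases such as v3.10.3, which only exist as tags in the linux-stable tree.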

Bisecting: a merge base must be tested

If one performs at a terminal:

git bisect start Ubuntu-2.6.38-8.40 Ubuntu-2.6.38-7.39
Bisecting: a merge base must be tested
[521cb40b0c44418a4fd36dc633f575813d59a43d] Linux 2.6.38

Git is advising that, in order to proceed with the bisect, one needs to tell it whether commit 521cb40b0c44418a4fd36dc633f575813d59a43d is good or bad via:

git bisect good

or:

git bisect bad

Commit bisecting Ubuntu kernel versions across non-linear tags

The following will tell you whether or not two given tags are non-linear:

git rev-list <newer-tag> | \
grep $(git log --pretty=oneline -1 <older-tag> | cut -d' ' -f1)

If that command outputs a SHA-1 then the tags are linear; otherwise they are not. If they are not, bisecting across them will cause the folders mentioned below to disappear. Assuming the issue is not due to a downstream patch or configuration change, one would want to switch from commit bisecting the Ubuntu kernel to commit bisecting the mainline kernel, following the instructions below. You can use the Ubuntu to Mainline kernel version mapping page to ease this transition.
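
An equivalent check, assuming a reasonably recent git (1.8.0 or newer), is to ask whether the older tag is an ancestor of the newer one:

git merge-base --is-ancestor <older-tag> <newer-tag> && echo linear || echo non-linear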

Why did the folders debian and debian.master disappear?

For example, while attempting to commit bisect the Ubuntu kernel for Precise, if one performed the following:

git clone git://kernel.ubuntu.com/ubuntu/ubuntu-precise.git && cd ubuntu-precise && git checkout -b mybisect origin/master && git log --oneline Ubuntu-3.2.0-14.23..Ubuntu-3.2.0-15.24 | wc && git bisect start Ubuntu-3.2.0-15.24 Ubuntu-3.2.0-14.23

one will notice the debian and debian.master folders disappeared. This is a result of bisecting non-linear commits.

Commit bisecting the Ubuntu development release kernel

For an issue with the current development kernel, the bisect most likely will be performed against Linus' tree: git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git . However, for stable kernels, a bisect is usually performed against the linux-stable tree: git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git . In some cases, you may need to perform a bisect against an Ubuntu tree, such as ubuntu-quantal.

How do I bisect the upstream kernel?

Bisecting upstream kernel versions

All of the upstream kernels are published at http://kernel.ubuntu.com/~kernel-ppa/mainline/ . The first step in the bisect process is to find the last "Good" kernel version, followed consecutively in version by the first "Bad" one. That is done by downloading, installing and testing kernels from here. Once this is done, the next step is commit bisecting upstream kernel versions.

Commit bisecting upstream kernel versions

First, follow the KernelTeam/GitKernelBuild guide to build a new kernel from git. The step you will be doing the most is #9. Changing "--append-to-version=-custom" for each build is very important, to help differentiate your kernels.
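
For instance, if you use the make deb-pkg method (the same command shown in full near the end of this page), you can vary LOCALVERSION on each bisect step so the resulting packages are easy to tell apart; a sketch:

cp /boot/config-`uname -r` .config && yes '' | make oldconfig
make clean && make -j `getconf _NPROCESSORS_ONLN` deb-pkg LOCALVERSION=-bisect01

Bump the suffix (-bisect02, -bisect03, and so on) for each subsequent build.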

As an example, let's say testing of the mainline kernel has shown the regression was introduced somewhere between v3.2-rc1 and v3.2-rc2.

Confirmation of Mainline Test Results

It's not required, but if you are new to building kernels, I suggest confirming your results by building both of the mainline versions you narrowed it down to yourself. In our example these are v3.2-rc1 (good) and v3.2-rc2 (bad).

For v3.2-rc1 you would test this by running:

git checkout v3.2-rc1

Now run the build, install the kernel, and test your issue.

Start the bisect

git bisect start
git bisect good v3.2-rc1
git bisect bad v3.2-rc2

The bisect will then print out something like:

Bisecting: 161 revisions left to test after this (roughly 7 steps)
[fe10e6f4b24ef8ca12cb4d2368deb4861ab1861b] Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound

What that is telling you is that commit fe10e6f4b24ef8ca12cb4d2368deb4861ab1861b is approximately midway between v3.2-rc1 and v3.2-rc2 and is a good candidate for testing.

Now run the build, install the kernel, and test your issue.

Does your issue still occur?

If testing was good (i.e. no issues) do the following:

git bisect good

Otherwise if the testing was bad, you would do the following:

git bisect bad

Repeat until done

Repeat the process: build, install, test, and report back test results with git bisect good or bad.

You will know when it is done because it will display a message starting with:

7fd2ae21a42d178982679b86086661292b4afe4a is the first bad commit

Please attach that entire message and the output of git bisect log (as a file) to your bug report.
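
To capture that log as a file suitable for attaching, redirect it; for example:

git bisect log > bisect.log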

Bisecting upstream kernel versions to single commit using mainline-build-one

If you will be doing upstream testing more often, this may be more convenient. You need to set up your system first, which is not described here.

The previous section talked about bisecting an Ubuntu Linux kernel. Now you may be wondering how to go about bisecting and building an upstream kernel. This is where you can make use of the mainline build scripts, which are available from the kteam-tools repository http://kernel.ubuntu.com/git/ubuntu/kteam-tools.git. As an example, let's say testing of the mainline kernel has shown the regression was introduced somewhere between v3.2-rc1 and v3.2-rc2. The next section will show you the steps to perform a bisect and build a test kernel.

Log in to a machine that you've configured to build kernels and set up the environment

Clone the appropriate tree (Linus' tree for development kernels or the stable-tree for stable kernels):

git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git linux-stable

Clone the kteam-tools repository:

git clone git://kernel.ubuntu.com/ubuntu/kteam-tools.git kteam-tools

Change into the kernel tree directory:

cd linux-stable

Add a remote repository for the Ubuntu release you are building your kernel for. This allows you to get all the Debian-specific bits (debian.master, for example). In this example it will be precise.

git remote add precise git://kernel.ubuntu.com/ubuntu/ubuntu-precise.git

Start upstream bisect

git bisect start
git bisect good v3.2-rc1
git bisect bad v3.2-rc2

The bisect will then print out something like:

Bisecting: 161 revisions left to test after this (roughly 7 steps)
[fe10e6f4b24ef8ca12cb4d2368deb4861ab1861b] Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound

What that is telling you is that commit fe10e6f4b24ef8ca12cb4d2368deb4861ab1861b is approximately midway between v3.2-rc1 and v3.2-rc2 and is a good candidate for testing. You now want to build a kernel up through commit fe10e6f4b24ef8ca12cb4d2368deb4861ab1861b.

To do this, you can use the mainline-build-one script, which can be found at kteam-tools/mainline-build/mainline-build-one.

Build Upstream Test Kernel

The next step is to run the mainline-build-one script. This script will build an upstream kernel that can be installed and run on an Ubuntu system. Run the mainline-build-one script as follows (assuming you've added kteam-tools/mainline-build to your PATH):

mainline-build-one fe10e6f4b24ef8ca12cb4d2368deb4861ab1861b precise

This will generate a bunch of .debs one directory level above. One of the debs will be something like:

linux-image-3.2.0-0302rc1gfe10e6f-generic_3.2.0-0302rc1gfe10e6f.201112010256_amd64.deb

This is the deb you will want to test and see if the bug exists or not.
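
Installing and verifying the test kernel would look something like this (a sketch using the example file name above; the actual name will differ for each build):

sudo dpkg -i linux-image-3.2.0-0302rc1gfe10e6f-generic_3.2.0-0302rc1gfe10e6f.201112010256_amd64.deb
sudo reboot

After rebooting, uname -r should show the test kernel's version before you try to reproduce the bug.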

Update Bisect With Test Results

Depending on your test results, you'll mark this commit as "good" or "bad":

cd linux-stable

If testing was good (i.e. no issues) do the following:

git bisect good fe10e6f4b24ef8ca12cb4d2368deb4861ab1861b

Otherwise if the testing was bad, you would do the following:

git bisect bad fe10e6f4b24ef8ca12cb4d2368deb4861ab1861b

That'll then spit out the next commit to test. Eventually you'll narrow it down and the bisect will tell you which was the first bad commit.

Once the first bad commit is identified, you can then try reverting that one commit and see if that fixes the bug.
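
For example, once the bisect reports the first bad commit, a revert can be tested like this (a sketch; <first-bad-commit> stands for whatever SHA the bisect reported):

git bisect reset
git checkout -b test-revert v3.2-rc2
git revert --no-edit <first-bad-commit>

Then build, install, and test as before. If the reported commit is a merge, git revert additionally needs its -m option.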

How do I reverse bisect the upstream kernel?

A reverse bisect is just running the same methodology described above in a slightly different way, to narrow down a potential fix identified upstream. For example, let us assume you had a bug in Saucy kernel 3.11.0-15.23. Let us also assume the issue is not due to something in the downstream/Ubuntu kernel (configuration, out-of-tree patch, etc.). Then, you subsequently tested upstream kernel v3.13-rc5 and identified the issue doesn't happen. Mapping the Saucy kernel to upstream gives kernel 3.11.10. So, we now know the issue exists at least as early as mainline 3.11.10 and is fixed in v3.13-rc5. The next step is reverse bisecting upstream kernel versions.

Reverse bisecting upstream kernel versions

The first step is to find the last bad upstream kernel version, followed consecutively by the first good one. This is done by downloading, installing and testing mainline kernels from here. So, looking at the list we have:

v3.13-rc5-trusty/
v3.13-rc4-trusty/
v3.13-rc3-trusty/
v3.13-rc2-trusty/
v3.13-rc1-trusty/
v3.12.6-trusty/
v3.12.5-trusty/
v3.12.4-trusty/
v3.12.3-trusty/
v3.12.2-trusty/
v3.12.1-trusty/
v3.12-trusty/
v3.12-saucy/
v3.12-rc7-saucy/
v3.12-rc6-saucy/
v3.12-rc5-saucy/
v3.12-rc4-saucy/
v3.12-rc3-saucy/
v3.12-rc2-saucy/
v3.12-rc1-saucy/
v3.11.10.1-saucy/
v3.11.10-saucy/

The midpoint release is v3.12.1-trusty. One would continue to test the successive midpoints of each result, until one has the last bad version, followed consecutively by the first good version. Let us assume this was narrowed down to v3.13-rc4 as the bad, and v3.13-rc5 as the good. The next step is reverse commit bisecting upstream kernel versions.

Reverse commit bisecting upstream kernel versions

Now one will use the git skills learned above in a slightly different way, because git bisect was designed with forward bisections in mind. However, one may still use it to accomplish a reverse bisect. So, once Linus' development tree has been cloned:

git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git && cd linux

one would execute at a terminal:

git checkout v3.13-rc5
git bisect start
git bisect good v3.13-rc4
git bisect bad v3.13-rc5 

Please notice how v3.13-rc4 was marked good (even though we tested it to be bad) and v3.13-rc5 was marked bad. This swap is intentional: with the labels reversed, git bisect hunts for the commit that introduced the fix rather than the one that introduced the bug. If the kernel built from the commit git offers next works (i.e. contains the fix), mark it bad; if it still exhibits the bug, mark it good. Continue this process until the fixing commit is identified.

Testing a newly released patch from upstream

Let's assume you have identified a newly released upstream patch that may address your issue, but that hasn't been committed to the upstream development tree yet. Let us take as an example the following upstream patch, noted here.

Start copying from the line where it notes:

diff --git a/drivers/acpi/video.c b/drivers/acpi/video.c

to the last code line before the double dash:

 static int register_count;

Your patch file should be exactly as shown, honoring all spaces, or lack thereof:

diff --git a/drivers/acpi/video.c b/drivers/acpi/video.c
index 995e91b..b3032f8 100644
--- a/drivers/acpi/video.c
+++ b/drivers/acpi/video.c
@@ -85,7 +85,7 @@ module_param(allow_duplicates, bool, 0644);
  * For Windows 8 systems: if set ture and the GPU driver has
  * registered a backlight interface, skip registering ACPI video's.
  */
-static bool use_native_backlight = false;
+static bool use_native_backlight = true;
 module_param(use_native_backlight, bool, 0644);

 static int register_count;

Save this file to your Desktop as testfix.patch. Then execute at a terminal:

git config --global user.email "you@example.com" && git config --global user.name "Your Name" && cd $HOME && git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git && patch ~/linux/drivers/acpi/video.c ~/Desktop/testfix.patch && cd linux && git add . && git commit

Now, in the commit message editor that opens (nano by default on Ubuntu), type:

example

Press Ctrl+O, then Enter, then Ctrl+X to save the message and exit. Then, type at a terminal:

cp /boot/config-`uname -r` .config && yes '' | make oldconfig && make clean && make -j `getconf _NPROCESSORS_ONLN` deb-pkg LOCALVERSION=-custom && cd .. && sudo dpkg -i *.deb && git fetch origin;git fetch origin master;git reset --hard FETCH_HEAD

If for whatever reason the new kernel doesn't boot, it may not be that you did something wrong; it may simply be that the kernel won't boot with this commit applied, or that the configuration file choices were not tested against it.

External Links