Black-Box Fuzzing Kernel Modules in Yocto

It’s been almost ten years since I wrote my thesis. It was about guided fuzz testing, and as usual, I have done zero days of actual work related to the topic of my thesis. However, I was feeling nostalgic one day and thought that I’d fire up a good ol’ fuzzer and see what I could do with it. In the end, not much. But it was fun to try to break something and relive the golden days of my youth.

To shake things up a bit, this time I tried fuzzing a Linux kernel module in a Yocto image, because it seems that I just can’t help but cram Yocto into every blog post I write. But let’s start from the beginning.

What Is Fuzzing?

Fuzzing is a type of testing where more or less broken input is used to check how a program behaves in unexpected situations. Usually, the process consists of collecting input samples, good or bad, running them through a fuzzer that does “something” to the samples, and then feeding these mystery samples to the program being tested. Well-behaved programs handle the erroneous input gracefully, but badly behaved programs may hang, crash, or, even worse, use the bad input as if nothing were wrong.

Fuzz testing can be subcategorized into a few different groups: black-box, grey-box, and white-box fuzzing. In black-box fuzzing, there is no knowledge of the internals of the program, and no test feedback is used to guide the fuzzer. At the other extreme, in white-box fuzzing, full knowledge of the program flow and protocols is available. In grey-box fuzzing, there is no “deep” knowledge of the program, but, for example, code coverage may be used to guide the fuzzer.

As one can guess, a black-box fuzzer is the simplest to set up, but it is generally inefficient. White-box fuzzing is the opposite: the initial effort may not even be worth it in the end. Grey-box, once again, lands somewhere in the middle. The instrumentation and feedback require some effort, but they are (usually) worth it in the form of improved results.

Fuzzing on an Embedded Target

Even though fuzzing can reveal some fascinating bugs, it’s worth noting that fuzzing on an embedded device may not always be a good idea. Usually, the efficiency of fuzzing is directly proportional to the number of tests run per second, and “real” computers tend to be more powerful, churning out far more tests than embedded systems can. The requirement for speed is especially true for black-box fuzzing, which is basically brute-forcing bugs out of the system. Therefore, you may want to consider fuzzing high-level application code on a more powerful computer, or in a virtualized environment, to reveal more complex issues.

Fuzzing on the actual hardware makes the most sense in the following scenarios:

  • The code you’re testing relies on some architecture-specific functionality
  • The code relies on some hardware functionality that cannot be easily simulated
  • The hardware can generate samples and run target programs with “tolerable” efficiency
  • You want to do a quick smoke test type of fuzzing run

However, despite trying to talk you out of fuzzing on the target HW, I personally think it’s a good idea to give a quick black-box fuzzing session at least a try. It can reveal some low-hanging bugs, and setting up a black-box fuzzer takes little to no effort. Just be aware of the limitations, and of the fact that it’s not going to be as efficient as it could be.

Sometimes the bugs find you, though. Like ants at a picnic.

Finally, it’s worth knowing that things can go really wrong with fuzzing, so consider the potential risks and whether there’s a possibility of breaking some hardware. It’s usually unlikely, but aggressively fuzzing, for example, a poorly written device driver can result in bricking the device.

Radamsa

There are plenty of black-box fuzzers available for various purposes: protocol fuzzers, web-app fuzzers, cloud fuzzers, and so on. In this example, I’m using Radamsa, a generic command-line fuzzer that is simple to use yet fairly powerful. Not coincidentally, it’s also the fuzzer I used 10 years ago when writing my thesis.

Radamsa takes input either from stdin or from a file, and outputs the fuzz either to stdout or to a file. The output can then be piped to the tested program, or the tested program can be instructed to open the file. Radamsa can also act as a TCP client or server, but I haven’t tried either of those, so I can’t comment much on them. You can read more about Radamsa in its git repo.

The program is written in Owl Lisp, which gets translated into C, so cross-compilation is quite straightforward once Owl Lisp is set up. Because we don’t have to do any compile-time instrumentation for grey-box fuzzing guidance, the steps to build the fuzzer and the software under test are quite simple. The software under test, in our case, is going to be a kernel module. We still want to do some error instrumentation, which will be covered in the next chapter, but since we’re fuzzing in the kernel, that’s easier than one would guess (for once).

The Yocto recipe for building Radamsa can be found in the meta-fuzzing repo I made to accompany this blog text.

Instrumentation

Breaking stuff with no consideration is rude. Breaking stuff and analyzing the results can be considered science. Therefore, to get something useful out of our fuzzing efforts, we should figure out how to get as much information as possible out of the system while it’s being bombarded. While black-box fuzzing doesn’t strictly require instrumentation, fuzzing becomes a lot more useful when we can detect more errors.

So, usually, some amount of compile-time instrumentation is used with all types of fuzzing. This allows injecting extra code into the compiled binaries, which may provide useful information when things start going wrong. A commonly used tool for this is AddressSanitizer (ASAN) and its fellow sanitizers. AddressSanitizer is a memory error detector that can catch things like use-after-frees, buffer overflows, and double-frees. As the nature of these bugs implies, it’s meant for C and C++ programs.
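To give a taste of what this catches, here’s a deliberately broken userspace snippet of my own (not from any real project). Compiled with gcc -fsanitize=address -g, the resulting binary aborts at runtime with a use-after-free report that includes the allocation and free stack traces:

#include <stdlib.h>

int main(void)
{
    char *buf = malloc(8);

    free(buf);
    buf[0] = 'x';   /* use-after-free: ASAN aborts with a report here */
    return 0;
}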

Of course, this comes with a price: on average, AddressSanitizer slows programs down by about 2x. Who would have guessed that injecting extra code into binaries has side effects? For debugging purposes, this is still usually acceptable.

The best part of AddressSanitizer is that it’s readily available in the Linux kernel! To enable the Kernel Address Sanitizer (KASAN), all that needs to be done is to set two configuration flags:

CONFIG_KASAN=y
CONFIG_KASAN_GENERIC=y

You can read more about the different KASAN modes in the KASAN documentation, but in summary: generic is the heaviest mode, but also the most compatible one. There are faster modes, but they may be architecture- and compiler-specific. After enabling these flags, we can detect memory errors not only in the kernel itself but also in the modules we build for that kernel.
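To illustrate, here’s a contrived kernel-side snippet of my own (not from the test module) containing a slab out-of-bounds write. With the flags above, loading a module built around this would produce a KASAN slab-out-of-bounds report, complete with allocation and access stack traces:

#include <linux/module.h>
#include <linux/slab.h>

static int __init kasan_demo_init(void)
{
    char *buf = kmalloc(8, GFP_KERNEL);

    if (!buf)
        return -ENOMEM;
    buf[8] = 'x';   /* one byte past the allocation: KASAN flags this */
    kfree(buf);
    return 0;
}

module_init(kasan_demo_init);
MODULE_LICENSE("GPL");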

Linux also has the undefined behaviour sanitizer (UBSAN), the kernel concurrency sanitizer (KCSAN), and the kernel memory leak detector (no fun acronym), but let’s leave them out for now. They can be enabled similarly by toggling configuration flags, so no special work is needed on the driver side.
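For reference, the relevant flags look like this (the names follow the kernel’s Kconfig, but availability depends on the kernel version, architecture, and compiler, and note that KCSAN cannot be combined with KASAN):

CONFIG_UBSAN=y
CONFIG_KCSAN=y
CONFIG_DEBUG_KMEMLEAK=y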

Example Module

To have something to fuzz, I wrote a simple Linux kernel module (with help from ChatGPT). The module creates two sysfs files: one that takes input and one that gives output. Anything written to the first file can be read back from the second file. This allows passing data from user space to kernel space, and it makes a suitable input surface for fuzzing. The sysfs interface maybe isn’t the most interesting one, because some processing happens before the input written by the user ends up in the kernel module, but it’s a simple test for verifying that the setup works.
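For a concrete idea of the shape of such a module, here’s a minimal sketch. It’s simplified, and the details are illustrative; the real implementation lives in the meta-fuzzing repo:

#include <linux/init.h>
#include <linux/kobject.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/sysfs.h>

static struct kobject *echo_kobj;
static char *stored;

/* /sys/kernel/sysfs_attribute_echo/input: remember whatever is written */
static ssize_t input_store(struct kobject *kobj, struct kobj_attribute *attr,
                           const char *buf, size_t count)
{
    char *tmp = kstrdup(buf, GFP_KERNEL);   /* sysfs NUL-terminates buf */

    if (!tmp)
        return -ENOMEM;
    kfree(stored);                          /* drop the previous value */
    stored = tmp;
    return count;
}

/* /sys/kernel/sysfs_attribute_echo/output: read the stored value back */
static ssize_t output_show(struct kobject *kobj, struct kobj_attribute *attr,
                           char *buf)
{
    return sysfs_emit(buf, "%s", stored ? stored : "");
}

static struct kobj_attribute input_attr = __ATTR_WO(input);
static struct kobj_attribute output_attr = __ATTR_RO(output);

static int __init echo_init(void)
{
    echo_kobj = kobject_create_and_add("sysfs_attribute_echo", kernel_kobj);
    if (!echo_kobj)
        return -ENOMEM;
    if (sysfs_create_file(echo_kobj, &input_attr.attr) ||
        sysfs_create_file(echo_kobj, &output_attr.attr)) {
        kobject_put(echo_kobj);
        return -ENOMEM;
    }
    return 0;
}

static void __exit echo_exit(void)
{
    kobject_put(echo_kobj);
    kfree(stored);
}

module_init(echo_init);
module_exit(echo_exit);
MODULE_LICENSE("GPL");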

The code for this module can be found in the meta-fuzzing repo as well.

Putting It All Together

The rest of the stuff is quite simple. If you’re using Yocto, add the meta-fuzzing layer to your Yocto build, add the kernel configuration flags to your kernel config, and install Radamsa (and the test module) into the image. If you’re using something else, you do the same things, just with a different system.
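For the Yocto route, the integration could look roughly like this. Treat it as a sketch: the layer path is a placeholder, and the module’s package name is my assumption, so check the meta-fuzzing repo for the real names:

# conf/bblayers.conf: register the layer (the path is a placeholder)
BBLAYERS += "/path/to/meta-fuzzing"

# conf/local.conf: install the fuzzer and the test module into the image
# (the kernel-module-* package name is a guess, check the layer's recipes)
IMAGE_INSTALL:append = " radamsa kernel-module-sysfs-attribute-echo"

# The KASAN flags from the Instrumentation chapter go into the kernel
# config, for example via a configuration fragment in a bbappend.

Then, run the image, log into it, and run the following: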

echo test | radamsa

Most likely, something other than test gets printed. If not, give it a few more tries. If the output doesn’t look something like t ejSt after a few tries, something may be wrong.

To fuzz the actual test kernel module, you can run the following:

modprobe sysfs_attribute_echo
while true; do
    cat /sys/kernel/sysfs_attribute_echo/output | radamsa > /sys/kernel/sysfs_attribute_echo/input
done

This probes the module and then, in a never-ending loop, reads the output from the kernel module, fuzzes it, and passes it back to the input file. As an example of sample-file-based fuzzing, check this out:

mkdir /tmp/samples
echo aaa > /tmp/samples/sample-1
echo bbb > /tmp/samples/sample-2
echo ccc > /tmp/samples/sample-3
while true; do
    radamsa -n 1 /tmp/samples/* > /sys/kernel/sysfs_attribute_echo/input
done

Here we create three sample files and, on each iteration, fuzz one of them chosen at random. Radamsa can also output the fuzzed data into a file, but we still use stdout to send it to the kernel module. The samples in this case are quite trivial, but with more interesting sample files, it would be possible to generate quite exotic fuzzed data.

For example, fuzzing a picture of an “exotic beach” can result in an impressively glitched image.

Does this find bugs in our module or the kernel? No, or at least it’s highly unlikely. The kernel module itself is simple and shouldn’t contain bugs (famous last words). And if there is a bug, it’s either in the Linux kernel’s sysfs code or in kstrdup, both of which are already quite extensively tested (more famous last words). Unless there’s a regression, of course.

However, this script demonstrates one admittedly simple approach to passing fuzzed data into kernel space. In a more complex module, the parsing of the data could be more exciting, which could in turn lead to actual bugs.

Closing Words

That’s all for this time. As shown here, black-box fuzzing the kernel can be quite straightforward. As mentioned about a dozen times in this text, the example was quite simple, but it demonstrates the point, and the same ideas apply to more complex setups as well. The advantage of black-box fuzzing is that it’s easy to set up, so I recommend giving it a go and seeing what happens. Hopefully something exciting!