Aioli Audiostreamer: Moving the Sound

AI generated picture of an amplifier with raspberries

People need projects to consume their free time. I’ve lately felt that I want to actually try finishing a project (instead of just starting them), that the project should be somehow related to audio, that it would be nice if it had a real-world use, and that it would use the old Raspberry Pis I have lying around. Plenty of requirements then. I think this is still a better-formulated train wreck than an average customer project.

After considering a few different options, I ended up attempting to create a multi-speaker streaming system named Aioli (so yeah, I started another project). This text is closer to a devlog than a tutorial, but there will be open source code repositories in case you want to see how it’s done. Enough with the blabber, let’s move on.

Overview Of The Project

Basically, in this project I want to have one audio source, and the audio from that single source would get wirelessly transmitted to multiple speakers. To be more specific, in my case there’s one Raspberry Pi 4 connected to an external audio source, and then there would be other Raspberry Pi 2’s connected to the speakers. The Raspberry Pis in this scenario would handle at least streaming, networking, receiving the audio, and playing the audio. This graph attempts to explain the situation:

To start out the project I decided to focus on the streaming between Raspberry Pis because I didn’t feel masochistic enough to start working with Bluetooth yet. Everything is all fun and games until Bluetooth is added into the mix, and I want to have a bit of fun and games.

Today’s focus is this part to be exact

Obviously, the first thing to do is to create a custom Yocto distro, because every self-respecting hobby project needs its own Linux distribution. Perhaps further down the line this distro can contain some useful configs and other things that actually justify its existence, but for now, it’s just a renamed example Poky distro.

Creating The Network Of Raspberries

To get the Raspberry Pis talking to each other the first step is getting the devices connected to the same LAN. I wanted to use WLAN to not have cables around the house. Using ethernet cables would defeat the whole point of the system anyway, as I could just use audio cables then. I also considered an ad hoc network but decided to use WLAN to keep things familiar for now. The Raspberry Pi 4 I own does have an internal Wifi chip, so that was easy to sort out, but the two Raspberry Pi 2’s did not. I had one Wifi dongle that worked out of the box, but the other dongle required some extra work. You can read about it in my previous blog post if you’re interested.

After getting the hardware sorted out it was time to get the devices actually connected to WLAN. For that purpose, I added wpa_supplicant to the distro. wpa_supplicant is a program that in layman’s terms “connects the device to wifi” (or so I’ve understood as a layman). A properly configured supplicant that launches during boot should in theory automatically connect the device to WLAN. Surprisingly enough, it usually does. The following simple configuration in /etc/wpa_supplicant.conf, added to the Raspberry Pi during the build, does the trick:

network={
    ssid="WLANname"
    psk="SecretPassword"
}

This of course means that you have a statically defined network you want to connect to, and the password is stored in plaintext on the device. Both are bad things for different reasons, but they’ll do for now because it’s the simplest solution. This simplicity will be fixed later on in another text. If you have a WLAN network without a password or want to use a calculated key instead of a plaintext password, you can read more about wpa_supplicant in the Arch wiki. It’s a good read. Pay attention to the quotation marks in the psk variable; they caused me a lot of headache.

With quotation marks the value is a plaintext password, without them it’s a calculated key value. Makes “sense”.
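
If you’d rather not store the plaintext password, the wpa_passphrase tool (usually shipped alongside wpa_supplicant) can pre-compute the key for you. A sketch of its output, with the actual hex key left out:

$ wpa_passphrase "WLANname" "SecretPassword"
network={
        ssid="WLANname"
        #psk="SecretPassword"
        psk=<64-character hex key derived from the passphrase>
}

Copy the resulting block into /etc/wpa_supplicant.conf and drop the commented-out plaintext line.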

After the devices had wirelessly connected to the router, I gave them static IP address leases to make development somewhat easier. I also ran a quick ping test to check that the Raspberry Pis could reach Google and each other before proceeding.
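
The check itself is nothing fancy; something along these lines is enough (the speaker addresses here are just examples):

# Check that the internet and the other Raspberry Pis are reachable
ping -c 3 8.8.8.8
ping -c 3 192.168.1.182
ping -c 3 192.168.1.183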

Moving The Audio Bits

Making the audio streaming work was actually fairly simple because there already is an open-source solution, as there usually is. GStreamer is an “open source multimedia framework”, which can mean many things. This is quite fitting because GStreamer does many things, and with the help of its plugins, it can do pretty much anything you can dream of. Assuming your dreams revolve around handling and processing multimedia.

My dream was to find a way to stream audio over an IP network. And dreams, they sometimes do come true. Actually, a bit too much so: it was slightly difficult to find the best option for streaming the audio with all the encoding options, protocols, and what-not. I’m still not sure I picked the right things. To keep the prototyping fast I worked with the command line tools provided by GStreamer (as opposed to using the API, which may be worth looking into in the future).

GStreamer works with pipelines. Pipelines have sources where the media originates from, sinks where the media ends up, and parsers, encoders, and other types of things in between that manipulate input and pass it forward. For example, here’s a simple pipeline that reads an audio file, and then parses, converts, resamples, and outputs it to the appropriate default sink:

gst-launch-1.0 filesrc location=/opt/sample-files/sample1.wav ! wavparse ! audioconvert ! audioresample ! autoaudiosink

This command may result in a sound being output from your speakers. Quite often not. Depends on what your default ALSA output device is, if you’re using PulseAudio, and if it’s the third Tuesday of the month. In the case of Raspberry Pi, the default output device is the HDMI audio, and I’m not using PulseAudio, and it’s not the third Tuesday today, meaning that I actually got some sound out from a television connected to the HDMI port. If you want to get the audio output from the Raspberry Pi’s headphone jack, you can be a bit more specific about the sink:

gst-launch-1.0 filesrc location=/opt/sample-files/sample1.wav ! wavparse ! audioconvert ! audioresample ! alsasink device=hw:1
# use "aplay -l" command to list the available ALSA devices

To get the audio sent over the network we can use the RTP protocol that’s meant for delivering audio and video. Basic GStreamer functionality can be easily extended with plugins, and as it turns out, there exists a plugin for RTP. It’s weird how these things work out nicely. Almost like someone has had the same ideas before me. Now we can package the audio into 16-bit RTP payloads, and instead of an alsasink we can use a udpsink (from another plugin) to output the stream to a target in the network instead of an audio device.

gst-launch-1.0 filesrc location=/opt/sample-files/sample1.wav ! wavparse ! audioconvert ! audioresample ! rtpL16pay ! udpsink host=192.168.1.182 port=5001

Then, the intended receiver of the stream can use udpsrc instead of filesrc to read the stream, decode, and deliver the contents to its own audio sink. Simple as.

gst-launch-1.0 udpsrc port=5001 ! 'application/x-rtp,media=audio,payload=96,clock-rate=44100,encoding-name=L16,channels=2' ! rtpL16depay ! audioconvert ! autoaudiosink

To get the audio sent to multiple devices, a multiudpsink can be used on the sending side. The receiving end still uses the same command:

gst-launch-1.0 filesrc location=/opt/sample-files/sample1.wav ! wavparse ! audioconvert ! audioresample ! rtpL16pay ! multiudpsink clients=192.168.1.182:5001,192.168.1.183:5001

In theory, we could use multicast streaming instead of multiple streams, but for some reason I couldn’t get it working. Most likely it had something to do with the third Tuesday of the month. I couldn’t even complete a simple multicast test on my network of Raspberry Pis, so I guess something is wrong with my setup. For the sake of completeness, these commands should (in theory) work, but don’t. I’ll look into this later on because multicasting seems like a more sensible approach to this problem:

# Controller command
gst-launch-1.0 filesrc location=/opt/sample-files/sample1.wav ! wavparse ! audioconvert ! audioresample ! rtpL16pay ! udpsink host=224.1.1.1 auto-multicast=true port=3000

# Speaker command
gst-launch-1.0 udpsrc multicast-group=224.1.1.1 auto-multicast=true port=3000 ! 'application/x-rtp,media=audio,payload=96,clock-rate=44100,encoding-name=L16,channels=2' ! rtpL16depay ! audioconvert ! autoaudiosink
Considering the amount of multicast memes floating around the internet, I’m not the only one having issues with it.

By using these commands we can send the audio over network from the controller device to the speaker devices. However, this is still a bit cumbersome, because we need to manually run the gst-launch-1.0 commands, figure out the intended receivers & their IP addresses, and so on. Later on I plan to introduce a manager process that’s dynamically able to find the clients in LAN and control the streaming, but that’s a topic for another text.

There’s a recipe for GStreamer and its plugins in Yocto, so getting these things installed into the new custom distro is just a matter of adding a few packages. It’s almost simpler than using a package manager. At least if you’ve spent the last five years learning the ins and outs of Yocto, and don’t need to install anything at runtime. Something like this should do the trick:

IMAGE_INSTALL:append = " \
    gstreamer1.0 \
    gstreamer1.0-meta-base \
    gstreamer1.0-meta-audio \
    gstreamer1.0-plugins-good-udp \
    gstreamer1.0-plugins-good-rtp \
"

Plugins are sorted into good, bad, and ugly (I guess it’s no big surprise that the bluez plugin is “bad”). To figure out which group a plugin belongs to you can check the documentation. The documentation is quite good by the way; I recommend reading it. For example, the udp plugin page contains information about the pipeline elements it provides, and also mentions which group it belongs to.
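
If you’d rather ask the tools than the documentation, gst-inspect-1.0 also prints which plugin provides a given element (assuming GStreamer and the plugin are installed wherever you run it):

# Shows, among other things, the plugin that provides the element
gst-inspect-1.0 udpsink
gst-inspect-1.0 rtpL16pay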

That mostly covers it for this text. We’re now able to send sound over the network from one device to another. Next time we’ll stop this goofing off and get painfully serious by adding Bluetooth to the system, and instead of using sample audio files we’ll actually stream something from a phone.

You can find the top-level repo tool manifest repository here. Please note that the progress of the project is a bit further than what’s presented in this blog text, and the progress is also “a bit” all over the place, so the manifest repository and the subrepositories contain plenty of spoilers and confusion.

One more question remains: why the name Aioli? Well, it kinda sounds like audio combined with I/O, and I like garlic flavoured condiments. That’s as good a reason as any.

Open-source contribution: RTL8821AU driver recipe

This is a story of how I became a useful member of society by doing my first open-source contribution.

It all began one fateful afternoon, when I purchased a TP-Link Wifi dongle, thinking that it would allow me to connect my old Raspberry Pi 2 wirelessly to the internet. It was running my own Poky-based distro, but what could really go wrong with random USB devices and Linux?

Well, quite a lot really. I plugged the device in, but I couldn’t connect to the highway of data. No delicious internet cookies for me. Not even a blinking LED.

To begin troubleshooting the issue, I tried checking if the network interface was seen by the kernel by running both ifconfig -a and ip link show. No wlan devices were found. Some googling suggested running lsusb. That showed the device, which at least proved that it wasn’t broken and was recognized by the kernel. Some sort of network driver was clearly needed.

Bus 001 Device 004: ID 2357:011f TP-Link 802.11ac WLAN Adapter 
Bus 001 Device 003: ID 0424:ec00 Microchip Technology, Inc. (formerly SMSC) SMSC9512/9514 Fast Ethernet Adapter
Bus 001 Device 002: ID 0424:9514 Microchip Technology, Inc. (formerly SMSC) SMC9514 Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Finding the correct driver turned out to be a bit tougher than expected. First I tried googling the name of the wifi dongle suffixed with “driver”. Bad Idea. This led to a lot of ancient forum posts that suggested all kinds of Realtek drivers for (almost) similarly named devices that were installed by enabling a variety of kernel configuration options. None of the drivers worked.

After some more furious googling I found out that the wifi dongle I bought required an out-of-tree kernel module instead. That meant I couldn’t just enable a kernel configuration option to build the driver into my distro. Finding the correct driver was another trial-and-error type of affair. Someone suggested a driver for the 8812au chip. It did not work, but it helped me find the correct trail.

Fortunately there’s a lot less diseases on this trail.

The RTL8812AU driver repo contained a file, supported-device-IDs, that as expected did not contain the device ID output by lsusb. However, that gave me an idea (that I really should have gotten from the beginning): googling “driver 2357:011f”. Who would have guessed that searching for a driver with an exact device ID instead of vague product names would yield the correct driver(s)? This search also helped me find the name of the Realtek chip, 8821au, which I confused plenty of times with 8812au. I’m not sure if this info would have been available in the manual of the dongle because I did not read it.

After finding the driver & chip I connected some dots and realized that there actually is a kernel configuration option named CONFIG_RTL8XXXU for an in-tree driver, which I tried. Despite what the name suggests, it does not work with the RTL8821AU.

Once the correct driver was figured out it was time to add it to the Yocto build. Some more googling revealed that there is a meta layer called meta-rtlwifi for these Realtek out-of-tree modules. Unfortunately, it didn’t contain the RTL8821AU driver. Fortunately, I’ve been using git at work so I could fix that myself. You can see where this is heading.

So I took the RTL8812AU driver recipe as I suspected that it should mostly work, and updated the relevant parts, i.e. the repo to fetch the driver from. I was pleasantly surprised that the build worked just like that. Even more shocking was that the module worked as well. After that, it was just a matter of a pull request to get the driver added to the meta-layer alongside the other friendly drivers.
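
If you’re curious what such a recipe roughly looks like, here’s a sketch. It is not the exact recipe that got merged; the repository URL, branch, SRCREV, and license checksum are placeholders:

# rtl8821au_git.bb -- a sketch, not the recipe that was actually merged
SUMMARY = "Out-of-tree RTL8821AU wifi driver module"
LICENSE = "GPL-2.0-only"
LIC_FILES_CHKSUM = "file://LICENSE;md5=<checksum of the license file>"

# The module class handles building out-of-tree kernel modules
inherit module

# Fetch the driver sources from the chosen fork and pin a known-good commit
SRC_URI = "git://github.com/<driver-author>/rtl8821au.git;protocol=https;branch=<branch>"
SRCREV = "<commit hash of the tested driver version>"
S = "${WORKDIR}/git"

RPROVIDES:${PN} += "kernel-module-8821au"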

There were actually multiple drivers available for the 8821au. At least morrownr, ulli-kroll and ivanborislav provide RTL8821AU drivers. In the end, I chose the morrownr driver because it worked satisfactorily out of the box and it is also used for the 8812au. I first gave the ivanborislav driver a shot, but it filled my TTY with logs about power save mode. Most likely a configuration mistake on my side, but usually the thing that works without extra tinkering is the better choice.

It’s almost weird that there’s a meme for literally everything.

That’s how I got quite familiar with my wifi dongle, and made my first open-source contribution in the process. I also learned something. I’m not yet 100% sure what that is. Perhaps it’s that the device ID is quite important when trying to find a suitable driver. And googling can give all kinds of interesting and useful information. Until next time!

Yocto Hardening: Finding & Fixing CVEs

Find all of the Yocto hardening texts from here!

Last time when we were talking about Yocto hardening we focused on setting up extra users to prevent unauthorized root access and to avoid situations where the users would have overly open permission sets. Getting these right is a good low-hanging fruit, but there are plenty of other things to consider for hardening the system. One of the most important things to do is to eliminate (or at least minimize) the security vulnerabilities weakening the overall system defenses.

What are CVEs?

Despite what one may be inclined to think, open-source code is not always perfect. It may contain bugs, which is a bit annoying, but it may also contain vulnerabilities, which is potentially a bit dangerous. These can lead to denial of service attacks, data leaks, or even unauthorized accesses & system takeovers. Not good. Very bad. The usual stuff.


CVE (Common Vulnerabilities and Exposures) is a system maintained by the United States National Cybersecurity FFRDC and operated by the MITRE Corporation. The system contains information about, well, vulnerabilities and exposures. “CVE” as a noun can also mean a vulnerability or exposure, or at least I often use the acronym with that meaning. Checking your Yocto system against this database of vulnerabilities can help detect security issues lurking in the open-source code components.

Checking vulnerabilities in Yocto

Not every CVE necessarily leads to unauthorised root access, and there may be vulnerabilities that don’t have a CVE identifier. However, scanning through the CVEs in your system should provide a fairly good picture of how vulnerable your system is to possible CVE exploits. At least at the time of the scan; new issues pop up constantly, which means you have to keep scanning often. It’s also good to keep in mind that this kind of scan does not check for other kinds of weaknesses in the system. But at least performing the CVE check in a Yocto system is fairly simple. Something is simple in Yocto, what?

My whole worldview is being shattered.

In all simplicity, just add this line to local.conf to check your packages & images for common vulnerabilities.

INHERIT += "cve-check"
# I had some issues with slow download speeds. If you get weird timeouts
# when fetching issue database, try adding this line as well:
CVE_SOCKET_TIMEOUT = "180"

Now when you run the build you’ll most likely end up with more warnings than before. Yocto’s CVE checker prints a warning to the build output for every vulnerability it detects, and depending on the complexity and age of your system your terminal may end up quite yellow. It’s more useful to inspect the CVE reports than to try and remember the build output as it scrolls by. These reports can be found in the <build-folder>/tmp/deploy/cve folder, and we’ll inspect them a bit later. It’s worth knowing that this CVE checker may not be perfect, meaning that there may be some vulnerabilities lurking around that it does not detect. But it’s at least better than eyeballing all the source code yourself.

Analysing the results

Ok, now you’ve proved to yourself that your system has holes that need to be plugged. How to get over this uneasy feeling of constant doom looming over your system? The first thing is listing out all the CVE IDs and checking their status & severity to get a bit more stressed. Or relaxed, depends a bit on how well your system is maintained.

Let’s be real, this is most likely going to be the most accurate reaction.

Further information about the CVEs can be found on MITRE’s CVE site, and some complementary information can be found in NIST’s (National Institute of Standards and Technology) NVD (National Vulnerability Database). Plenty of acronyms. NVD is a system containing the same CVE IDs as MITRE’s site, but with severity scores and usually some extra information. Both are definitely useful, but for practical purposes NVD tends to be more useful. MITRE’s site can contain some new info not yet present in NVD, but NVD contains the information in a more easily digestible format.

Let’s take CVE-2023-22451 as an example. The CVE listing for the vulnerability shows a short summary, affected versions, and a bunch of links. The NVD entry has the same information plus a severity score, a modification date, and a bit more information about the links, so you won’t have to guess which link is the patch and which is some advisory (this can be useful on bigger issues where there are a dozen links). For what it’s worth, Yocto’s CVE checker uses NVD’s database.

Now that we’ve gone through some background information, we can investigate the results of the CVE check. The aforementioned <build-folder>/tmp/deploy/cve contains two summary files: cve-summary and cve-summary.json. These contain all the detected issues, both patched and unpatched, and the format of the files isn’t all that intuitive for getting an actual summary, so I wrote a simple Python tool for summarising the results. The design is very human:

# Replace the file path with your path
python3 ./cve-parser.py -f build/tmp/log/cve/cve-summary.json

Very easy to use. By default the parser will output the unpatched CVEs, and from each CVE it will print the package name, CVE ID, CVSS scores, and the link to the NVD site for more info. There is more information per issue available, the results can be sorted, and the ignored & patched issues can also be printed if needed. To get the complete list of options, run:

python3 ./cve-parser.py -h

I’m not going to document the whole program here in case I decide to change it at some point. If your boss asks you to get the total number of unpatched CVEs in the system, you can run the command below to get “extra info” from the parser. Currently, “extra info” just contains the number of displayed issues, but if you’ve got ideas for it feel free to let me know:

# -u display unpatched issues (default behavior, but added just in case)
# -e displays extra info
python3 ./cve-parser.py -u -e -f build/tmp/log/cve/cve-summary.json

If your boss asks how many of them actually need to be fixed, use this to sort the CVEs by severity score and count how many have a score higher than 8.3:

# -u display unpatched issues
# -ss2 sort by CVSSv2 score
python3 ./cve-parser.py -u -ss2 -f build/tmp/log/cve/cve-summary.json

Just kidding, please fix them all.


Fixing the CVEs

After you’ve justified your fears of getting hacked by realizing that your system has seven CVEs with a CVSS score higher than 9.5 (three of them in the kernel) and a few dozen more with just a slightly lower priority, it’s time to ask the big question: what can I do? What can I possibly do? This didn’t help with the feeling of constant doom at all.

The first step is going through the list of unpatched issues. There tend to be quite a few false positives that are not applicable to the current system. For example, a brand new Poky build reports an unpatched CVE-2017-6264 that’s been around for six years already. This is a vulnerability in the NVIDIA GPU driver, and it’s applicable to Android products. It’s almost certainly a false positive in your Yocto system, but since the bad code is present in the source code, it’s reported as unpatched. You can ignore these false positives by adding the line below somewhere in your build configuration (local.conf works, but it’s maybe not the best place for it):

# The format is CVE_CHECK_IGNORE="<cve-id>", e.g.
CVE_CHECK_IGNORE="CVE-2017-6264"
# In the older Yocto versions this was called CVE_CHECK_WHITELIST
# so if ignore doesn't work, try the old whitelist variable

After ignoring the false positives, fixing the rest of the CVE issues is easy, in theory. Just keep Yocto at its newest (or a relatively new LTS) version. This should keep the packages fresh. Sounds simple, but unfortunately this process tends to cause a big headache with incompatibilities and such. However, that’s the best advice I can give.

If there is a need to update the recipes even further than officially supported, or if you want to update a single recipe, copy the recipe in question to your own meta-layer, update the recipe version in the filename and fix SRCREV to match. When hacking around like this remember to hope that nothing falls apart in the system even more than during a regular Yocto version update.
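
As a rough sketch of what that looks like in practice (the layer, recipe, and version names below are made up for illustration):

# Copy the recipe into your own layer under a new version number
cp meta-somelayer/recipes-support/libfoo/libfoo_1.2.3.bb \
   meta-mylayer/recipes-support/libfoo/libfoo_1.2.4.bb

# Then edit libfoo_1.2.4.bb and point it at the newer sources, e.g.
#   SRCREV = "<commit or tag of the 1.2.4 release>"
# For tarball-based recipes, update the SRC_URI checksums instead.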

Sometimes, updating the packages is not an option. Perhaps updating the package breaks the build, a newer version of the library isn’t compatible with your application, or there is some other Perfectly Good Reason you’re not allowed to do that. That’s when it’s time to do some patching to make the confusing build system even more spaghettified.

Noodles Spaghetti GIF - Find & Share on GIPHY

Basically, this approach consists of heading to the NVD entry for the CVE, checking if there’s a patch for the vulnerability, and if there is, porting the patch to the Yocto build. For example, the aforementioned CVE-2023-22451 mentions this commit as its patch in the NVD entry. Copy the contents of the commit to a patch file, create a bbappend file for the recipe, and add the patch to the recipe with that bbappend.
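
In recipe terms that looks roughly like the following; the recipe and patch file names are placeholders:

# libfoo_%.bbappend -- apply a backported CVE fix
# The patch file itself goes into a files/ directory next to this bbappend
FILESEXTRAPATHS:prepend := "${THISDIR}/files:"

SRC_URI += "file://CVE-2023-22451.patch"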

If there’s no patch you can wait. Or if you’re like a kid on Christmas and can’t wait, you can try digging the package’s git repositories for the corrective commit before it’s been released. It ain’t fun, and it’s rarely a productive use of time, but every now and then some fixes can be found like this before they are made official.

That’s a short explanation of how you can check and fix the CVE vulnerabilities in your system. The theory of the process is fairly simple. However, it tends to get a bit more complicated in the real world, especially when trying to update older legacy systems where the stale, non-updated packages contain more patches than original code. But I guess that’s the problem with real life: it tends to mess up good theories.

You can find the next part about Yocto hardening here. It’s about firewalls.

How to build Yocto with Apple Silicon

If you’re like me, you have more money than brains, and not too much of either really. This can lead to situations where you end up buying a MacBook without actually checking if it supports Yocto builds, the one thing you inexplicably like to waste your little free time on. As it turns out, after getting the nearly 2K€ laptop, builds on Mac are not supported. But don’t worry, where there is a problem, there usually is a convoluted solution.


As one might expect, the answer is virtualization. Specifically Docker virtualization this time. Writing this kind of instructional text is becoming second nature to me: earlier I wrote about how to build Yocto on Windows/WSL, and now about how to build the damn thing on Mac. One day I’ll get an actual Linux computer suitable for this kind of development work, but not today. I’m out of money at the moment. Anyways, to the actual guide. I’ll warn you beforehand that my knowledge of Docker is quite superficial, but I’ll try to explain things as well as I can.

Installing Docker

When doing Docker builds the first step obviously is to install Docker. Get it from here and come back when you’re done.

Don’t worry, they definitely are.

Getting correct images

Now, the actual tricky part: getting Docker images suitable for building Yocto. Docker images are used to create the containers, which are the virtualized environment we can do the builds in. The Yocto Development Tasks Manual mentions CROPS (CROss PlatformS) Docker images that can be used for building on non-Linux hosts. CROPS itself also has this nice Mac guide that almost works.

Well, I guess the guide works for the older amd64 MacBooks, because the pre-built Docker images are only available in amd64 format. Since Apple Silicon is arm64, Docker gives a nice warning that the emulated image may or may not work, and spoiler alert: it does not. Attempting to run bitbake after following the guide gives the following error:

OSError: Cannot initialize new instance of inotify, Errno=Function not implemented (ENOSYS)

However, I recommend keeping that CROPS guide nearby because we are going to follow it once we get the suitable images built. The first step of the journey is pulling the Docker image source repositories for the two images mentioned in the CROPS Mac guide:

% git clone https://github.com/crops/samba.git
% git clone https://github.com/crops/poky-container.git

The first one is responsible for creating a samba file server to provide the contents of the build container to the Mac file system via a Docker volume, and the second one is the actual builder image. Building the samba image is simple enough:

% cd samba
% docker build . -t crops/samba:selfbuilt
% cd ..

The poky-container image is a bit trickier. It attempts to fetch an amd64 operating system image provided by CROPS as its base layer. Instead of using that image, we need to find something else that’s more suitable for us and arm64.


Docker Hub is a site containing Docker images for a variety of uses. Among many others, it also contains the CROPS amd64 images that we are not able to use. By searching for “Yocto” and limiting the results to the arm64 architecture we can find quite a few results. However, none of these (at least at the time of writing) fit into the CROPS workflow or are up to date, meaning that they won’t work with the newer versions of Yocto. Therefore we need to build the operating system base image ourselves.

The Dockerfiles used to build the images are open source, but unfortunately, the scripts in them use GNU versions of some commands, meaning that the scripts won’t work with the BSD commands Mac has. But fear not, I made a fork for building the base images on Mac! It works at least for the Ubuntu 22.04 image; I was a bit lazy and didn’t check the dozen other options. To build the Ubuntu base image, run the following commands:

git clone git@github.com:ejaaskel/yocto-dockerfiles.git
cd yocto-dockerfiles
export REPO=ejaaskel/yocto
export DISTRO_TO_BUILD=ubuntu-22.04
./build_container
cd ..

This will create two images: ejaaskel/yocto:ubuntu-22.04-base and ejaaskel/yocto:ubuntu-22.04-builder. Now we just need to go to the CROPS poky-container Dockerfile and replace their base image with the one we just built. After that, we should be able to build the poky-container as an arm64 image and get to actually building Yocto.

% cd poky-container
## Open your favourite text editor (it better be nvim),
## and replace these lines in Dockerfile:
ARG BASE_DISTRO=SPECIFY_ME
FROM crops/yocto:$BASE_DISTRO-base
## With these lines to specify the distro version and 
## use the Docker image we built:
ARG BASE_DISTRO=ubuntu-22.04
FROM ejaaskel/yocto:$BASE_DISTRO-base
% docker build . -t crops/poky:selfbuilt
% cd ..

Now we’re again on track to follow that CROPS guide. We just have to suffix every mention of the CROPS images with :selfbuilt so that we’ll use the built images and not pull images from Docker Hub. So whenever there’s a mention of crops/samba in the guide, write crops/samba:selfbuilt, and when there’s a mention of crops/poky, use crops/poky:selfbuilt instead.
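
For example, the final builder invocation from the guide ends up looking roughly like this (the volume name and workdir follow the CROPS guide; check the guide itself for the exact commands):

% docker run --rm -it -v myvolume:/workdir crops/poky:selfbuilt --workdir=/workdir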

Building Yocto

If you followed the instructions in the CROPS guide you should end up with an Ubuntu terminal in /workdir and a Finder window that’s connected to the same location in the Docker container. After that, it’s just a matter of following Yocto’s Quick Build manual or building something else that’s actually interesting. Once the build finishes, you can access the files in Finder as you normally do, or if you prefer to use the command line, docker cp should do the trick as well:

# Use docker ps to get crops/poky:selfbuilt container id
% docker ps
% docker cp <container-id>:workdir/<whatever-you-want-to-copy> .

By the way, if you stick to docker cp for moving stuff around, you don’t actually need to start the samba container. Saves a bit of effort.


If you happen to need a root shell in the container for some reason, like installing packages, you can open it like this:

# Use docker ps to get crops/poky:selfbuilt container id
% docker ps
% docker exec -it --user=root <container-id> bash

If (and when) you run out of disk space, you can increase the virtual disk size from the Docker Desktop program. Select the cog from the upper right corner to enter settings, navigate to Resources->Advanced and from there you can increase the virtual disk limit. I’m quite sure there is a way to do this in the command line as well, but hopefully this isn’t a task that I’ll have to do so often that I’d need to figure it out.

Can you see how artistic those red circles are? All thanks to the new MacBook.

That’s all for this time! Hopefully, this helped you to get started with Yocto builds on Apple Silicon a bit quicker. Fortunately, this wasn’t really anything groundbreaking; mostly it was just a matter of combining a few different information sources and a bit of script editing. The biggest sidestep from the CROPS instructions was building the Docker images instead of using the ready-built ones. But for someone who’s new to Mac & Docker, it may take a surprisingly long time to figure this out. At least it did for me. Writing this also helped a bit with my buyer’s remorse. Until next time!

Unit testing audio processors with JUCE & Catch2

Testing. The final frontier. Or so it often feels. It’s the place where no man boldly goes; it’s the place people tend to crawl to when forced to, after the test coverage checker lets them know that there’s less than 60% line coverage. Hopefully, this is just a tired stereotype, because testing is mighty useful in all kinds of applications for quite obvious reasons: it saves time, helps catch bugs, and a good & passing test set gives better peace of mind than a ten-minute mindfulness session. In this text, I’m writing a bit about unit testing in audio processing applications and presenting my small example plugin that contains some unit tests. It’s basically my upcoming ADC 2022 talk in text format, unless something surprising happens in the upcoming week. Let’s hope not; I react poorly to surprises.


Testing

Unit testing is a type of testing where small sections of the program code are individually and independently scrutinized. Usually, this is done by feeding known input to the functions being tested, and comparing the outputs, function calls, etc. against an expected set of values. Basic stuff, really.

However, audio applications are slightly different, or at least have some features that don’t fully follow this idea of known inputs and outputs. For example, think about a high-pass filter that adds some “character” to the signal it filters. The character here means something extra being added to the audio signal during processing. Depending on the type of “character” we may know the output only partially. Sure, the audio gets filtered according to the filter parameters, but the processing may also add some random elements, like a tiny smidgen of noise to keep things exciting. In such a situation we can’t directly compare the output of the processing function to some expected value, because the expected value is only partially known.

Of course, the test in such a case could be reduced to smaller components that have fully predictable outputs. And it could even be argued that testing these small components is the whole idea of unit testing. But even if we break down the filter into the smallest possible pieces, there still would be that one noise generator element that can’t be fully predicted (unless using the same random seed every time). How can we know what to expect when we only have a general idea of what something is supposed to sound like, but it can’t be exactly defined? As a sidenote, this kind of vague hand-waving describes my music-making fairly well.

Basically me whenever I’m asked to explain anything (please don’t ask difficult questions)

Math to the rescue

I dislike math as much as anyone else, but no one can really disagree with the fact that it’s definitely useful. Without it, we wouldn’t have trebuchets. And from what I’ve heard, mathematics plays some role in most computer systems as well.

The fast Fourier transform is a magica… mathematical algorithm that can perform discrete Fourier transforms. The Fourier transform, in turn, converts functions of time or space into their frequency components. Or, if these words mean nothing to you, it can convert this kind of waveform image (i.e. a time domain representation):

Difficult to say what it sounds like, but at least it’s loud.

Into this kind of frequency spectrum image (frequency domain representation):

Ah, it’s a sine sweep.

How does this happen? I wish I understood. The Wikipedia article for the Fourier transform is filled with sentences like “for each frequency, the magnitude of the complex value represents the amplitude of a constituent complex sinusoid with that frequency, and the argument of the complex value represents that complex sinusoid’s phase offset” and to be honest, sentences like this make me scared. For all I know, it could say that there’s a tiny wizard somewhere in the computer that has nothing better to do than some signal processing. Although, all the signal-processing people I know tend to be wizards, so maybe there’s some truth in that.

However, with the FFT we can get the characteristics of the audio, like what frequency range has a lot of energy, or where there is no energy at all. Audio signals are signals, and as such, they have some amount of energy. These kinds of frequency characteristics can be compared in non-exact, fuzzy ways by setting thresholds for the allowed energy amounts for different frequency ranges.
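
As a side note for the math-minded, the reason comparing FFT magnitudes works as an energy comparison is Parseval’s theorem: for an N-sample block, the energy summed over the time-domain samples equals the energy summed over the FFT bins, up to a 1/N scaling:

\sum_{n=0}^{N-1} |x[n]|^2 = \frac{1}{N} \sum_{k=0}^{N-1} |X[k]|^2

So setting thresholds on band energies in the frequency domain really is a statement about the signal itself, not just about the FFT.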

For example, if we have a simple gain function, we can use the FFT to check that the audio signal energy is indeed increasing or decreasing. Or if there’s a filter that needs to be tested, we can check that some frequency bands have no energy to ensure that the filter doesn’t allow frequencies to pass where they shouldn’t. Or if there’s a white noise generator, it can be checked that there is energy across the whole frequency range. Or with a synthesizer, we can check that there are no unexpected artefacts in the signal. Without the fast Fourier transform, finding out these kinds of things automatically can be tricky.

Pamplejuce

Integrating a unit testing framework with an audio development framework can be challenging. It took a surprising amount of googling to find anything useful on the topic. There were plenty of tutorials/advertisements about building different unit testing framework demos. I’m using JUCE, and it has some level of unit testing support, but that seemed insufficient. There were also conference talks confirming that yes, you should indeed test your software even if it handles audio signals; that’s not an acceptable reason to skip testing, even though it’s a tempting thought.

Finally, after failing to do what I wanted with JUCE’s unit tests, and failing to integrate two different unit testing frameworks into my JUCE plugin, I came across Pamplejuce. It’s a template project using a CMake workflow, and more importantly (at least for me), it integrates the Catch2 unit testing framework with JUCE. It’s good that someone around here is a competent developer.

Catch2 is a unit testing framework. I don’t have much else to say about it. It’s being actively developed and it has all the features I could ask for. That isn’t all that much: I have about as many requirements for unit testing frameworks as I do for hotel rooms. Hotel rooms should have a bed, a shower and a lockable door. Unit testing frameworks should have the possibility to compare two things and an option to write custom matchers. And I should be able to integrate them into my projects. The last one was surprisingly rare.

I couldn’t yet find “Unit testing framework integration for dummies”, but once I find one I’ll be sure to get one.

FilterUnitTest

To test out Pamplejuce I created a new, imaginatively named project, FilterUnitTest, from the template. To have some actual audio processing to test I made a simple highpass filter plugin using JUCE’s LadderFilter. No GUI, just simple filtering pleasure. Well, it also has a parameter for the wet amount. I’ll give a warning that there’s still plenty of refactoring & warning fixing to be done in FilterUnitTest, but I’m currently working on it whenever I have the time. I’m quite certain that the tests at least leak memory, but I haven’t yet gotten around to fixing that (I know, I know, it isn’t very C++17 to have memory leaks).

So, after the base of the project was done, it was time to finally implement some tests. First is the dummy test that’s already part of the Pamplejuce template, which simply checks that the name of the plugin can be fetched and that it is actually the expected name. The best test is the one someone writes for you. The test cases presented in this blog text can be found in the Tests/PluginBasics.cpp file in the FilterUnitTest repository.

TEST_CASE("Plugin instance name", "[name]")
{
    testPluginProcessor = new AudioPluginAudioProcessor();
    CHECK_THAT(testPluginProcessor->getName().toStdString(),
               Catch::Matchers::Equals("Filter Unit Test"));
    delete testPluginProcessor;
}

To verify the actual functionality of the program, I identified two things that need to be tested:

  1. Testing that the filter performs filtering to the audio signal
  2. Testing that the wet parameter controls the amount of applied filtering

The first test is quite simple: create an audio signal buffer, or read it from an audio file. Run it through the audio processing function, and check that the output contains less energy than before filtering. This of course requires writing a custom matcher; more on that a bit later. If you’ve set your filtering function in stone, you could consider storing the output from one of the test runs and comparing the test results to that instead. During the development phase, it may be easier to do some non-exact comparisons of the FFT values.

If you choose to go the exact value comparison route, you can check out the writeBufferToFile and readBufferFromFile helper functions in Tests/Helpers.h. They serialize and deserialize an audio buffer to/from a file. These helpers can be used to create the exact expected values, and they can also be used to fetch the expected value and compare the output to it. This dummy test basically writes a random buffer to a file, reads the file and ensures that the two buffers have identical contents.

TEST_CASE("Read and write buffer", "[dummy]")
{
    juce::AudioBuffer<float> *buffer = Helpers::generateAudioSampleBuffer();
    Helpers::writeBufferToFile(buffer, "test_file");
    juce::AudioBuffer<float> *readBuffer = Helpers::readBufferFromFile("test_file");
    CHECK_THAT(*buffer,
               AudioBuffersMatch(*readBuffer));
    juce::File test_file ("test_file");
    test_file.deleteFile();
}

As you can see, this type of test requires a custom matcher, AudioBuffersMatch. As does the FFT comparison, and any other custom comparison. For FilterUnitTest, I wrote four different types of comparators, which can be found in Tests/Matchers.h:

  • Audiobuffers are equal
  • Audiobuffer has higher energy than another audio buffer
  • Audiobuffer has a maximum energy of N in its frequency bands (N can vary between different bands, and the check for a band can also be skipped)
  • Audiobuffer has minimum energy of N in its frequency bands (Same here)

The second approach, using the FFT to ensure that the audio buffer has lower energy after filtering, can use a combination of the second and third matchers. By combining these two, we can ensure that the total energy of the signal is indeed lower and that the amount of lower frequencies is within a certain limit:

TEST_CASE("Filter", "[functionality]")
{
    int samplesPerBlock = 4096;
    int sampleRate = 44100;

    testPluginProcessor = new AudioPluginAudioProcessor();

    //Helper to read a sine sweep wav
    juce::MemoryMappedAudioFormatReader *reader = Helpers::readSineSweep();
    juce::AudioBuffer<float> *buffer = new juce::AudioBuffer<float>(reader->numChannels, reader->lengthInSamples);
    reader->read(buffer->getArrayOfWritePointers(), 1, 0, reader->lengthInSamples);

    juce::AudioBuffer<float> originalBuffer(*buffer);

    //Dismiss the partial chunk for now
    int chunkAmount = buffer->getNumSamples() / samplesPerBlock;

    juce::MidiBuffer midiBuffer;

    testPluginProcessor->prepareToPlay(sampleRate, samplesPerBlock);

    //Process the sine sweep, one chunk at a time
    for (int i = 0; i < chunkAmount; i++) {
        juce::AudioBuffer<float> processBuffer(buffer->getNumChannels(), samplesPerBlock);
        for (int ch = 0; ch < buffer->getNumChannels(); ++ch) {
            processBuffer.copyFrom(ch, 0, *buffer, ch, i * samplesPerBlock, samplesPerBlock);
        }

        testPluginProcessor->processBlock(processBuffer, midiBuffer);
        for (int ch = 0; ch < buffer->getNumChannels(); ++ch) {
            buffer->copyFrom(ch, i * samplesPerBlock, processBuffer, ch, 0, samplesPerBlock);
        }
    }

    //Check that originalBuffer has higher total energy
    CHECK_THAT(originalBuffer,
               !AudioBufferHigherEnergy(*buffer));

    juce::Array<float> maxEnergies;
    for (int i = 0; i < fft_size / 2; i++) {
        //Set the threshold to some value for the lowest 32 frequency bands
        if (i < 32) {
            maxEnergies.set(i, 100);
        }
        //Skip the rest
        else {
            maxEnergies.set(i, -1);

        }
    }
    //Check that lower end frequencies are within limits
    CHECK_THAT(*buffer,
               AudioBufferCheckMaxEnergy(maxEnergies));

    //I guess programming C++ like this in the year 2022 isn't a good idea to do publicly
    delete buffer;
    delete reader;
    delete testPluginProcessor;
}

The second test, for the wet parameter, is basically a continuation of this. Get your audio buffer and run it through the audio processing function with varying levels of the wet parameter. Ensure that the higher the wet parameter is, the stronger the filtering effect, meaning there’s again less low-end energy, and less energy in general. Or if you want to do a super simple test as I did, just check that with a wet value of 0 the signal doesn’t change, and with the max wet value of 1 it does.

TEST_CASE("Wet Parameter", "[parameters]")
{
    testPluginProcessor = new AudioPluginAudioProcessor();
    //Helper to generate a buffer filled with noise
    juce::AudioBuffer<float> *buffer = Helpers::generateAudioSampleBuffer();
    juce::AudioBuffer<float> originalBuffer(*buffer);    

    juce::MidiBuffer midiBuffer;

    testPluginProcessor->prepareToPlay(44100, 4096);
    testPluginProcessor->processBlock(*buffer, midiBuffer);

    //Check that initial value of wet is not zero, i.e. filtering happens
    CHECK_THAT(*buffer,
               !AudioBuffersMatch(originalBuffer));

    delete buffer;

    buffer = Helpers::generateAudioSampleBuffer();

    //Get and set parameter
    auto *parameters = testPluginProcessor->getParameters();
    juce::RangedAudioParameter* pParam = parameters->getParameter ( "WET"  );
    pParam->setValueNotifyingHost( 0.0f );

    for (int ch = 0; ch < buffer->getNumChannels(); ++ch)
        originalBuffer.copyFrom (ch, 0, *buffer, ch, 0, buffer->getNumSamples());
    testPluginProcessor->processBlock(*buffer, midiBuffer);

    //Check that filter now doesnt affect the audio signal
    CHECK_THAT(*buffer,
               AudioBuffersMatch(originalBuffer));

    delete buffer;

    buffer = Helpers::generateAudioSampleBuffer();
    pParam->setValueNotifyingHost( 1.0f );


    for (int ch = 0; ch < buffer->getNumChannels(); ++ch)
        originalBuffer.copyFrom (ch, 0, *buffer, ch, 0, buffer->getNumSamples());
    testPluginProcessor->processBlock(*buffer, midiBuffer);

    //Finally, check that with max wet the signal is again affected
    CHECK_THAT(*buffer,
               !AudioBuffersMatch(originalBuffer));

    delete buffer;
    delete testPluginProcessor;
}

I wish I could argue which approach is better here, but I think it’s quite apparent whether it’s better to do proper or half-assed testing. I’ll leave it as an exercise for the reader to figure out the proper way of testing.


Images

If you’re given the following sample values, can you figure out what the audio signal sounds like?

[0.25, -0.59, 0.96, 0.21, -0.22, -0.36, -0.45, -0.14, 0.39, 0.35, 0.87, 0.64, -0.32, 0.12, -0.86, -0.67], repeated ad nauseam

I don’t know either. In general, humans tend to absorb information via visual means. A bunch of decimal numbers isn’t the most intuitive way of understanding something unless you’re one of the wizards I talked about earlier. But what if I showed you this:

Seems like it’s a signal of sorts.

As you can see (pun intended), visual information is a lot easier to digest. It’s not the most impressive of graphs really, nor the easiest to read, but there’s a good explanation for that: I wrote the code for it. At least the code allows easy drawing of images as a part of unit tests to see what’s going on with the inputs and outputs, as opposed to printing out audio buffer contents and FFT results and hoping that staring at the numbers absorbs them into the brain. You can find the image drawing function in Tests/ImageProcessing.h; here it is in action:

juce::AudioBuffer<float> *buffer = Helpers::generateBigAudioSampleBuffer();
ImageProcessing::drawAudioBufferImage(buffer, "NoiseBuffer");

Just give it a buffer and a filename without the .png extension and it’ll handle the rest. So, for example, to make sure you’ve hooked things up correctly in your testing set-up, you can call the drawing function before and after doing some processing to the audio signal to see if the changes are at least somewhat sensible.

As you can guess, this was one of the earlier attempts of getting things working.

Benchmarking

Imagine you’re in an all-you-can-eat buffet. You can eat whatever as much as you want and the staff won’t kick you out for a few hours. Usually eating a lot feels like a good idea. However, after you’ve stopped eating, the reality of the situation settles in and you realize that it wasn’t a good idea. You feel sick.

The same applies to coding. It’s fun to code without limits. More features, more, MORE, you’ll think to yourself. However, after coding for a while you’ll realize that this wasn’t a good idea either. The program starts to become slow and sluggish. You’ve introduced latency to your code, and that is the second worst thing that can be done. The only thing worse is a 127 dBFS pop that was caused by careless buffer handling when you were starting out with audio signal processing.

To keep things in check, Catch2 has some simple benchmarking macros. There are a few example usages of those in the FilterUnitTest repo. It’s quite basic C++, meaning that it took me about three compilation attempts and one illegal memory access to get the syntax right. After some trials and a lot of errors, I ended up with something like this:

TEST_CASE("Processblock Benchmark", "[benchmarking]")
{
    testPluginProcessor = new AudioPluginAudioProcessor();
    juce::AudioBuffer<float> *buffer = Helpers::generateAudioSampleBuffer();

    juce::MidiBuffer midiBuffer;

    testPluginProcessor->prepareToPlay(44100, 4096);

    //Example of an advanced benchmark with varying random input
    BENCHMARK_ADVANCED("Plugin Processor Processblock ADVANCED")(Catch::Benchmark::Chronometer meter) {
        juce::Array<juce::AudioBuffer<float>> v;
        for (int j = 0; j < meter.runs(); j++) {
            v.add(*Helpers::generateAudioSampleBuffer());
        }
        meter.measure([&v, midiBuffer] (int i) mutable { return testPluginProcessor->processBlock(v.getReference(i), midiBuffer); });
    };

    delete buffer;
    delete testPluginProcessor;
}
Attempting to read C++ documentation on lambdas and pretending to understand what’s going on.

Closing words

I hope you found this blog post useful. As mentioned, I was struggling to get started with JUCE and unit testing, so hopefully this writing helps you to think about how to test your application, assists in integrating a unit testing framework, and contains some useful and practical resources to get you started with testing. Also, I want to say that this type of FFT matching isn’t the only solution for unit-testing audio applications. You can for example remove the random elements from your tests, use pre-determined random seeds, or mock some parts of your code if needed. I’ve just found the FFT approach really intuitive and flexible after I wrapped my head around it. Thanks for reading!

Comments are welcome but due to a quite hefty amount of bot spam, the comments will go through moderation so it may take some time to see your prose in the comments section. As long as you’re not trying to sell Viagra or women’s haircuts to me it’ll eventually appear there. If you happen to be going to ADC 2022 feel free to let me know!


Yocto hardening: Non-root users, sudo configuration & disabling root

Find all of the Yocto hardening texts from here!

Cybersecurity. The never-ending race between you trying to secure your precious IoT device and some propeller head who’s finding the wildest exploits and destroying your system just in the time between breakfast and lunch (although true hackers work during night, not in the mornings). Or perhaps you forgot to update your WordPress plugins, and now your fantastic development blog is hacked. Or perhaps you’ve just been postponing that security update on your Android phone for six months. We all have some experience with cybersecurity.

Usually, there are some simple things to do to prevent the worst catastrophes. Update software, make sure the run-time environment is sane & secure, and encrypt the secrets to mention a few. In this new blog series, I try to explain how to make your Yocto systems more secure by adding some common-sense security to the system. All of these ideas are familiar from Linux desktop & server environments, but I’m going to show how to apply them to Yocto builds. So, unfortunately, I’m not actually going to cover how to keep a fantastic WordPress site secure.

Well okay, no need to be upset. Just press the red “update plugins” button every now and then. And if you’re self hosting, I wish luck to your distro upgrades.

As usual with any security advice, it’s good to be slightly skeptical when reading it. This means that some rando writing blog posts on their blog may not be a 100% trustworthy or up-to-date source on everything. Also, those randos will take no responsibility for the results of their advice. However, their advice may provide useful guidance and general tips that you can apply to your system’s security hardening plan. You do have a security plan, right?

So, the first topic of this Yocto hardening exercise is users. As you may have heard, running and doing everything as the root user is generally considered a Bad Idea. Having services running under the root user unnecessarily can result in the root user executing arbitrary code using the most imaginative methods. Or, if the only user available for logging in is the root user, you’re not only giving away root permissions to malicious users, but to incompetent users as well.

This text will assume intermediate knowledge of Yocto, meaning that I won’t explain every step in depth along the way. Hopefully, you know what an append-file and local.conf are. The code and config snippets were originally written for Yocto Kirkstone checked out “at some point after 4.0.3 release”, and later re-tested with version 4.0.15.

Before we get started with the actual hardening, there’s one preliminary task to do. Ensure that you don’t have debug-tweaks in either IMAGE_FEATURES or EXTRA_IMAGE_FEATURES. Not only is it unsafe, as it allows root login without a password among a bunch of other debug conveniences, but it also makes some examples shown here behave in unexpected ways.
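
In practice this means making sure lines like these don’t end up in your image recipe or local.conf (the EXTRA_IMAGE_FEATURES one is a common default in example configurations):

# Neither of these should contain debug-tweaks in a hardened build:
# EXTRA_IMAGE_FEATURES ?= "debug-tweaks"
# IMAGE_FEATURES += "debug-tweaks"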

Creating non-root users

Here is a code snippet that creates a service user for the system. This user can be used for logging in, and it has sudo capabilities (if the sudo package is installed). Insert this code either into an image recipe or a configuration file:

# If adding to configuration file, use INHERIT += "extrausers" instead.
inherit extrausers

IMAGE_INSTALL:append = " sudo"

# This password is generated with `openssl passwd -6 password`,
# where -6 stands for the SHA-512 hashing algorithm.
# The resulting string is in the format $<ALGORITHM_ID>$<SALT>$<PASSWORD_HASH>,
# and the dollar signs have been escaped.
# This'll allow the user to log in with the least secure password there is, "password" (without quotes)
PASSWD = "\$6\$vRcGS0O8nEeug1zJ\$YnRLFm/w1y/JtgGOQRTfm57c1.QVSZfbJEHzzLUAFmwcf6N72tDQ7xlsmhEF.3JdVL9iz75DVnmmtxVnNIFvp0"

# This creates a user with name serviceuser and UID 1200. 
# The password is stored in the aforementioned PASSWD variable
# and home-folder is /home/serviceuser, and the login-shell is set as sh.
# Finally, this user is added to the sudo-group.
EXTRA_USERS_PARAMS:append = "\
    useradd -u 1200 -d /home/serviceuser -s /bin/sh -p '${PASSWD}' serviceuser; \
    usermod -a -G sudo serviceuser; \
    "

The extrausers class allows creating additional users in the image. This is done simply by defining the desired commands to create & configure users in the EXTRA_USERS_PARAMS variable. As a side note, it's good to have static UIDs for the users, as this makes the builds more reproducible.
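Once the image is built and booted, the result can be sanity-checked on the target. Something along these lines (a hypothetical shell session; sudo only works after the sudoers configuration shown later):

# On the target device
id serviceuser      # should show uid=1200 and membership in the sudo group
su - serviceuser    # log in with the password "password"
sudo whoami         # prints "root" once the sudo configuration below is in place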

In a perfect world, every custom service you create for your system would run under a non-root user. When writing a recipe that creates a daemon or other kind of service, you can use the snippet below to add a new user in a non-image recipe:

inherit useradd
USERADD_PACKAGES = "${PN}"
GROUPADD_PARAM:${PN} = "--system systemuser"
# This creates a non-root user that cannot be logged in as
USERADD_PARAM:${PN} = "--system -s /sbin/nologin -g systemuser systemuser"

As you can see, adding a new user in a non-image recipe is slightly different than in an image recipe. This time we're only giving parameters to the useradd and groupadd commands. After adding this, you should be able to start your daemon as the systemuser in a startup script. If you need root permissions for some functionality in your service but don't want the whole service to run as root, I'd recommend reading into capabilities. It's worth noting that when adding users in a service recipe like this, the additions are done on a per-package basis, not a per-recipe basis. This means that you can create and add new users in quite a flexible manner.
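For instance, if your recipe installs a systemd service for the daemon, running it as the new user is mostly a matter of setting User and Group in the unit file. A minimal sketch, assuming a hypothetical mydaemon binary installed by the same recipe:

[Unit]
Description=Example daemon running as a non-root user

[Service]
# mydaemon is a made-up binary name, replace it with your own
ExecStart=/usr/bin/mydaemon
User=systemuser
Group=systemuser
Restart=on-failure

[Install]
WantedBy=multi-user.target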

The flexibility tends to come with some complications though…

Editing sudoers configuration

By default, adding the sudo package to the build doesn't do much. It doesn't actually allow the users to do anything special, even if they are in the sudo group. That's why we need to edit the sudo configuration. There are two ways of doing this: either editing the default sudoers file or adding drop-in sudoers.d configurations. First, editing the sudoers file (which is the worse method in my opinion).

There are two ways of editing the sudoers file as well, and I'm not 100% sure which one is less bad, so I'll leave it to you to decide. The first option is adding a ROOTFS_POSTPROCESS_COMMAND to the image recipe:

enable_sudo_group() {
    # This magic looking sed will uncomment the following line from sudoers:
    #   %sudo   ALL=(ALL:ALL) ALL
    sed -i 's/^#\s*\(%sudo\s*ALL=(ALL:ALL)\s*ALL\)/\1/'  ${IMAGE_ROOTFS}/etc/sudoers
}

ROOTFS_POSTPROCESS_COMMAND += "enable_sudo_group;"

The second option is creating a bbappend file for the sudo recipe, and adding something like this there:

do_install:append() {
    # Effectively the same magic sed command
    sed -i 's/^#\s*\(%sudo\s*ALL=(ALL:ALL)\s*ALL\)/\1/'  ${D}/${sysconfdir}/sudoers
}

Both do the same thing, and both are hacky. The correct choice really depends on whether you want to edit the sudoers file differently for each image, or whether you want the sudo configuration to depend on something else. You can of course also supply your own sudoers file instead of sed-editing it, which can be a lot more maintainable if you have more than just one change.
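A sketch of that "supply your own file" approach as a sudo bbappend, assuming you ship a complete sudoers file next to the append:

# sudo_%.bbappend
FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
SRC_URI += "file://sudoers"

do_install:append() {
    install -m 0440 ${WORKDIR}/sudoers ${D}${sysconfdir}/sudoers
}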

However, a more flexible way of configuring sudo is with the sudoers.d drop-in files. It also involves a lot fewer cryptic sed commands. Let’s assume that you have a recipe that creates a new user lsuser (not to be confused with a loser), and you want that user to have sudo rights just for ls-command (considering rights given to this user, they may actually be a loser). For this purpose you need to create a bbappend for sudo, and add something like this snippet to create a drop-in configuration:

do_install:append() {
    echo "lsuser ALL= /bin/ls " > ${D}${sysconfdir}/sudoers.d/lsuser
}

FILES:${PN} += " ${sysconfdir}/sudoers.d/lsuser"

Again, if your drop-in configuration is more complex than one line, you can provide it as a configuration file through SRC_URI and install that in your image instead of echoing configuration lines. I recommend reading sudo-configuration documentation for more info on how to precisely set the permissions because let’s be honest, these two examples aren’t the shining examples of sudo configuration.

The downside of this approach is that you need to add the drop-in configuration in a sudo bbappend, because /etc/sudoers.d will conflict during root filesystem creation if you add the drop-in from another recipe. This means that when you create a user in a non-sudo recipe and add the drop-in conf in the sudo recipe, the user creation & configuration end up being handled in two different places, which is perfect for forgetting to maintain things.

Disabling root-login

Disabling root login is useful for a multitude of reasons.

  1. You usually want to make sure that no user has the full command set available
  2. You also want the users’ privileged actions to end up in auth.log when they use sudo
  3. In general, it’s “a bit of a risk” to allow users to log in as root

Disabling the root user can be achieved in multiple ways, and I’m going to cover three of them here: fully locking the root account, removing the login shell, and disabling the SSH login. In theory, locking the root account should be enough, assuming non-root users cannot unlock it, but I’ll present all the methods as they may be useful for different situations.

First, disabling the root account. This is a method I found in this StackOverflow answer, kudos there. The simple snippet below, pasted into the image recipe, should lock and expire the root account:

inherit extrausers
EXTRA_USERS_PARAMS:append = " usermod -L -e 1 root; "

Then, removing the login shell. Or rather, setting the login shell to nologin. This isn’t strictly required after locking the root account, but it can be done by creating a bbappend for the base-passwd recipe as shown below:

do_install:append() {
    # What this magic sed does:
    # "In the line beginning with root
    # replace /bin/sh with /sbin/nologin
    # in file passwd.master"
    # (passwd file gets generated from this template during install)
    sed -i '/^root/ s/\/bin\/sh/\/sbin\/nologin/' ${D}${datadir}/base-passwd/passwd.master
    # sry for the picket fence

    # I think it's peak sed that I need a four line
    # comment to explain a one-liner
}

And finally, disabling the SSH login. You should always disable root login over remote connections. If needed, you can disable just the SSH login for root and still allow logging in as root locally. This is still a better option than doing nothing, as it prevents logging in as root remotely, and attackers would need to know two passwords to gain root access (one for a service user, and the other one for the root user).

Disabling the root login varies a bit between the SSH servers. Dropbear disables root login by default with the -w parameter in the DROPBEAR_EXTRA_ARGS variable located in the /etc/default/dropbear file (note that if you have debug-tweaks enabled, the file actually contains -B, allowing root login). You can overwrite the file with your own dropbear.default file during the build if you want to add more options.

Similarly, OpenSSH-server disables root logins with passwords if debug-tweaks is removed from the IMAGE_FEATURES (and allows them if it’s present). This is achieved by sed-editing SSH configuration files. If you want to see how this is exactly done, check ssh_allow_root_login function in meta/classes/rootfs-postcommands.bbclass (part of poky).

However, it’s worth noting that this default behaviour doesn’t prevent public key authentication. If you want to disable that as well, you can add this function as a rootfs post-process task to the image recipe. And if needed, it could of course be modified to work in a bbappend as well.

# The function here searches sshd_config and sshd_config_readonly files for a
# commented line containing PermitRootLogin, and replaces it with "PermitRootLogin
# no" to prevent any sort of root login.

disable_rootlogin() {
    for config in sshd_config sshd_config_readonly; do
        if [ -e ${IMAGE_ROOTFS}${sysconfdir}/ssh/$config ]; then
            sed -i 's/^[#[:space:]]*PermitRootLogin.*/PermitRootLogin no/' ${IMAGE_ROOTFS}${sysconfdir}/ssh/$config
        fi
    done
}

ROOTFS_POSTPROCESS_COMMAND += "disable_rootlogin;"

The thing to note about disabling root login is that too lenient sudo or filesystem permissions for non-root users can make the whole exercise useless. For example, if a service user has sudo access to the passwd command, they can easily unlock the root account. If a service user has write permissions to /etc, they can set root’s login shell and edit the SSH configuration. And finally, disabling root login as shown in this text does nothing to prevent shell escapes made possible by incorrectly set sudo permissions.
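To illustrate that last point, here’s a hypothetical sudo rule (not from the earlier snippets) that looks harmless but effectively hands out a root shell, because vi can spawn a shell from within:

# /etc/sudoers.d/editor -- effectively full root access:
#   lsuser ALL= /usr/bin/vi
#
# as lsuser:
sudo vi
# then inside vi:
:!sh    # this shell runs as root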

Might as well roll out the red carpet

That’s it for this time! Hopefully, these tips helped you to achieve the minimum security in your Yocto images. Do these code snippets make your system unhackable? Not even close. Do they make it slightly more inconvenient? I hope so (but no guarantees for that either). But at least now you can show your Yocto image to an independent professional security consultant and say that you tried before they tear the whole system to pieces.

You can find the second part of the Yocto hardening series here. It’s about fixing CVEs.

How to build Fritzing for Windows

Fritzing is an open-source tool used to design and draw electrical wiring circuit diagrams, like this:

It can also be used to draw schematics and PCB diagrams, so it’s a really handy program indeed.

Why am I covering an open-source tool’s build process here? Because it was a surprisingly difficult process. You can buy the prebuilt version for 8€ and support the developer, but if you’re an adventurous soul (or a cheapskate) you of course want to build it yourself. I’m going to say that it took me almost a full afternoon to figure this build out, so the hourly rate wasn’t really that good, and the software is so useful that I’ll most likely be paying that 8€ anyway.

Here you can find the official Fritzing build instructions wiki. It’s a bit confusing at some points, but I’ll mostly follow the steps there, and mention it when I don’t.

This guide assumes you have a Windows machine with git, Visual Studio 2019 and a “sufficiently new” CMake installed.


Installing (correct) Qt version

If you have not installed Qt already, this step is quite straightforward. Download the Qt installer from here, agree with their open source policy and install the 5.15.2 version of Qt.

ERRATA: I did some more digging after publishing this text and realized that there actually is a simpler way to install multiple Qt versions than described in the next chapters. In the root folder of the Qt installation there is a tool named MaintenanceTool.exe. This tool can be used to install, remove, and update versions of Qt. So use it instead of following these instructions. However, I’ll leave the chapters here as proof that not everybody on the Internet is smart.

However, if you’ve already installed a different version of Qt, this step was a bit tricky to complete (or at least I managed to make it difficult for myself). Qt Creator (Qt’s IDE) itself doesn’t allow downloading different versions of Qt, and the official documentation only says that if you want to add a new version of Qt to Qt Creator, you need to locate a qmake file. But where is this qmake file?

In the end, I couldn’t quite find a satisfactory answer to this. What I did was that I used the same installer as with the clean build, and installed the desired version of Qt in a different location. However, the installer forces you to install Qt Creator again, so it pollutes your system a bit.

If you’re not seeing the version you’d like to see, try checking Archive & LTS boxes from the menu on the right

After the version is installed, it can be linked in Qt Creator using this guide. Basically, just navigate to Edit->Preferences->Kits->Qt Versions->Add in Qt Creator and add the mythical qmake executable there (the executable is in a path something along these lines: Qt\5.15.2\msvc2019_64\bin\qmake.exe)

There is a small possibility that I’m just dumb and there is an “Install Qt version” button somewhere in the depths of the Qt Creator and I just couldn’t find it. But this approach at least works, even though it installs a bit of extra to the system. I also found the source packages for different versions, but didn’t feel like compiling Qt just to get Fritzing up and running.

Downloading the sources

There are a few repos that need to be pulled for the build. The first one is obviously the Fritzing app itself. Besides that, boost version 1.x.0 and libgit2 version 0.28.x are needed for building. For running the application you’ll also want the Fritzing parts repo to get some actual components for your diagrams. All these repositories should be placed side by side so that you’ll end up with something like this:

Versions I used were:

  • fritzing-app: f0af53a9077f7cdecef31d231b85d8307de415d4
  • fritzing-parts: 4713511c894cb2894eae505b9307c6555afcc32c
  • libgit2: v0.28.5
  • boost: 1.79.0
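For reference, fetching these could look roughly like this, assuming the usual GitHub locations for the Fritzing and libgit2 repositories (boost is easiest to grab as a release archive from boost.org):

git clone https://github.com/fritzing/fritzing-app.git
git -C fritzing-app checkout f0af53a9077f7cdecef31d231b85d8307de415d4
git clone https://github.com/fritzing/fritzing-parts.git
git -C fritzing-parts checkout 4713511c894cb2894eae505b9307c6555afcc32c
git clone --branch v0.28.5 https://github.com/libgit2/libgit2.git
# boost 1.79.0: download and extract the release archive so that a boost_1_79_0
# folder ends up next to the repositories above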

Compiling dependencies

The next step is to compile libgit2. This is where I hit a big problem. The Fritzing wiki instructs you to build with -DBUILD_SHARED_LIBS=OFF. However, with this flag the build doesn’t actually output the .dll file required later on in the build. So the commands I actually used to build libgit2 were:

cd libgit2
mkdir build64
cd build64
#Note that BUILD_SHARED_LIBS is ON
cmake .. -G "Visual Studio 16 2019" -A x64 -DBUILD_SHARED_LIBS=ON -DBUILD_CLAR=OFF
cmake --build . --config Release

Boost and Fritzing parts shouldn’t require any compilation at this stage.

Compiling Fritzing

Next, we get to open Qt Creator and the phoenix.pro file located in the root of the fritzing-app folder. Configure it for Qt version 5.15.2, and as the build wiki instructs, add the following to Projects->Run->Command Line Arguments in Qt Creator:

-f "/path/to/fritzing-app/" -parts "/path/to/fritzing-parts/" -db "/path/to/fritzing-parts/parts.db"

After this is done there’s still one more hurdle to overcome. Building now seems to result in this error:

error: dependent 'F:\Esa\Documents\Fritzing\debug64\ui_fabuploaddialog.h' does not exist.

To fix this issue, we’ll need to navigate to the build folder that gets generated alongside the fritzing-app folder, and is named like build-phoenix-* (rest of it depends a bit on your build configuration). There we need to use Qt’s jom to build compiler_uic_make_all target:

P:\ath_to_QT\Tools\QtCreator\bin\jom\jom.exe -f Makefile.Debug compiler_uic_make_all

This will generate the missing headers to the debug64-folder that’s also alongside the fritzing-app and build folders. After this, the build should be as simple as clicking the green arrow in Qt Creator. If you get a boost-include error, make sure you have the boost folder directly under the boost_1_79_0 folder, and that you don’t have a structure boost_1_79_0\boost_1_79_0\boost as I did. This wrong structure resulted in the following error:

F:\Esa\Documents\Fritzing\fritzing-app\src\svg\groundplanegenerator.cpp:40: error: C1083: Cannot open include file: 'boost/math/special_functions/relative_difference.hpp': No such file or directory

Once the build completes, Fritzing will launch. However, because of the -db argument given on the command line, the actual program won’t start. Instead, Fritzing generates the parts database and closes itself once the process finishes.

Running Fritzing

Congratulations, you should now have built & prepared Fritzing! After the initial build & database generation remove the -db argument from the run arguments so that Fritzing starts properly with Qt Creator. This type of launch was a good enough solution for me, and I didn’t feel like going through the hassle of creating an actual executable for Fritzing. I think I can pay 8€ for that pleasure.

Stay tuned for texts that include more Fritzing diagrams!

Yocto and WSL, part 3: WSL vs. VMWare

Read the previous parts of “Yocto and WSL” series:
Yocto? On WSL2? Easier than you think!
Yocto and WSL, part 2: The Graphic Boogaloo

The ultimate showdown of the ultimate destiny!

All the good things are trilogies. Star Wars original trilogy, Nolan’s Batman trilogy, and The Hobbit trilogy. And now, Yocto & WSL trilogy. As per usual with the trilogies, the last part may be a bit of a letdown for the hardcore fans. This time we’re not doing really anything technical or exciting. Instead, we’re comparing some numbers while trying to decide if this whole exercise was worth it or not.

To judge the usefulness of WSL in the Yocto build context, we’re doing a comparison between a WSL machine and a “regular” virtual machine to see which one is actually better for building Yocto (if you’re forced to use Windows). Better means faster in this case. For this purpose, I created a VMWare virtual machine that matched my WSL machine’s specs: 4 cores, 8GB of memory, and enough disk space to accommodate the build area. For the operating system I chose Ubuntu Server 20.04.4, and as a reference build target I used the simplest core-image-minimal recipe on both machines.

But wait, there’s more!

As a bit of an extra, I’ll be adding a comparison to a cloud build server as well for the same price! For this purpose, I prepared a similar 4-core 8GB build server on Hetzner. This is a virtual machine as well, but it’s interesting to have more things to compare to. Especially because the cloud instances provide an option to scale up in case they aren’t powerful enough. Also, like WSL and VMWare, they can be used as an alternative for a Linux build machine. Without further ado, let’s get into crunching numbers.

First, to actually make a worthwhile comparison between different build times, I’m going to separate the process of downloading the sources from the actual build itself. I did three runs of downloads because “third time’s the charm”. VMWare was using a NAT network connection here. The separation was done because the time it takes to download the sources depends on the load on the source code servers, how much my neighbor happens to be using the internet at the given moment, and the alignment of the stars. Or what do you think of these results:

              Mean      Median    Run 1     Run 2     Run 3
WSL           44m 4s    52m 1s    52m 1s    59m 29s   20m 42s
VMWare        46m 20s   45m 5s    37m 28s   56m 29s   45m 5s
Cloud build   35m 18s   33m 40s   33m 40s   41m 49s   30m 26s
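(In case you want to reproduce the download/build separation, it can be done with something along these lines; --runall=fetch downloads everything the image needs without building it:)

# timed separately: fetch all sources
bitbake core-image-minimal --runall=fetch
# timed separately: the actual build, with sources already in place
bitbake core-image-minimal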

There were two packages that became bottlenecks in these tests: cross-localedef-native and linux-yocto. Especially the former. Other packages were downloaded in a few minutes but cross-localedef-native took almost always at least half an hour to download. In the WSL run #3 cross-localedef-native took only about 15 minutes to download, and instantly the whole download process was a lot quicker. In general, I would still boldly claim that Hetzner build is potentially the fastest on average. Or at least it would make sense, as the Internet pipes in their data centers are most likely wider than mine at my home.

That good ol’ YouTube player brings a nostalgic tear to my eye

Then, the actual build times. For these, I actually did four runs, because “third time’s the charm and one more for good measure” (this blog text isn’t quite as scientific as I make it out to be). As mentioned earlier, I built core-image-minimal in all these runs. These builds were done without downloading the sources, in an attempt to keep the build times a bit more stable. Here’s the table:

              Mean       Median     Run 1      Run 2      Run 3      Run 4
WSL           109m 15s   106m 48s   106m 11s   121m 26s   107m 26s   103m 37s
VMWare        125m 51s   125m 51s   126m 33s   126m 32s   125m 10s   125m 9s
Cloud build   106m 30s   105m 35s   111m 10s   105m 29s   105m 3s    105m 40s

As perhaps expected, the virtual machine was consistently the slowest option. What is slightly surprising, however, is that WSL was almost as fast as the cloud build server with the same resources. Or actually, with better resources, as the build server had an SSD while WSL was running on an old HDD I bought from a friend of mine for 30 euros some years ago.

But yeah, in the light of this evidence I’d say that if you’ve got some dollars to spare, getting a cloud build server is the best option for building Yocto (if you can’t get a native Linux build machine that is). It’s not only the fastest option, but it’s also scalable, so when you’re building something else than the “trivial” core-image-minimal you can easily increase your build performance. I’ll admit that the time difference isn’t big in this comparison, but once you build more complex images the differences in the build times will become more apparent.

The cloud server used in this comparison costs about 15€ a month. If you want to give the cloud build a go, you can use this super-neat affiliate link to sign up and get 20€ worth of free credits. And maybe give me 10€ in the process if you happen to spend some actual money on the platform.


However, if you don’t have enough dollars for about two avocado toasts or three lattes every month, WSL is a better alternative than VMWare for building Yocto. Virtualbox seemed to be even worse than VMWare as it didn’t even want to build anything for me. It just spat out constant disk I/O errors that I simply couldn’t get fixed. Different caching options, physical disk drives, or even guest operating systems didn’t fix that.

But this is not all! You thought you didn’t have to read any more of these tables, didn’t you? I did some generic performance tests to get a bit better understanding of the different build machines, so I’ll add their results here as a bit of a bonus. The first one is the network speed test. I used Ookla’s Speedtest command-line tool for these. The values are averages of “about a dozen” runs.

              Latency   Download       Upload
WSL           3.76ms    104.22 Mb/s    59.85 Mb/s
VMWare        3.94ms    104.13 Mb/s    59.88 Mb/s
Cloud build   0ms       9266 Mb/s      9220 Mb/s

There’s a bit of a difference in the latency between WSL and virtual machine, but the differences in the download and upload speeds are marginal. Cloud build is something entirely different than the other two, as expected.

Next is the hard drive speed, both reading and writing. For this, I used dd command-line tool. For the write test I used this command:

dd if=/dev/zero of=test bs=1G count=4 oflag=dsync

For the read test I used this command:

dd if=./test of=/dev/null

For the un-cached timings I cleared the caches with this command between the runs:

echo 3 | sudo tee /proc/sys/vm/drop_caches

For the cached timings I didn’t. Here are the results:

              Write       Write (cached)   Read        Read (cached)
WSL           91.6 MB/s   93.7 MB/s        91.9 MB/s   628 MB/s
VMWare        50.6 MB/s   107.5 MB/s       338 MB/s    559 MB/s
Cloud build   1.2 GB/s    1.3 GB/s         603 MB/s    1.2 GB/s

This is where the cloud machine’s SSD seems to shine, even if that didn’t show as clearly in the actual build work. The differences between WSL and VMWare, on the other hand, are a bit mixed. What these results seem to suggest is that uncached write performance is quite important for Yocto build performance. On the other hand, they also suggest that overall mass storage performance isn’t the decisive factor for Yocto builds.

Finally, the CPU speed test. This was done with sysbench, using the command sysbench cpu --threads=4 run. Here I compared the CPU events per second:

              CPU events per second
WSL           5359.31
VMWare        5284.58
Cloud build   14582.81

All things considered, it’s slightly surprising how small the cloud build’s margin of victory actually was when comparing the build times. Especially considering how much more performant the CPU is according to the raw numbers. The most important thing actually seemed to be the number of cores, because a three-core cloud machine would already be slower for Yocto builds than a four-core VMWare virtual machine. So please remember this when you’re setting up your Hetzner cloud instance after you’ve registered through this link (I promise this affiliate pushing won’t become a thing).

That’s all for now. This is also most likely the last text about Yocto builds with WSL for the time being. Next time it’s either something different that’s not related to Yocto at all, or then it’s something similar that is related to Yocto but not WSL.

tl;dr: WSL is faster than a virtual machine, but I recommend using cloud servers if possible

Writer of this blog post is wondering how he ever managed to finish his thesis

Yocto and WSL, part 2: The Graphic Boogaloo

Read the other parts of “Yocto and WSL” series:
Yocto? On WSL2? Easier than you think!
Yocto and WSL, part 3: WSL vs. VMWare

So, in the previous part, we got the Yocto built with WSL and tested it out with a text-based terminal interface. This time we’re going to improve things a bit by getting the graphics working! Like every engineer always says, “it’s really nice to have this fancy graphical interface with buttons that have rounded corners instead of an efficient, text-based interface”.

(After publishing this text it was brought to my attention that WSL on Windows 11 actually supports GUI applications natively. I’m not sure how this works with QEMU but I’d recommend trying that out before going through all the hassle mentioned in this text. If you’re still stuck on Windows 10 like me these steps are at least currently applicable)

The process of getting graphics up and running in the WSL consists of three steps:

  1. Make a hole into the Windows firewall (starts promisingly) so that WSL can communicate with an X server
  2. Launch an X server with the access control disabled (this sounds great as well)
  3. Set the environment variables in WSL

The instructions I used to achieve this can be found in this great blog post. However, for the sake of documentation and out of the fear of broken hyperlinks, I’ll go through the steps here as well.

The hole in the firewall needs to be made because the networking in WSL2 is implemented in a way where the traffic is considered to be coming from an external source (as opposed to WSL1). The firewall rule can be set using the following steps (there’s also a PowerShell sketch after the list):

  1. Search “Windows Defender Firewall with Advanced Security” from the Start Menu
  2. Click “Inbound Rules” on the left-hand side menu, and then click “New Rule…” on the right-hand side menu
  3. Select rule type to be “Port”, set the port to 6000 and give it a good name. The rest of the settings should be good by default (TCP protocol & allowing all kinds of access)
  4. Right click on the newly created rule in the rule list, select Properties and from there select Scope-tab. The limited scope prevents unwanted entities from exploiting the poor firewall’s new rule. Check the image below for Scope-settings:
Basically, we’re limiting the connections just to private IP-addresses
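(If you’d rather not click through the GUI, something like the following in an elevated PowerShell should create an equivalent rule; the -RemoteAddress ranges mirror the private-address scope settings above:)

New-NetFirewallRule -DisplayName "WSL2 X server" -Direction Inbound -Protocol TCP -LocalPort 6000 -RemoteAddress 10.0.0.0/8,172.16.0.0/12,192.168.0.0/16 -Action Allow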

And just like that, the firewall should allow the traffic from WSL2 to the X server. The X server that we haven’t even installed yet. I have been using VcXsrv X server, and the next step is downloading and installing it. After VcXsrv has been installed we can continue.

As briefly mentioned earlier, the access control needs to be disabled when launching the server to make things go smoothly. Just use -ac flag when launching the X server from the command line to disable access control. The full command to launch VcXsrv from a Windows terminal is:

vcxsrv.exe :0 -multiwindow -wgl -ac -silent-dup-error 

The first parameter sets the display number to zero, the second one allows multiple windows, the third one doesn’t seem to be necessary after a quick test but I’ll still leave it here because it was in the original post, the fourth parameter is the important one that disables the access control, and the last one prevents errors if launching multiple VcXsrv servers. After getting the server running, we’re almost ready to get into the exciting world of graphical content. Open up a WSL session and export the following two variables (in the WSL terminal):

export DISPLAY=$(awk '/nameserver / {print $2; exit}' /etc/resolv.conf 2>/dev/null):0
export LIBGL_ALWAYS_INDIRECT=1

The first variable sets the display we want to use (basically the WSL nameserver address and display 0), and the second variable should prevent skipping the X server when calling draw commands. I’m not 100% sure what that means, but it sounds important so it’s better to set it.

And now, finally, we’re ready to check out the graphical interface! If you’ve closed your WSL session in the past two months it took me to write this second blog post, first head to your poky-folder and run source oe-init-build-env again. Then you should be able to run:

runqemu
#look, no no-graphic option this time, wowzers!
Wait, I was promised graphics, what is this? I want to click icons and drag-n-drop stuff around.

Well, the thing is that the core-image-minimal image we built in the previous blog post doesn’t have a graphical interface built. While technically this new window is a graphical, emulated window, it’s not a graphical user interface. To get an image with these capabilities, we need to build core-image-sato instead (sorry).

So yeah, navigate to your build-folder and run bitbake core-image-sato. Even though the build reuses some of the artifacts from the previous build this will take some time, so be prepared to find something else to do for a while. After the build has been completed, you should be able to run runqemu again, and see something like this:

Awww yisssss

The apparent problem is the quite significant latency introduced by QEMU, and running QEMU on top of the WSL doesn’t do the situation any favors. Technically you can run the image also on a Windows native QEMU, but I didn’t see any noticeable difference in the GUI performance. My method of analysis was not the most scientific one though: I just pressed a button and tried to internally analyze how frustrated I got before something happened in the GUI. With both WSL QEMU and Windows native QEMU I got “quite frustrated”, so I’d call it a tie.

I didn’t do any actual benchmarking on whether there’s a difference in computing performance; perhaps in some upcoming blog post I could write a bit about that. It seems like the Windows native QEMU lacks some options (I couldn’t get the networking working (more like notworking amirite)), so it may be better to stick to the WSL QEMU.

That’s all for this time. Hopefully, you learned something from this post, I at least did. Once again big shout-out to Skeptric-blog, plenty of other interesting texts in there as well.

Part 3 is now available here!

PS: I disabled the comments on this blog. It seems like the only ones writing those were freaky spambots. If you’ve got something to ask or say you can reach me on LinkedIn.

Yocto? On WSL2? Easier than you think!

Read the other parts of “Yocto and WSL” series:
Yocto and WSL, part 2: The Graphic Boogaloo
Yocto and WSL, part 3: WSL vs. VMWare

(Sorry for the clickbaity title, but the WordPress Headline Analyzer gave it the best score I’ve ever seen so I can’t change it)

Let’s begin this post with the basics. What are these words and acronyms in the title? Some nerd stuff? Well, yes. Yocto is a project/tool that can be used to generate a cross-compilation toolchain and a firmware image for an embedded system. This sort of work includes never-ending woes with bootloaders, Linux, and much, MUCH more. Windows Subsystem for Linux (= WSL) is “a compatibility layer for running Linux inside Windows”. It’s not really a virtual machine (at least according to Microsoft) or a container, despite a virtual machine and virtualization being at its core (or so I understood). What is it then, exactly?

Well, let’s call it a virtualization layer and settle for a fact that it’s a way of running a Linux-like environment inside Windows with relative ease. And as you can guess from the Yocto project’s description, we’re going to be needing plenty of that Linux.

First things first: setting up WSL2. There are plenty of tutorials and official documentation on this topic, so I’m not going to go too in-depth here (although I’ll go through the issues I faced along the way). I recommend checking out at least this official documentation (there actually seems to be an easier way of installing WSL these days, but I haven’t tried it). In a nutshell, the steps that should be done are (there’s a rough PowerShell sketch after the list):

  • Enable WSL feature in Windows (and reboot)
  • Install WSL2 kernel update (most likely reboot after this one won’t hurt either)
  • Set WSL2 as the default WSL version (reboot isn’t really needed here, but it’s always fun to stare at that picture of a cave that’s the log-in screen background and think how you could be somewhere else. Somewhere warmer where the sun shines)
  • Install the desired distribution.
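For reference, the same steps look roughly like this in an elevated PowerShell (a sketch; the newer one-command wsl --install route mentioned above may handle all of this for you):

dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
# reboot, install the WSL2 kernel update package, and then:
wsl --set-default-version 2
# install the distribution (also available from the Microsoft Store):
wsl --install -d Ubuntu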

It’s Ubuntu that you desire. Or at least I recommend it because it’s the most widely supported Linux distro [citation needed] and the Yocto guide explains the steps a bit more specifically for Ubuntu users. Other distros still work, or at least should work. Yocto claims to support “recent releases of Fedora, openSUSE, CentOS, Debian, or Ubuntu”, whatever that means.

However, this installation phase was where I started to face problems. When installing Ubuntu I got the following error:

Installing, this may take a few minutes…
WslRegisterDistribution failed with error: 0x80004005
Error: 0x80004005 Unspecified error

Despite the ominous-sounding “Unspecified error” this was caused by the simple fact that I didn’t have enough disk space on the disk that contained AppData-folder (I guess this defaults to C:). There seem to be some other explanations for this error on the Internet, but I recommend freeing space if you’ve filled your C: with junk (as I assume every Windows user has done). 2GB of free space wasn’t enough, 7GB seemed to do the trick.

After doing this and retrying, I got the following error instead:

WslRegisterDistribution failed with error: 0xc03a001a
Error: 0xc03a001a The requested operation could not be completed due to a virtual disk system limitation. Virtual hard disk files must be uncompressed and unencrypted and must not be sparse.

This one was a bit trickier, but I found the fix here. Basically, you need to navigate to %LOCALAPPDATA%/packages/, check the (advanced) properties of CanonicalGroupLimited.UbuntuonWindows-folder and ensure that it isn’t compressed or encrypted.

What, an actually useful image in this blog? Didn’t see this one coming. Fortunately, my bad paintwriting makes it feel a bit less useful.

And after these trials, the Ubuntu image should finally install without further problems. Or at least it did for me, YMMV. Once you get the WSL up and running, I recommend playing around with it a bit if it’s not familiar to you yet. If you don’t know where to start, this video can be useful (About 3:40 onwards). To be honest, so far the biggest issue I’ve had with WSL is how simple it is, I was expecting it to be a lot more complex and cumbersome.

After getting the basics of WSL down, it’s time to get started with Yocto. Well, almost. As you may guess, the Yocto project takes up disk space. A lot of it. The official requirement at the time of writing is 50GB. However, as you may remember from the earlier chapters, I have barely 7GB free on the disk where the WSL image gets installed by default.

A bit of googling revealed two useful options in Windows’ WSL command-line tool: export and import. These can be used to pack the image and then load it in a non-default location. The unregister option can be used to get rid of the obsolete machine. So in short, the PowerShell commands to be run on Windows are:

wsl --export Ubuntu E:\yoctotest\testDistro.tar
wsl --unregister Ubuntu
wsl --import UbuntuCustom E:\yoctotest\UbuntuCustom E:\yoctotest\testDistro.tar
#Or something similar, depends a bit on what you actually want and what drives you have
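(One side note based on my experience with imported distros: after --import, WSL may log you in as root by default. If that happens, setting the default user inside the distro and restarting WSL should fix it:)

# /etc/wsl.conf inside the imported distro (run "wsl --shutdown" afterwards)
[user]
default=yourusername    # replace with your actual username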

As you may have noticed at this point, all of my problems were caused by the lack of disk space on C: drive, so if you don’t have that problem I guess your experience will be a lot smoother.

Just to prove to you that I have learned absolutely nothing from this “out of disk space”-experience, here is a 1.5MB gif.

Finally, after getting the WSL image ready and in the correct location, it is time to properly start working on the Linux side of things. There’s one more thing to do before moving to Yocto: updating apt and installing zstd. I’m not sure why zstd isn’t listed in Yocto requirements, but my build failed without it. These commands can be run in the Ubuntu shell to get the things ready for Yocto:

sudo apt update
sudo apt install zstd

After that, it’s mostly just a matter of following the Yocto quick start, with the difference of building core-image-minimal instead of core-image-sato. The Sato image is the GUI image, but we don’t have graphics yet, so it’s better to stick to core-image-minimal. In a nutshell, the steps for the Yocto build are (the corresponding commands are sketched after the list):

  • Install dependencies (no need to reboot)
  • Clone poky-repository and check out the latest LTS (=long-term support) branch dunfell (interestingly, no reboot)
  • Source the build environment (don’t reboot, you’ll lose the sourced environment)
  • Build the core-image-minimal. Did I mention core-image-minimal yet?
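The corresponding commands in the Ubuntu shell look roughly like this (a sketch following the quick start; the dependency list is abbreviated, check the Yocto documentation for the full set):

# build host packages (abbreviated, see the Yocto quick start for the full list)
sudo apt install gawk wget git diffstat unzip texinfo gcc build-essential chrpath socat cpio python3 python3-pip xz-utils
# get poky and check out the LTS branch
git clone git://git.yoctoproject.org/poky
cd poky
git checkout dunfell
# set up the build environment and build
source oe-init-build-env
bitbake core-image-minimal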

Yocto-build with WSL will print a warning that you better be sure that you have enough disk space, but besides that it should work normally. With the default settings, the build will eat pretty much all of the resources your machine has (it’s been optimized that way), so don’t be afraid if things start to freeze a bit. Especially memory tends to be a bottleneck, partially because of WSL. However, I was able to watch a Youtube video most of the time while waiting for the thing to finish, so I guess it doesn’t render the computer completely useless. During the processing of the heavier components it may be better to do something with a video game console of your choice though.

Bleep bloop

And if the stars are aligned, the build will complete in an hour or two (or six)! The dunfell-branch is usually quite stable (as LTS branches should be), so the build shouldn’t be in a broken state. After the compilation is done, you can try out the generated image with the following command:

runqemu -nographic

The runqemu command is Yocto’s wrapper for QEMU, an emulator that can be used to run the built images (and plenty of other things outside Yocto as well). runqemu helps to automatically find and set up the image for the emulator. The -nographic option prevents loading a graphical shell in a separate window, as the graphics are not set up for WSL yet.

And that’s it! It is easy once you know how to do it. Like pretty much everything else I guess. But I was pleasantly surprised that Yocto officially supports building in WSL, so there was no need for (dirty) hacks. The memory seems to be a bit of a problem, but that’s always the case. In the next post there’ll hopefully be some graphics!

Part 2 is now available here!