
My First Plug-In: Pastel Distortion

It’s time to finish a project. Lately, I have been mostly interested in embedded tinkering, but I’m also fascinated by audio and DSP programming. Partially because it is an interesting field, but mostly because I make music as a hobby, so it’s interesting to see how virtual instruments and audio effects work. So, in this text I’m presenting my first full-fledged and complete VST plug-in, Pastel Distortion. In a way it’s my second plug-in, as I previously made the Delayyyyyy plug-in (mentioned in some older texts on this blog as well), but that project was abandoned in a state that I can’t quite call complete. However, here’s a screenshot of something that I actually have completed:

tada.wav

In short, VST plug-ins are software used in music production. They create and modify sound based on the information they’re given by the VST host, which is usually a digital audio workstation. Plug-ins are commonly chained together so that one plug-in’s output is connected to the next one’s input. This is all done in real time, while the music is playing.

There will be free downloads at the end, but first, let’s go through the history of the project, some basic theory, and a six-paragraph subchapter that I like to call “I’m not sponsored by JUCE, but I should be”.

History of the Project

About two and a half years ago I started working on a distortion VST plug-in following this tutorial. Half a year after that, I got distracted when I thought about testing the plug-in (which resulted in this blog text and my first conference talk). As a side note, it may say something about the development process and schedule that testing was “thought about” six months after starting the project. A year after that I got a Macbook for Macing Mac builds and got distracted by the new shiny laptop. And some time after that I made some overenthusiastic plans for the plug-in that didn’t quite become reality, and then I forgot to develop the plug-in.

The timeline is almost as confusing as the Marvel Multiverse and full of delays, detours, and time loops. In the end, I’ve simply come to the conclusion that I’ll release Pastel Distortion as it is and add the new cool features later on if there’s interest in the plug-in. If there’s no interest, I can start working on a new plug-in instead, so it’s a win-win situation. But let’s finish this one first.

What Is Waveshaping Distortion?

The Physics

In the real world that surrounds us all, sound is a change of pressure in a medium. Our ears then receive these changes of pressure and turn them into some sort of electricity in the brain. In short, it’s magic; that’s the best way I can explain it. To translate this into the world of computers, a microphone receives changes of pressure in the air and converts them into changes in electricity, which an analogue-to-digital converter then turns into the ones and zeros understood by a computer. Magic, but of a slightly different kind.

I asked AI to generate an infographic for this section. Hopefully this helps you to understand.

After this transformation, we can process the sound in the digital domain, and then convert it back into an analogue signal and play it out from speakers. Commonly the pressure/voltage changes get mapped to numbers within some range. One common range is [-1.0, 1.0]. The values -1.0 and 1.0 represent the extreme pressure changes where the microphone’s diaphragm is at its limit positions (= receiving a loud sound), while the value 0 is the position where it receives no pressure (= silence).

Well, I also tried drawing the information myself. I’m not sure which one is better.
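As a rough illustration of that mapping (assuming 16-bit signed integer samples, which is a common but not universal format), the conversion is essentially a division by the maximum sample magnitude:

#include <cstdint>

// Sketch: convert a 16-bit PCM sample into the [-1.0, 1.0] float range
float toFloatSample (int16_t rawSample)
{
    // 32768 is the magnitude of the most negative 16-bit value
    return static_cast<float> (rawSample) / 32768.0f;
}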

The Maths

Now we’ve established what sound is. But what is waveshaping distortion? You can think of it as a function that gets applied to the sampled values. Let’s take an example function that does not actually do any shaping, y=x:

This is quite possibly the dullest shaping function. It takes x as an input and returns it. However, this is in principle what waveshaping does: it takes the input samples from -1.0 to 1.0, puts them into a mathematical function, and uses the output as the new samples. Let’s take another example, y=sign(x):

This takes an input sample and outputs one of the extreme values. You can emulate this effect by turning up the gain of a microphone, shoving the mic in your mouth, and screaming as loud as possible. It’s not really a nice effect. Finally, let’s take a look at a useful function, y=sqrt(x) where x >= 0, y=-sqrt(-x) where x < 0:

Finally, we get a function that does something but isn’t too extreme. This will create a sound that’s more pronounced because the quieter samples get amplified. Or, in other words, values get mapped further away from zero. The neat part is that the waveshaper function can be pretty much anything. It can be a simple square root curve like here. It also can be a quartic equation combined with all of the trigonometric functions (assuming your processor can calculate it fast enough). Maybe it doesn’t sound good, but it’s possible.
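To make the idea more concrete, here’s a rough C++ sketch of the three example shapers above applied to a single sample (an illustration, not the actual Pastel Distortion code):

#include <cmath>

// y = x: passes the sample through unchanged
float shapeIdentity (float x)   { return x; }

// y = sign(x): slams every sample to one of the extremes
float shapeSign (float x)       { return (x < 0.0f) ? -1.0f : 1.0f; }

// y = sqrt(x) for x >= 0, y = -sqrt(-x) for x < 0: pushes quiet samples further from zero
float shapeSquareRoot (float x) { return (x < 0.0f) ? -std::sqrt (-x) : std::sqrt (x); }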

But Why Bother?

It’s always a good idea to think about why something is done. Why would I want to use my precious processor time to calculate maths when I could be playing DOOM instead? As an engineer, I’m not 100% sure; I think it has something to do with psychoacoustics, which is a field of science I know nothing about, and to be honest it sounds a bit made up. From a music producer’s point of view, I can say that distortion effects give the sound more character, warmth, and loudness (and other vague adjectives that don’t mean anything), so it’s a good thing.

Implementing the Distortion

I have talked about JUCE earlier in this blog, but that was so long ago that it’s probably been forgotten, so I’ll summarize it briefly again. It’s a framework for creating audio software. It handles input and output routing, VST interfacing, the user interface, and all that other boring stuff so that we can focus on what we actually want to do: making the computer go bleep-bloop.

The actual method of audio signal processing may vary between different types of projects. For a VST audio effect like this, there usually is a processBlock function that receives an input buffer periodically. It is then your duty as a plug-in developer to do whatever you want with that input buffer and fill it with values that you deem correct. Doing all this in a reasonable amount of CPU time, of course.

In this Pastel Distortion plug-in, we receive an input buffer filled with values ranging from -1.0 to 1.0, and then we feed those samples to the waveshaping function and replace the buffer contents with the newly calculated values. Sounds simple, and to be honest, that’s exactly what it is because JUCE does most of the heavy work.
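As a minimal sketch of what that looks like in practice (using the default JUCE template processor name and the square root shaper from the earlier sketch, not the actual Pastel Distortion source):

// Simplified sketch of a processBlock that applies a waveshaping function to every sample
void AudioPluginAudioProcessor::processBlock (juce::AudioBuffer<float>& buffer,
                                              juce::MidiBuffer& midiMessages)
{
    juce::ignoreUnused (midiMessages);

    for (int channel = 0; channel < buffer.getNumChannels(); ++channel)
    {
        auto* channelData = buffer.getWritePointer (channel);

        for (int sample = 0; sample < buffer.getNumSamples(); ++sample)
            channelData[sample] = shapeSquareRoot (channelData[sample]);
    }
}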

JUCE has a ProcessorChain template class that can be filled with various effects to process the audio. There’s a WaveShaper processor, to which you simply give the mathematical function you want it to perform, and the rest is done almost automatically! As you can guess, the plug-in uses that. In the plug-in there are also some filters, EQs, and compressors to tame the distorted signal a bit more because the distortion can start to sound really ugly really quickly. That doesn’t mean that you can’t create ugly sounds with Pastel Distortion, quite the contrary.
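A rough sketch of how the WaveShaper can be wired up with JUCE’s DSP module is shown below. The actual plug-in chain has more elements (the filters, EQs, and compressors mentioned above), and the tanh shaping function here is just an example, so treat this as an illustration rather than the real implementation.

// Sketch: a tiny processor wrapping a juce::dsp::ProcessorChain with a single WaveShaper
struct WaveshaperSketch
{
    juce::dsp::ProcessorChain<juce::dsp::WaveShaper<float>> chain;

    void prepare (double sampleRate, int samplesPerBlock, int numChannels)
    {
        // Give the WaveShaper the mathematical function to apply to each sample
        chain.get<0>().functionToUse = [] (float x) { return std::tanh (x); };
        chain.prepare ({ sampleRate, (juce::uint32) samplesPerBlock, (juce::uint32) numChannels });
    }

    void process (juce::AudioBuffer<float>& buffer)
    {
        juce::dsp::AudioBlock<float> block (buffer);
        chain.process (juce::dsp::ProcessContextReplacing<float> (block));
    }
};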

The life of a designer is a life of fight: fight against the ugliness

Another great feature of JUCE is that it has a built-in graphics library. It’s especially good in the sense that an embedded developer like me can create a somewhat professional-looking user interface, even though I usually program small devices where the only human-computer interaction methods are a power switch and a two-colour LED. Although I have to admit, most of the development time went into making the user interface. You wouldn’t believe the amount of hours that went into drawing these little swirls next to the knobs.

Honestly, it was pure luck that I managed to get these things looking even remotely correct. The best part is that in the end they’re barely even visible.

All in all, Pastel Distortion is a completed plug-in that I think is quite polished (at least by the usual standards of my projects). There’s the distortion effect of course, but in addition to that there’s tone control for shaping the distortion and output signal, a dry-wet mixer for blending the distorted and clean signals, and multiple waveshape functions to choose from. Besides the GUI, I also spent quite a lot of time tweaking the distortion parameters, so hopefully that effort can be heard in the final product.

There’s still optimization that could be done, but the performance is in fairly good shape already. At least compared to Disructor, the FL Studio stock distortion plug-in, it seems to have about the same CPU usage: Disructor averages around 8%, while Pastel Distortion averages around 9%. Considering that my previous delay plug-in used about 20%, I consider this a great success. This good number is most likely a result of the optimizations in JUCE and not of my programming genius.

But enough talk, let’s get to the interesting stuff. How to try this thing out?

Getting Pastel Distortion

Obtaining the Pastel Distortion plug-in is quite easy. Just click this link to go to the Gumroad page where you can get it. And if you’re quick, you can get it for free! The plug-in costs $0 until the end of February 2024. After that you can get the demo version for free to try it out, or if you ask me I can generate some sort of discount code for it (I’d like to get feedback on the product in exchange for the discount).

If you don’t want to download Pastel Distortion but want to see it in action, check out the video below. I put all the skills I’ve learned from Windows Movie Maker and years of using Ableton into this one:

That’s all this time. I’ve already started working on the next plug-in, let’s hope that it won’t take another two and a half years. Maybe the next text will be out sooner than that when I get something else ready that’s worth writing about. I’ve been building a Raspberry Pi Pico-based gadget lately, and it got a bit out of hand, but maybe I’ll finish that soon.

Aioli Audiostreamer: Moving the Sound

AI generated picture of an amplifier with raspberries

People need projects to consume their free time. I’ve lately felt that I want to actually try finishing a project (instead of just starting them), that the project should be somehow related to audio, that it would be nice if it had a real-world use, and that it would use the old Raspberry Pis I have lying around. Plenty of requirements, then. I think this is still a better-formulated train wreck than an average customer project.

After considering a few different options, I ended up attempting to create a multi-speaker streaming system named Aioli (so yeah, I started yet another project). This text is closer to a devlog than a tutorial, but there will be open source code repositories in case you want to see how it’s done. Enough with the blabber, let’s move on.

Overview Of The Project

Basically, in this project I want to have one audio source, and the audio from that single source would get wirelessly transmitted to multiple speakers. To be more specific, in my case there’s one Raspberry Pi 4 connected to an external audio source, and then there would be other Raspberry Pi 2s connected to the speakers. The Raspberry Pis in this scenario would handle at least streaming, networking, receiving the audio, and playing the audio. This diagram attempts to explain the situation:

To start out the project I decided to focus on the streaming between Raspberry Pis because I didn’t feel masochistic enough to start working with Bluetooth yet. Everything is all fun and games until Bluetooth is added into the mix, and I want to have a bit of fun and games.

Today’s focus is this part to be exact

Obviously, the first thing to do is to create a custom Yocto distro, because every self-respecting hobby project needs its own Linux distribution. Perhaps further down the line this distro can contain some useful configs and other things that actually justify its existence, but for now, it’s just a renamed example Poky distro.

Creating The Network Of Raspberries

To get the Raspberry Pis talking to each other, the first step is getting the devices connected to the same LAN. I wanted to use WLAN to avoid having cables around the house. Using Ethernet cables would defeat the whole point of the system anyway, as I could then just use audio cables. I also considered an ad hoc network but decided to use WLAN to keep things familiar for now. The Raspberry Pi 4 I own does have an internal Wifi chip, so that was easy to sort out, but the two Raspberry Pi 2s did not. I had one Wifi dongle that worked out of the box, but the other dongle required some extra work. You can read about it in my previous blog post if you’re interested.

After getting the hardware sorted out, it was time to get the devices actually connected to the WLAN. For that purpose, I added wpa_supplicant to the distro. wpa_supplicant is a program that, in layman’s terms, “connects the device to wifi” (or so I’ve understood as a layman). A properly configured supplicant that launches during boot should in theory automatically connect the device to the WLAN. Surprisingly enough, it usually does. The following simple configuration in /etc/wpa_supplicant.conf, added to the Raspberry Pi during the build, does the trick:

network={
    ssid="WLANname"
    psk="SecretPassword"
}

This of course means that you have a statically defined network you want to connect to, and that the password is stored in plaintext on the device. Both are bad things for different reasons, but they’ll do for now because this is the simplest solution. This simplicity will be fixed later on in another text. If you have a WLAN network without a password or want to use a calculated key instead of a plaintext password, you can read more about wpa_supplicant in the Arch wiki. It’s a good read. Pay attention to the quotation marks around the psk value, they caused me a lot of headache.

With quotation marks the value is a plaintext password, without them it’s a calculated key value. Makes “sense”.

After the devices wirelessly connected to the router, I gave them static IP address leases to make the development somewhat easier. I also ran a quick ping test to check that the Raspberry Pis can reach Google and each other before proceeding.

Moving The Audio Bits

Making the audio streaming work was actually fairly simple because there already is an open-source solution, as there usually is. GStreamer is an “open source multimedia framework”, which can mean many things. This is quite fitting because GStreamer does many things, and with the help of its plugins, it can do pretty much anything you can dream of. Assuming your dreams revolve around handling and processing multimedia.

My dream was to find a way to stream audio over an IP network. And dreams, they sometimes do come true. Actually, a bit too much so: it was slightly difficult to find the best options for streaming the audio among all the encoding options, protocols, and whatnot. I’m still not sure I picked the right things. To keep the prototyping fast I worked with the command line tools provided by GStreamer (as opposed to using the API, which may be worth looking into in the future).

GStreamer works with pipelines. Pipelines have sources where the media originates, sinks where the media ends up, and parsers, encoders, and other types of elements in between that manipulate the input and pass it forward. For example, here’s a simple pipeline that reads an audio file, and then parses, converts, resamples, and outputs it to the appropriate default sink:

gst-launch-1.0 filesrc location=/opt/sample-files/sample1.wav ! wavparse ! audioconvert ! audioresample ! autoaudiosink

This command may result in a sound being output from your speakers. Quite often not. Depends on what your default ALSA output device is, if you’re using PulseAudio, and if it’s the third Tuesday of the month. In the case of Raspberry Pi, the default output device is the HDMI audio, and I’m not using PulseAudio, and it’s not the third Tuesday today, meaning that I actually got some sound out from a television connected to the HDMI port. If you want to get the audio output from the Raspberry Pi’s headphone jack, you can be a bit more specific about the sink:

gst-launch-1.0 filesrc location=/opt/sample-files/sample1.wav ! wavparse ! audioconvert ! audioresample ! alsasink device=hw:1
# use "aplay -l" command to list the available ALSA devices

To get the audio sent over the network we can use the RTP protocol, which is meant for delivering audio and video. Basic GStreamer functionality can be easily extended with plugins, and as it turns out, there exists a plugin for RTP. It’s weird how these things work out nicely. Almost like someone has had the same ideas before me. Now we can package the audio into 16-bit RTP payloads, and instead of using an alsasink we can use a udpsink (from another plugin) to output the stream to a target in the network instead of an audio device.

gst-launch-1.0 filesrc location=/opt/sample-files/sample1.wav ! wavparse ! audioconvert ! audioresample ! rtpL16pay ! udpsink host=192.168.1.182 port=5001

Then, the intended receiver of the stream can use udpsrc instead of filesrc to read the stream, decode it, and deliver the contents to its own audio sink. Simple as.

gst-launch-1.0 udpsrc port=5001 ! 'application/x-rtp,media=audio,payload=96,clock-rate=44100,encoding-name=L16,channels=2' ! rtpL16depay ! audioconvert ! autoaudiosink

To get the audio sent to multiple devices, a multiudpsink can be used on the sending side. The receiving end still uses the same command:

gst-launch-1.0 filesrc location=/opt/sample-files/sample1.wav ! wavparse ! audioconvert ! audioresample ! rtpL16pay ! multiudpsink clients=192.168.1.182:5001,192.168.1.183:5001

In theory, we could use multicast streaming instead of multiple streams, but for some reason I couldn’t get it working. Most likely it had something to do with the third Tuesday of the month. I couldn’t even complete a simple multicast test on my network of Raspberry Pis, so I guess something is wrong with my setup. For the sake of completeness, AFAIK these commands (should (in theory)) work, but don’t. I’ll look into this later on because multicasting seems like a more sensible approach to this problem:

# Controller command
gst-launch-1.0 filesrc location=/opt/sample-files/sample1.wav ! wavparse ! audioconvert ! audioresample ! rtpL16pay ! udpsink host=224.1.1.1 auto-multicast=true port=3000

# Speaker command
gst-launch-1.0 udpsrc multicast-group=224.1.1.1 auto-multicast=true port=3000 ! 'application/x-rtp,media=audio,payload=96,clock-rate=44100,encoding-name=L16,channels=2' ! rtpL16depay ! audioconvert ! autoaudiosink

Considering the amount of multicast memes floating around the internet, I’m not the only one having issues with it.

By using these commands we can send the audio over the network from the controller device to the speaker devices. However, this is still a bit cumbersome, because we need to manually run the gst-launch-1.0 commands, figure out the intended receivers and their IP addresses, and so on. Later on I plan to introduce a manager process that can dynamically find the clients in the LAN and control the streaming, but that’s a topic for another text.

There’s a recipe for GStreamer and its plugins in Yocto, so getting these things installed into the new custom distro is just a matter of adding a few packages. It’s almost simpler than using a package manager. At least if you’ve spent the last five years learning the ins and outs of Yocto and don’t need to install them at runtime. Something like this should do the trick:

IMAGE_INSTALL:append = " \
    gstreamer1.0 \
    gstreamer1.0-meta-base \
    gstreamer1.0-meta-audio \
    gstreamer1.0-plugins-good-udp \
    gstreamer1.0-plugins-good-rtp \
"

Plugins are sorted into good, bad, and ugly (I guess it’s no big surprise that the bluez plugin is “bad”). To figure out which group a plugin belongs to, you can check the documentation. The documentation is quite good by the way, I recommend reading it. For example, the udp plugin page contains information about the pipeline elements it provides and also mentions which group it belongs to.

That mostly covers it for this text. We’re now able to send sound over the network from one device to another. Next time we’ll stop this goofing off and get painfully serious by adding Bluetooth to the system, and instead of using sample audio files we’ll actually stream something from a phone.

You can find the top-level repo-tool manifest repository here. Please note that the progress of the project is a bit further along than what’s presented in this blog text, and the progress is also “a bit” all over the place, so the manifest repository and the subrepositories contain plenty of spoilers and confusion.

One more question remains: why the name Aioli? Well, it kinda sounds like audio combined with I/O, and I like garlic-flavoured condiments. That’s as good a reason as any.

Unit testing audio processors with JUCE & Catch2

Testing. The final frontier. Or so it often feels. It’s the place where no man boldly goes; it’s the place they tend to crawl to when forced, after the test coverage checker lets them know that line coverage is below 60%. Hopefully, this is just a tired stereotype, because testing is mighty useful in all kinds of applications for quite obvious reasons: it saves time, helps catch bugs, and a good and passing test set gives better peace of mind than a ten-minute mindfulness session. In this text, I’m writing a bit about unit testing in audio processing applications and presenting my small example plugin that contains some unit tests. It’s basically my upcoming ADC 2022 talk in text format, unless something surprising happens in the upcoming week. Let’s hope not, I react poorly to surprises.


Testing

Unit testing is a type of testing where small sections of the program code are individually and independently scrutinized. Usually this is done by feeding known input to the functions being tested and comparing the outputs, function calls, etc. against an expected set of values. Basic stuff, really.

However, audio applications are slightly different, or at least have some features that don’t fully follow this idea of known inputs and outputs. For example, think about a high-pass filter that adds some “character” to the signal it filters. The character here means something extra being added to the audio signal during processing. Depending on the type of “character” we may know the output only partially. Sure, the audio gets filtered according to the filter parameters, but the processing may also add some random elements, like a tiny smidgen of noise to keep things exciting. In such a situation we can’t directly compare the output of the processing function to some expected value, because the expected value is only partially known.

Of course, the test in such a case could be reduced to smaller components that have fully predictable outputs. And it could even be argued that testing these small components is the whole idea of unit testing. But even if we break the filter down into the smallest possible pieces, there would still be that one noise generator element that can’t be fully predicted (unless we use the same random seed every time). How can we know what to expect when we only have a general idea of what something is supposed to sound like, but it can’t be exactly defined? As a side note, this kind of vague hand-waving describes my music-making fairly well.

Basically me whenever I’m asked to explain anything (please don’t ask difficult questions)

Math to the rescue

I dislike math as much as anyone else, but no one can really disagree with the fact that it’s definitely useful. Without it, we wouldn’t have trebuchets. And from what I’ve heard, mathematics plays some role in most computer systems as well.

The fast Fourier transform is a magica… mathematical algorithm that can perform discrete Fourier transforms. The Fourier transform, in turn, can convert time or space functions into their frequency components. Or, if these words mean nothing to you, it can convert this kind of waveform image (i.e. a time domain representation):

Difficult to say what it sounds like, but at least it’s loud.

Into this kind of frequency spectrum image (frequency domain representation):

Ah, it’s a sine sweep.

How does this happen? I wish I understood. The Wikipedia article on the Fourier transform is filled with sentences like “for each frequency, the magnitude of the complex value represents the amplitude of a constituent complex sinusoid with that frequency, and the argument of the complex value represents that complex sinusoid’s phase offset”, and to be honest, sentences like this scare me. For all I know, it could say that there’s a tiny wizard somewhere in the computer who has nothing better to do than some signal processing. Although, all the signal-processing people I know tend to be wizards, so maybe there’s some truth to that.

However, with the FFT we can get the characteristics of the audio, like which frequency ranges have a lot of energy, or where there is no energy at all. Audio signals are signals, and as such, they have some amount of energy. These kinds of frequency characteristics can be compared in non-exact, fuzzy ways by setting thresholds for the allowed energy in different frequency ranges.

For example, if we have a simple gain function, we can use the FFT to check that the audio signal energy is indeed increasing or decreasing. Or if there’s a filter that needs to be tested, we can check that certain frequency bands have no energy, to ensure that the filter doesn’t let frequencies pass where they shouldn’t. Or if there’s a white noise generator, we can check that there is energy across the whole frequency range. Or with a synthesizer, we can check that there are no unexpected artefacts in the signal. Without the fast Fourier transform, finding out these kinds of things automatically can be tricky.
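As an illustration of the idea, here’s a hedged sketch of computing per-bin magnitudes of a buffer with JUCE’s FFT class. It is not the exact matcher code from the repo, and the FFT order is an arbitrary choice:

#include <algorithm>
#include <vector>

// Sketch: compute frequency-bin magnitudes of one channel with juce::dsp::FFT
constexpr int fftOrder = 10;               // 2^10 = 1024-point FFT
constexpr int fftSize  = 1 << fftOrder;

std::vector<float> computeMagnitudes (const juce::AudioBuffer<float>& buffer)
{
    juce::dsp::FFT fft (fftOrder);

    // performFrequencyOnlyForwardTransform expects a buffer of 2 * fftSize samples
    std::vector<float> fftData (2 * fftSize, 0.0f);
    const int numSamples = juce::jmin (buffer.getNumSamples(), fftSize);
    std::copy_n (buffer.getReadPointer (0), numSamples, fftData.begin());

    fft.performFrequencyOnlyForwardTransform (fftData.data());

    // The first fftSize / 2 values now hold the magnitudes of the frequency bins
    return std::vector<float> (fftData.begin(), fftData.begin() + fftSize / 2);
}

These magnitudes can then be summed into a total energy figure or compared band by band against thresholds, which is the fuzzy comparison described above.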

Pamplejuce

Integrating a unit testing framework with an audio development framework can be challenging. It took me a surprising amount of googling to find anything useful on the topic. There were plenty of tutorials/advertisements about building different unit testing framework demos. I’m using JUCE, and it has some level of unit testing support, but that seemed insufficient. There were also conference talks confirming that yes, you should indeed test your software even if it handles audio signals, and that this is not an acceptable reason to skip testing, tempting as the thought may be.

Finally, after failing to do what I wanted with JUCE’s unit tests, and failing to integrate two different unit testing frameworks into my JUCE plugin, I came across Pamplejuce. It’s a template project using a CMake workflow, and more importantly (at least for me), it integrates the Catch2 unit testing framework with JUCE. It’s good that someone around here is a competent developer.

Catch2 is a unit testing framework. I don’t have much else to say about it. It’s actively developed and it has all the features I could ask for. That isn’t all that much: I have about as many requirements for unit testing frameworks as I do for hotel rooms. Hotel rooms should have a bed, a shower, and a lockable door. Unit testing frameworks should be able to compare two things and let me write custom matchers. And I should be able to integrate them into my projects. The last one turned out to be surprisingly rare.

I couldn’t yet find “Unit testing framework integration for dummies”, but once I find one I’ll be sure to get one.

FilterUnitTest

To test out Pamplejuce I created a new, imaginatively named project, FilterUnitTest, from the template. To have some actual audio processing to test, I made a simple highpass filter plugin using JUCE’s LadderFilter. No GUI, just simple filtering pleasure. Well, it also has a parameter for the wet amount. I’ll give a warning that there’s still plenty of refactoring and warning fixing to be done in FilterUnitTest, but I’m working on it whenever I have the time. I’m quite certain that the tests at least leak memory, but I haven’t yet gotten around to fixing that (I know, I know, it isn’t very C++17 to have memory leaks).

So, after the base of the project was done, it was time to finally implement some tests. First is the dummy test that’s already part of the Pamplejuce template, which simply checks that the name of the plugin can be fetched and that it is actually the expected name. The best test is the one someone writes for you. The test cases presented in this blog text can be found in the Tests/PluginBasics.cpp file in the FilterUnitTest repository.

TEST_CASE("Plugin instance name", "[name]")
{
    testPluginProcessor = new AudioPluginAudioProcessor();
    CHECK_THAT(testPluginProcessor->getName().toStdString(),
               Catch::Matchers::Equals("Filter Unit Test"));
    delete testPluginProcessor;
}

To verify the actual functionality of the program, I identified two things that need to be tested:

  1. Testing that the filter performs filtering on the audio signal
  2. Testing that the wet parameter controls the amount of applied filtering

The first test is quite simple: create an audio signal buffer, or read it from an audio file. Run it through the audio processing function, and check that the output contains less energy than before filtering. This of course requires writing a custom matcher, more on that a bit later. If you’ve set your filtering function in stone, you could consider storing the output from one of the test runs and comparing the test results to that instead. During the development phase, it may be easier to do some non-exact comparisons of the FFT values.

If you choose to go the exact value comparison route, you can check out the writeBufferToFile and readBufferFromFile helper functions in Tests/Helpers.h. They serialize and deserialize an audio buffer to/from a file. These helpers can be used to create the exact expected values, and they can also be used to fetch the expected value and compare the output to it. This dummy test basically writes a random buffer to a file, reads the file and ensures that the two buffers have identical contents.

TEST_CASE("Read and write buffer", "[dummy]")
{
    juce::AudioBuffer<float> *buffer = Helpers::generateAudioSampleBuffer();
    Helpers::writeBufferToFile(buffer, "test_file");
    juce::AudioBuffer<float> *readBuffer = Helpers::readBufferFromFile("test_file");
    CHECK_THAT(*buffer,
               AudioBuffersMatch(*readBuffer));
    juce::File test_file ("test_file");
    test_file.deleteFile();
}

As you can see, this type of test requires a custom matcher, AudioBuffersMatch. As does the FFT comparison, and any other custom comparison. For FilterUnitTest, I wrote four different types of comparators, which can be found in Tests/Matchers.h (a rough sketch of what such a matcher looks like follows the list below):

  • Audiobuffers are equal
  • Audiobuffer has higher energy than another audio buffer
  • Audiobuffer has a maximum energy of N in its frequency bands (N can vary between different bands, and the check for a band can also be skipped)
  • Audiobuffer has minimum energy of N in its frequency bands (Same here)
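As promised, here’s a hedged sketch of an equality-style matcher for audio buffers. It is simplified compared to the actual implementation in the repo, the 0.001 tolerance is an arbitrary choice, and depending on your Catch2 version the base class may live in a slightly different namespace:

// Sketch: a Catch2 matcher that checks two audio buffers for (near-)equality
class AudioBufferMatcher : public Catch::Matchers::MatcherBase<juce::AudioBuffer<float>>
{
public:
    explicit AudioBufferMatcher (const juce::AudioBuffer<float>& expected) : expectedBuffer (expected) {}

    bool match (const juce::AudioBuffer<float>& actual) const override
    {
        if (actual.getNumChannels() != expectedBuffer.getNumChannels()
            || actual.getNumSamples() != expectedBuffer.getNumSamples())
            return false;

        for (int ch = 0; ch < actual.getNumChannels(); ++ch)
            for (int i = 0; i < actual.getNumSamples(); ++i)
                if (std::abs (actual.getSample (ch, i) - expectedBuffer.getSample (ch, i)) > 0.001f)
                    return false;

        return true;
    }

    std::string describe() const override { return "matches the expected audio buffer"; }

private:
    const juce::AudioBuffer<float>& expectedBuffer;
};

// Helper so the matcher reads nicely inside CHECK_THAT(...)
AudioBufferMatcher AudioBuffersMatch (const juce::AudioBuffer<float>& expected)
{
    return AudioBufferMatcher (expected);
}

The energy-based matchers follow the same pattern, except that match() runs the FFT on both buffers and compares the resulting energies instead of individual samples.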

The second approach, using the FFT to ensure that the audio buffer has lower energy after filtering, can use a combination of the second and third matchers. By combining these two, we can ensure that the total energy of the signal is indeed lower and that the amount of energy in the lower frequencies is within a certain limit:

TEST_CASE("Filter", "[functionality]")
{
    int samplesPerBlock = 4096;
    int sampleRate = 44100;

    testPluginProcessor = new AudioPluginAudioProcessor();

    //Helper to read a sine sweep wav
    juce::MemoryMappedAudioFormatReader *reader = Helpers::readSineSweep();
    juce::AudioBuffer<float> *buffer = new juce::AudioBuffer<float>(reader->numChannels, reader->lengthInSamples);
    reader->read(buffer->getArrayOfWritePointers(), 1, 0, reader->lengthInSamples);

    juce::AudioBuffer<float> originalBuffer(*buffer);

    //Dismiss the partial chunk for now
    int chunkAmount = buffer->getNumSamples() / samplesPerBlock;

    juce::MidiBuffer midiBuffer;

    testPluginProcessor->prepareToPlay(sampleRate, samplesPerBlock);

    //Process the sine sweep, one chunk at a time
    for (int i = 0; i < chunkAmount; i++) {
        juce::AudioBuffer<float> processBuffer(buffer->getNumChannels(), samplesPerBlock);
        for (int ch = 0; ch < buffer->getNumChannels(); ++ch) {
            processBuffer.copyFrom(ch, 0, *buffer, ch, i * samplesPerBlock, samplesPerBlock);
        }

        testPluginProcessor->processBlock(processBuffer, midiBuffer);
        for (int ch = 0; ch < buffer->getNumChannels(); ++ch) {
            buffer->copyFrom(ch, i * samplesPerBlock, processBuffer, ch, 0, samplesPerBlock);
        }
    }

    //Check that originalBuffer has higher total energy
    CHECK_THAT(originalBuffer,
               !AudioBufferHigherEnergy(*buffer));

    juce::Array<float> maxEnergies;
    for (int i = 0; i < fft_size / 2; i++) {
        //Set the threshold to some value for the lowest 32 frequency bands
        if (i < 32) {
            maxEnergies.set(i, 100);
        }
        //Skip the rest
        else {
            maxEnergies.set(i, -1);
        }
    }
    //Check that lower end frequencies are within limits
    CHECK_THAT(*buffer,
               AudioBufferCheckMaxEnergy(maxEnergies));

    //I guess programming C++ like this in the year 2022 isn't a good idea to do publicly
    delete buffer;
    delete reader;
    delete testPluginProcessor;
}

The second test, for the wet parameter, is basically a continuation of this. Get your audio buffer and run it through the audio processing function with varying levels of the wet parameter. Ensure that the higher the wet parameter, the stronger the filtering effect: again less low-end energy, and less energy in general. Or, if you want to do a super simple test as I did, just check that with a wet value of 0 the signal doesn’t change, and with the maximum wet value of 1 it does.

TEST_CASE("Wet Parameter", "[parameters]")
{
    testPluginProcessor = new AudioPluginAudioProcessor();
    //Helper to generate a buffer filled with noise
    juce::AudioBuffer<float> *buffer = Helpers::generateAudioSampleBuffer();
    juce::AudioBuffer<float> originalBuffer(*buffer);    

    juce::MidiBuffer midiBuffer;

    testPluginProcessor->prepareToPlay(44100, 4096);
    testPluginProcessor->processBlock(*buffer, midiBuffer);

    //Check that initial value of wet is not zero, i.e. filtering happens
    CHECK_THAT(*buffer,
               !AudioBuffersMatch(originalBuffer));

    delete buffer;

    buffer = Helpers::generateAudioSampleBuffer();

    //Get and set parameter
    auto *parameters = testPluginProcessor->getParameters();
    juce::RangedAudioParameter* pParam = parameters->getParameter ( "WET"  );
    pParam->setValueNotifyingHost( 0.0f );

    for (int ch = 0; ch < buffer->getNumChannels(); ++ch)
        originalBuffer.copyFrom (ch, 0, *buffer, ch, 0, buffer->getNumSamples());
    testPluginProcessor->processBlock(*buffer, midiBuffer);

    //Check that the filter now doesn't affect the audio signal
    CHECK_THAT(*buffer,
               AudioBuffersMatch(originalBuffer));

    delete buffer;

    buffer = Helpers::generateAudioSampleBuffer();
    pParam->setValueNotifyingHost( 1.0f );


    for (int ch = 0; ch < buffer->getNumChannels(); ++ch)
        originalBuffer.copyFrom (ch, 0, *buffer, ch, 0, buffer->getNumSamples());
    testPluginProcessor->processBlock(*buffer, midiBuffer);

    //Finally, check that with max wet the signal is again affected
    CHECK_THAT(*buffer,
               !AudioBuffersMatch(originalBuffer));

    delete buffer;
    delete testPluginProcessor;
}

I wish I could debate which approach is better here, but I think it’s quite apparent whether proper or half-assed testing is better. I’ll leave figuring out the proper way of testing as an exercise for the reader.


Images

If you’re given the following sample values, can you figure out what the audio signal sounds like?

[0.25, -0.59, 0.96, 0.21, -0.22, -0.36, -0.45, -0.14, 0.39, 0.35, 0.87, 0.64, -0.32, 0.12, -0.86, -0.67], repeated ad nauseam

I don’t know either. In general, humans tend to absorb information via visual means. A bunch of decimal numbers isn’t the most intuitive way of understanding something unless you’re one of the wizards I talked about earlier. But what if I showed you this:

Seems like it’s a signal of sorts.

As you can see (pun intended), visual information is a lot easier to digest. It’s not the most impressive of graphs, nor the easiest to read, but there’s a good explanation for that: I wrote the code for it. At least the code allows easy drawing of images as part of the unit tests to see what’s going on with the inputs and outputs, as opposed to printing out audio buffer contents and FFT results and hoping that staring at the numbers absorbs them into the brain. You can find the image drawing function in Tests/ImageProcessing.h; here it is in action:

juce::AudioBuffer<float> *buffer = Helpers::generateBigAudioSampleBuffer();
ImageProcessing::drawAudioBufferImage(buffer, "NoiseBuffer");

Just give it a buffer and a filename without the .png extension and it’ll handle the rest. So, for example, to make sure you’ve hooked things up correctly in your testing set-up, you can call the drawing function before and after processing the audio signal to see if the changes are at least somewhat sensible.
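For instance, something along these lines (drawAudioBufferImage is the helper from the repo; the file names here are just placeholders):

// Draw the same buffer before and after processing to eyeball the difference
ImageProcessing::drawAudioBufferImage(buffer, "BeforeProcessing");
testPluginProcessor->processBlock(*buffer, midiBuffer);
ImageProcessing::drawAudioBufferImage(buffer, "AfterProcessing");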

As you can guess, this was one of the earlier attempts of getting things working.

Benchmarking

Imagine you’re in an all-you-can-eat buffet. You can eat whatever as much as you want and the staff won’t kick you out for a few hours. Usually eating a lot feels like a good idea. However, after you’ve stopped eating, the reality of the situation settles in and you realize that it wasn’t a good idea. You feel sick.

The same applies to coding. It’s fun to code without limits. More features, more, MORE, you’ll think to yourself. However, after coding for a while you’ll realize that this wasn’t a good idea either. The program starts to become slow and sluggish. You’ve introduced latency to your code, and that is the second worst thing that can be done. The only thing worse is a 127 dBFS pop that was caused by careless buffer handling when you were starting out with audio signal processing.

To keep things in check, Catch2 has some simple benchmarking macros. There are a few example usages of those in the FilterUnitTest repo. It’s quite basic C++, meaning that it took me about three compilation attempts and one illegal memory access to get the syntax right. After some trials and a lot of errors, I ended up with something like this:

TEST_CASE("Processblock Benchmark", "[benchmarking]")
{
    testPluginProcessor = new AudioPluginAudioProcessor();
    juce::AudioBuffer<float> *buffer = Helpers::generateAudioSampleBuffer();

    juce::MidiBuffer midiBuffer;

    testPluginProcessor->prepareToPlay(44100, 4096);

    //Example of an advanced benchmark with varying random input
    BENCHMARK_ADVANCED("Plugin Processor Processblock ADVANCED")(Catch::Benchmark::Chronometer meter) {
        juce::Array<juce::AudioBuffer<float>> v;
        for (int j = 0; j < meter.runs(); j++) {
            v.add(*Helpers::generateAudioSampleBuffer());
        }
        meter.measure([&v, midiBuffer] (int i) mutable { return testPluginProcessor->processBlock(v.getReference(i), midiBuffer); });
    };

    delete buffer;
    delete testPluginProcessor;
}
Attempting to read C++ documentation on lambdas and pretending to understand what’s going on.

Closing words

I hope you found this blog post useful. As mentioned, I struggled to get started with JUCE and unit testing, so hopefully this text helps you think about how to test your application, assists in integrating a unit testing framework, and contains some useful and practical resources to get you started with testing. Also, I want to point out that this type of FFT matching isn’t the only solution for unit testing audio applications. You can, for example, remove the random elements from your tests, use pre-determined random seeds, or mock some parts of your code if needed. I’ve just found the FFT approach really intuitive and flexible after I wrapped my head around it. Thanks for reading!

Comments are welcome, but due to a quite hefty amount of bot spam, the comments go through moderation, so it may take some time for your prose to appear in the comments section. As long as you’re not trying to sell me Viagra or women’s haircuts, it’ll eventually appear there. If you happen to be going to ADC 2022, feel free to let me know!
