
RockSynth Part 3: Writing a MIDI Controlled Synthesizer on the ROCK 4 C+

In the third and final part of this article series, we'll take the basic (but playable) synth we had before and make it musical with the addition of a filter and a chorus effect. There are a few more C++ classes to introduce and a bit more audio theory - the GitHub repository is available here, and if you want to follow along but haven't seen the previous entries, part one covers the initial audio I/O setup on the Okdo ROCK 4 C+ (230-6199), and part two covers MIDI input and putting the whole playable synth together.

The VCF

The VCF, or voltage-controlled filter, is a core part of a subtractive synthesiser. We can filter out high frequencies and control that cutoff point with an envelope to shape the sound. There are different types of filters and different ways to code them; I've opted for a biquad filter, with the implementation based on the equations described in the Audio EQ Cookbook, originally written by Robert Bristow-Johnson.

The flow graph of the most straightforward implementation of a biquad filter, known as "direct form 1", is as follows:

Flow graph of a digital biquad filter in direct form 1

z⁻¹ is a unit delay, or simply a delay of a single sample. The difference equation to describe it is:

y[n] = (1/a0) · (b0·x[n] + b1·x[n−1] + b2·x[n−2] − a1·y[n−1] − a2·y[n−2])

Where y is our buffer of output samples, x is our buffer of input samples, n is the sample index, and a0 to a2 and b0 to b2 are our coefficients. The cookbook describes how to calculate these coefficients, along with some intermediate variables, given our sample rate Fs, our cutoff frequency f0, and the filter quality Q (which determines how steep the filter is and how high its resonant peak rises):

ω₀ = 2π · (f0 / Fs)
α = sin(ω₀) / (2Q)

a0 = 1 + α
a1 = −2 cos(ω₀)
a2 = 1 − α

b0 = (1 − cos(ω₀)) / 2
b1 = 1 − cos(ω₀)
b2 = (1 − cos(ω₀)) / 2

With this, we can start converting the maths to a C++ class. Fret not about the scary symbols; this will be pretty much a one-to-one translation into code.

I made a new file, BiquadFilter.hpp, in the Synth subdirectory of the source folder. The class declaration looks like this:

#ifndef BIQUAD_FILTER_HPP
#define BIQUAD_FILTER_HPP

#include <cstdint>

class BiquadFilter
{
public:
    void prepare(uint32_t sampleRate);
    float getNextSample(float inputSample);

    void setCutoffFrequency(float cutoffFrequency);
    void setQ(float q);

private:
    void calculateCoefficients();

    float m_a0, m_a1, m_a2;
    float m_b0, m_b1, m_b2;

    float m_xz1{0.0f}, m_xz2{0.0f};
    float m_yz1{0.0f}, m_yz2{0.0f};
    float m_sampleRate;

    float m_cutoffFrequency{20000.0f};
    float m_q{1.0f};

};

#endif

We keep track of the coefficients that will be calculated whenever the cutoff frequency or quality are changed, as well as the unit delay samples needed as part of the equation. I made BiquadFilter.cpp in the same directory, and defined these functions as follows:

#include "BiquadFilter.hpp"

#include <cmath>
#include <numbers>

void BiquadFilter::prepare(uint32_t sampleRate)
{
    m_sampleRate = static_cast<float>(sampleRate);
    calculateCoefficients();
}

float BiquadFilter::getNextSample(float inputSample)
{
    auto result = m_b0*inputSample + m_b1*m_xz1 + m_b2*m_xz2 - m_a1*m_yz1 - m_a2*m_yz2;
    result /= m_a0;

    m_xz2 = m_xz1;
    m_xz1 = inputSample;
    m_yz2 = m_yz1;
    m_yz1 = result;

    return result;
}

void BiquadFilter::setCutoffFrequency(float cutoffFrequency)
{
    m_cutoffFrequency = cutoffFrequency;
    calculateCoefficients();
}

void BiquadFilter::setQ(float q)
{
    m_q = q;
    calculateCoefficients();
}

void BiquadFilter::calculateCoefficients()
{
    auto omega = 2.0f * std::numbers::pi_v<float> * (m_cutoffFrequency / m_sampleRate);
    auto alpha = std::sin(omega) / (2.0f * m_q);
    auto cosOmega = std::cos(omega);

    m_a0 = 1.0f + alpha;
    m_a1 = -2.0f * cosOmega;
    m_a2 = 1.0f - alpha;

    m_b0 = (1.0f - cosOmega) / 2.0f;
    m_b1 = 1.0f - cosOmega;
    m_b2 = (1.0f - cosOmega) / 2.0f;
}

This is a single filter that operates on a single stream of samples. The actual VCF that we plug into each of our synth voices will need an ADSR envelope to control the cutoff, as well as one filter per channel it's expecting - two for stereo, for example. Also in the Synth subdirectory of the source folder, I made Vcf.hpp and Vcf.cpp, containing the following:

#ifndef VCF_HPP
#define VCF_HPP

#include "Adsr.hpp"
#include "../Audio/AudioProcessor.hpp"
#include "BiquadFilter.hpp"

#include <vector>

class Vcf : public AudioProcessor
{
public:
    Vcf(size_t numChannels);
    ~Vcf() override = default;

    void prepare(uint32_t sampleRate) override;
    void process(AudioBuffer& bufferToFill) override;

    void setCutoffFrequency(float cutoffFrequency) noexcept;
    void setQ(float q) noexcept;

    void noteOn() noexcept;
    void noteOff() noexcept;

    template<Adsr::Phase ParamType> requires (ParamType != Adsr::Phase::Idle)
    void setAdsrParam(float value)
    {
        m_adsr.setParam<ParamType>(value);
    }

private:
    float m_cutoffFrequency{20000.0f};

    Adsr m_adsr;
    std::vector<BiquadFilter> m_filters;
};

#endif

And Vcf.cpp:

#include "Vcf.hpp"

Vcf::Vcf(size_t numChannels) : m_filters(numChannels)
{

}

void Vcf::prepare(uint32_t sampleRate)
{
    for (auto& filter : m_filters) {
        filter.prepare(sampleRate);
    }

    m_adsr.prepare(sampleRate);
}

void Vcf::process(AudioBuffer& bufferToFill)
{
    for (size_t sample = 0; sample < bufferToFill.bufferSize(); sample++) {
        auto adsrLevel = m_adsr.getNextValue();
        auto currentCutoff = std::max(m_cutoffFrequency * adsrLevel, 20.0f);

        for (size_t channel = 0; channel < bufferToFill.numChannels(); channel++) {
            m_filters[channel].setCutoffFrequency(currentCutoff);
            auto input = bufferToFill.getSample(channel, sample);
            auto result = m_filters[channel].getNextSample(input);
            bufferToFill.setSample(channel, sample, result);
        }
    }
}

void Vcf::setCutoffFrequency(float cutoffFrequency) noexcept
{
    m_cutoffFrequency = cutoffFrequency;
}

void Vcf::setQ(float q) noexcept
{
    for (auto& filter : m_filters) {
        filter.setQ(q);
    }
}

void Vcf::noteOn() noexcept
{
    m_adsr.noteOn();
}

void Vcf::noteOff() noexcept
{
    m_adsr.noteOff();
}

The Vcf class implements AudioProcessor so it can deal with audio buffers. It takes the expected number of channels in its constructor so it can fill a vector with the required number of filters, and it exposes public setter functions for its envelope and filters, as well as note on and off calls. The process() function updates the envelope and sets the current cutoff frequency with the envelope applied. Now we can make an instance of this class a member of our synth voices and update the SynthVoice class to match. Over in SynthVoice.hpp, we need to add the member variable and some public functions, as well as a constructor to pass on the number of channels to the filter:

...

class SynthVoice : public AudioProcessor
{
public:
    SynthVoice(size_t numChannels);
    ~SynthVoice() override = default;

    ...

    template<Adsr::Phase ParamType> requires (ParamType != Adsr::Phase::Idle)
    void setVcfAdsrParam(float value) noexcept
    {
        m_vcf.setAdsrParam<ParamType>(value);
    }

    void setCutoffFrequency(float cutoffFrequency);
    void setQ(float q);

    ...

private:
    uint8_t m_currentNote{0};
    float m_currentVelocity{0.0f};

    Adsr m_adsr;
    Vcf m_vcf;
    std::array<Oscillator, 2> m_oscillators;
    std::array<float, 2> m_oscillatorVolumes{0.5f, 0.5f};
};

#endif

And add definitions for the new functions as well as update the prepare(), process(), noteOn(), and noteOff() functions in SynthVoice.cpp:

...

SynthVoice::SynthVoice(size_t numChannels) : m_vcf(numChannels)
{

}

void SynthVoice::prepare(uint32_t sampleRate)
{
    m_adsr.prepare(sampleRate);
    m_vcf.prepare(sampleRate);
    for (auto& osc : m_oscillators) {
        osc.prepare(sampleRate);
    }
}

void SynthVoice::process(AudioBuffer& bufferToFill)
{
    auto numChannels = bufferToFill.numChannels();
    auto bufferSize = bufferToFill.bufferSize();
    AudioBuffer mix{numChannels, bufferSize};

    for (size_t sample = 0; sample < bufferSize; sample++) {
        auto adsrLevel = m_adsr.getNextValue();
        auto oscOutput = 0.0f;
        for (size_t i = 0; i < 2; i++) {
            oscOutput += m_oscillators[i].getNextSample() * adsrLevel * m_currentVelocity * m_oscillatorVolumes[i];
        }

        for (size_t channel = 0; channel < numChannels; channel++) {
            mix.addSample(channel, sample, oscOutput);
        }
    }

    m_vcf.process(mix);

    for (size_t sample = 0; sample < bufferSize; sample++) {
        for (size_t channel = 0; channel < numChannels; channel++) {
            bufferToFill.addSample(channel, sample, mix.getSample(channel, sample));
        }
    }
}

void SynthVoice::noteOn(uint8_t midiNote, uint8_t velocity) noexcept
{
    m_currentNote = midiNote;
    m_currentVelocity = static_cast<float>(velocity) / 127.0f;

    for (auto& osc : m_oscillators) {
        osc.setFrequency(mtof(midiNote));
    }
    
    m_adsr.noteOn();
    m_vcf.noteOn();
}

void SynthVoice::noteOff() noexcept
{
    m_adsr.noteOff();
    m_vcf.noteOff();
}

void SynthVoice::setCutoffFrequency(float cutoffFrequency)
{
    m_vcf.setCutoffFrequency(cutoffFrequency);
}

void SynthVoice::setQ(float q)
{
    m_vcf.setQ(q);
}

...

We now also need to update the main Synth class. It also needs a constructor to pass on the number of channels and expose those public setter functions. In Synth.hpp, we'll add those functions:

...

class Synth : public AudioProcessor
{
public:
    Synth(size_t numChannels);
    ~Synth() override = default;

    ...

    template<Adsr::Phase ParamType> requires (ParamType != Adsr::Phase::Idle)
    void setVcfAdsrParam(float value) noexcept
    {
        for (auto& voice : m_voices) {
            voice.setVcfAdsrParam<ParamType>(value);
        }
    }

    void setCutoffFrequency(float cutoffFrequency);
    void setQ(float q);

    ...
};

#endif

And give those definitions in Synth.cpp:

...

Synth::Synth(size_t numChannels)
    : m_voices{
        SynthVoice(numChannels),
        SynthVoice(numChannels),
        SynthVoice(numChannels),
        SynthVoice(numChannels),
        SynthVoice(numChannels),
        SynthVoice(numChannels),
        SynthVoice(numChannels),
        SynthVoice(numChannels),
    }
{

}

...

void Synth::setCutoffFrequency(float cutoffFrequency)
{
    for (auto& voice : m_voices) {
        voice.setCutoffFrequency(cutoffFrequency);
    }
}

void Synth::setQ(float q)
{
    for (auto& voice : m_voices) {
        voice.setQ(q);
    }
}

Now that SynthVoice has a parameterized constructor, we need to explicitly fill the array of voices.

Ok, final change before this is playable: now that Synth has a parameterized constructor, we need to call that explicitly in main.cpp. To be totally exception-safe, I replaced the static instances of Synth and RtMidiIn with std::optionals, and called the constructors explicitly as such:

...
#include <optional>
...

static std::optional<Synth> s_synth;
static std::optional<RtMidiIn> s_midiIn;

...

int main([[maybe_unused]] int argc, [[maybe_unused]] const char* argv[])
{
    try {
        s_synth = Synth(2);
    } catch (const std::exception& e) {
        fmt::print(stderr, "Error constructing Synth: {}\n", e.what());
        std::exit(1);
    }

    try {
        s_midiIn.emplace();
    } catch (const std::exception& e) {
        fmt::print(stderr, "Error constructing RtMidiIn: {}\n", e.what());
        std::exit(1);
    }

...

If you also choose to turn these instances into optionals, every call to a member function or variable needs to have . replaced with ->, so for example:

...
    auto numMidiPorts = s_midiIn->getPortCount();
...
    s_synth->setVcfAdsrParam<Adsr::Phase::Attack>(0.5f);
    s_synth->setVcfAdsrParam<Adsr::Phase::Decay>(0.2f);
    s_synth->setVcfAdsrParam<Adsr::Phase::Sustain>(0.3f);
    s_synth->setVcfAdsrParam<Adsr::Phase::Release>(0.2f);
...

I'm also setting up the filter envelope with some initial parameters here along with all the other calls in main(). I also assigned more MIDI CC knobs to the filter parameters up in the audioCallback() function in the same fashion as before, just using the next lot available to me:

...
                        case 24:
                            // scale cf to be 20 - 20000
                            s_synth->setCutoffFrequency((static_cast<float>(value) / 127.0f * 19980.0f) + 20.0f);
                            break;
                        case 25:
                            // scale q to be 1 - 10
                            s_synth->setQ((static_cast<float>(value) / 127.0f * 9.0f) + 1.0f);
                            break;
                        case 26:
                            // max of 5 seconds on timed ADSR params
                            s_synth->setVcfAdsrParam<Adsr::Phase::Attack>(static_cast<float>(value) / 127.0f * 5.0f);
                            break;
                        case 27:
                            s_synth->setVcfAdsrParam<Adsr::Phase::Decay>(static_cast<float>(value) / 127.0f * 5.0f);
                            break;
                        case 28:
                            // scale sustain to be 0 - 1
                            s_synth->setVcfAdsrParam<Adsr::Phase::Sustain>(static_cast<float>(value) / 127.0f);
                            break;
                        case 29:
                            s_synth->setVcfAdsrParam<Adsr::Phase::Release>(static_cast<float>(value) / 127.0f * 5.0f);
                            break;
...

Now, we can add the new files we made to the add_executable() call in CMakeLists.txt:

...
add_executable(
    rocksynth
    src/Audio/AudioBuffer.cpp
    src/main.cpp
    src/Synth/Adsr.cpp
    src/Synth/BiquadFilter.cpp
    src/Synth/Oscillator.cpp
    src/Synth/Synth.cpp
    src/Synth/SynthVoice.cpp
    src/Synth/Vcf.cpp
)
...

Recompile with:

$ cmake -B build -S . -DCMAKE_BUILD_TYPE=Release
$ cmake --build build
$ ./build/rocksynth

And we're ready to give it a try!

Pzazz™

I promised some pzazz, and I'm glad to tell you it is time for said pzazz. I thought we'd add a chorus effect onto the final output, since it's a relatively simple effect to implement and it introduces another core concept of DSP - delay lines.

Delay Lines

A delay line in audio programming is a digital recreation of a tape delay - a loop of tape with a record head and a play head. Audio is recorded onto the moving tape and played back when that section of tape passes the play head further along its cycle, producing an echo, or "delayed" signal, in the playback. In digital audio, since we can use very short delay times, delay lines are used to make all sorts of effects, including choruses, phasers, and flangers. It looks like this:

Diagram of a digital delay line

We increment the record and read indexes every time we record or read, and the time between the two can be varied. To express this in our code, I made DelayLine.hpp and DelayLine.cpp in the Audio subdirectory of the source folder, and they look like this:

#ifndef DELAY_LINE_HPP
#define DELAY_LINE_HPP

#include <cstdint>
#include <cstdlib>

class DelayLine
{
public:
    DelayLine() = default;
    ~DelayLine();

    DelayLine(const DelayLine& other);
    DelayLine(DelayLine&& other) noexcept;

    DelayLine& operator=(const DelayLine& other);
    DelayLine& operator=(DelayLine&& other) noexcept;

    void setMaxDelaySamples(size_t size) noexcept;
    void setDelaySamples(int64_t delay) noexcept;

    void setMaxDelaySeconds(float seconds) noexcept;
    void setDelaySeconds(float seconds) noexcept;

    void prepare(uint32_t sampleRate) noexcept;
    float getNextSample(float inputSample) noexcept;

private:
    size_t m_maxSize{0};
    float* m_data{nullptr};

    float m_sampleRate{};
    int64_t m_delay{0};
    int64_t m_recordIndex{0};
};

#endif

And DelayLine.cpp:

#include "DelayLine.hpp"

#include <fmt/core.h>

#include <cmath>
#include <cstring>

DelayLine::~DelayLine()
{
    delete[] m_data;
}

DelayLine::DelayLine(const DelayLine& other)
    : m_maxSize{other.m_maxSize}
    , m_data{new float[m_maxSize]}
    , m_sampleRate{other.m_sampleRate}
    , m_delay{other.m_delay}
    , m_recordIndex{other.m_recordIndex}
{
    std::memcpy(m_data, other.m_data, sizeof(float) * m_maxSize);
}

DelayLine::DelayLine(DelayLine&& other) noexcept
    : m_maxSize{other.m_maxSize}
    , m_data{other.m_data}
    , m_sampleRate{other.m_sampleRate}
    , m_delay{other.m_delay}
    , m_recordIndex{other.m_recordIndex}
{
    other.m_maxSize = 0;
    other.m_data = nullptr;
}

DelayLine& DelayLine::operator=(const DelayLine& other)
{
    if (this != &other) {
        delete[] m_data;
        m_maxSize = other.m_maxSize;
        m_data = new float[m_maxSize];
        m_sampleRate = other.m_sampleRate;
        m_delay = other.m_delay;
        m_recordIndex = other.m_recordIndex;

        std::memcpy(m_data, other.m_data, sizeof(float) * m_maxSize);
    }

    return *this;
}

DelayLine& DelayLine::operator=(DelayLine&& other) noexcept
{
    if (this != &other) {
        delete[] m_data;
        m_maxSize = other.m_maxSize;
        m_data = other.m_data;
        m_sampleRate = other.m_sampleRate;
        m_delay = other.m_delay;
        m_recordIndex = other.m_recordIndex;

        other.m_maxSize = 0;
        other.m_data = nullptr;
    }

    return *this;
}

void DelayLine::setMaxDelaySamples(size_t size) noexcept
{
    delete[] m_data;
    m_maxSize = size;
    m_data = new float[m_maxSize];
    std::memset(m_data, 0, sizeof(float) * m_maxSize);
}

void DelayLine::setDelaySamples(int64_t delay) noexcept
{
    // clamp to one less than the buffer length, in integer arithmetic, so the
    // read index can never land back on the sample we just wrote
    auto maxDelay = static_cast<int64_t>(m_maxSize) - 1;
    m_delay = delay < maxDelay ? delay : maxDelay;
}

void DelayLine::setMaxDelaySeconds(float seconds) noexcept
{
    setMaxDelaySamples(static_cast<size_t>(seconds * m_sampleRate));
}

void DelayLine::setDelaySeconds(float seconds) noexcept
{
    setDelaySamples(static_cast<int64_t>(seconds * m_sampleRate));
}

void DelayLine::prepare(uint32_t sampleRate) noexcept
{
    m_sampleRate = sampleRate;
}

float DelayLine::getNextSample(float inputSample) noexcept
{
    if (m_data == nullptr) {
        return 0.0f;
    }

    m_data[m_recordIndex] = inputSample;
    auto readIndex = m_recordIndex - m_delay;
    // wrap around the circular buffer
    if (readIndex < 0) {
        readIndex += static_cast<int64_t>(m_maxSize);
    }

    if (static_cast<size_t>(++m_recordIndex) == m_maxSize) {
        m_recordIndex = 0;
    }

    return m_data[readIndex];
}

As with some of our other classes that needed destructors, this follows the rule of five. It has a maximum allowed delay time, which is the size of the buffer it holds internally. The getNextSample() call pushes a sample into the delay line and increments the record index, then returns the sample n samples before that, where n is our delay time.

Chorus

A chorus effect is an imitation of what happens when multiple musicians, for example, string players in an orchestra, play the same music but ever so slightly out of sync. The slight deviations in pitch and time cause a full, "shimmering" effect. In its simplest form, we can create this digitally using this layout:

Flow graph of a digital chorus effect

We fill the delay line with incoming samples and vary the delay by which we then read from that delay line with an LFO (low-frequency oscillator). We have a mix between the incoming (dry) and delayed (wet) signals and also feed back some of the delayed signal into the input. I made Chorus.hpp and Chorus.cpp in the Audio subdirectory, and filled them in as follows:

#ifndef CHORUS_HPP
#define CHORUS_HPP

#include "AudioProcessor.hpp"
#include "DelayLine.hpp"
#include "../Synth/Oscillator.hpp"

#include <vector>

class Chorus : public AudioProcessor
{
public:
    Chorus(size_t numChannels);
    ~Chorus() override = default;

    void prepare(uint32_t sampleRate) override;
    void process(AudioBuffer& bufferToFill) override;

    enum Param
    {
        Depth,
        Rate,
        Feedback,
        DryWet,
    };

    template<Param ParamType>
    void setParam(float value) noexcept
    {
        if constexpr (ParamType == Param::Depth) {
            m_depth = value;
        } else if constexpr (ParamType == Param::Rate) {
            m_lfo.setFrequency(value);
        } else if constexpr (ParamType == Param::Feedback) {
            m_feedback = value;
        } else if constexpr (ParamType == Param::DryWet) {
            m_dryWet = value;
        }
    }

private:
    std::vector<DelayLine> m_delayLines;
    std::vector<float> m_heldSamples;
    Oscillator m_lfo;

    float m_depth{0.5f};
    float m_feedback{0.5f};
    float m_dryWet{0.5f};
};

#endif

And Chorus.cpp:

#include "Chorus.hpp"

static float map(float value, float inMin, float inMax, float outMin, float outMax)
{
    auto inFraction = (value - inMin) / (inMax - inMin);
    return ((outMax - outMin) * inFraction) + outMin;
}

Chorus::Chorus(size_t numChannels)
    : m_delayLines(numChannels)
    , m_heldSamples(numChannels)
{

}

void Chorus::prepare(uint32_t sampleRate)
{
    for (auto& delayLine : m_delayLines) {
        delayLine.prepare(sampleRate);
        delayLine.setMaxDelaySamples(sampleRate);
    }

    m_lfo.prepare(sampleRate);
    m_lfo.setShape(Oscillator::Shape::Sine);
    m_lfo.setFrequency(0.5f);
}

void Chorus::process(AudioBuffer& bufferToFill)
{
    for (size_t sample = 0; sample < bufferToFill.bufferSize(); sample++) {
        auto lfoValue = m_lfo.getNextSample() * m_depth;

        for (size_t channel = 0; channel < bufferToFill.numChannels(); channel++) {
            auto inputSample = bufferToFill.getSample(channel, sample);
            m_delayLines[channel].setDelaySeconds(map(lfoValue, -1.0f, 1.0f, 0.005f, 0.04f));
            auto delayedSample = m_delayLines[channel].getNextSample(inputSample + m_heldSamples[channel] * m_feedback);
            bufferToFill.setSample(channel, sample, inputSample * (1.0f - m_dryWet) + delayedSample * m_dryWet);
            m_heldSamples[channel] = delayedSample;
        }
    }
}

It implements AudioProcessor to deal with audio buffers. We also need to know the number of channels we're dealing with, so we hold a delay line and a held sample (for feedback) for each channel in vectors. The process() function does exactly what the above diagram describes - it processes the LFO (for which we can reuse one of our oscillators), maps its value to a common delay range for choruses (5 ms to 40 ms), updates the delay time, records to and reads from the delay line, and mixes the signals. We also have a templated setter for the parameters.

We can add an instance of this as a member of our Synth (not on each synth voice, since we can just apply this effect to the final output) and hook it up. I updated the Synth class with the member instance and the public setter exposed in Synth.hpp:

...
#include "../Audio/Chorus.hpp"

...

class Synth : public AudioProcessor
{
    ...

    template<Chorus::Param ParamType>
    void setChorusParam(float value)
    {
        m_chorus.setParam<ParamType>(value);
    }

private:
    Chorus m_chorus;
    ...
};

#endif

And prepare and process it appropriately in Synth.cpp:

#include "Synth.hpp"

Synth::Synth(size_t numChannels)
    : m_voices{
        SynthVoice(numChannels),
        SynthVoice(numChannels),
        SynthVoice(numChannels),
        SynthVoice(numChannels),
        SynthVoice(numChannels),
        SynthVoice(numChannels),
        SynthVoice(numChannels),
        SynthVoice(numChannels),
    }
    , m_chorus(numChannels)
{

}

void Synth::prepare(uint32_t sampleRate)
{
    for (auto& voice : m_voices) {
        voice.prepare(sampleRate);
    }

    m_chorus.prepare(sampleRate);
}

void Synth::process(AudioBuffer& bufferToFill)
{
    for (auto& voice : m_voices) {
        voice.process(bufferToFill);
    }

    m_chorus.process(bufferToFill);
}

...

Assign some MIDI CC knobs to the chorus parameters in audioCallback() in main.cpp:

...
                        case 102:
                            s_synth->setChorusParam<Chorus::Param::Depth>(static_cast<float>(value) / 127.0f);
                            break;
                        case 103:
                            // scale this to be 0.1 - 20
                            s_synth->setChorusParam<Chorus::Param::Rate>((static_cast<float>(value) / 127.0f * 19.9f) + 0.1f);
                            break;
                        case 104:
                            s_synth->setChorusParam<Chorus::Param::Feedback>(static_cast<float>(value) / 127.0f);
                            break;
                        case 105:
                            s_synth->setChorusParam<Chorus::Param::DryWet>(static_cast<float>(value) / 127.0f);
                            break;
...

Worth noting here: on my MIDI keyboard, when going up to the third page of controls, the controller numbers jumped from 29 to 102 - the previous article goes through finding out which controller number is assigned to each knob.

Add the new files to the add_executable() call in CMakeLists.txt:

add_executable(
    rocksynth
    src/Audio/AudioBuffer.cpp
    src/Audio/Chorus.cpp
    src/Audio/DelayLine.cpp
    src/main.cpp
    src/Synth/Adsr.cpp
    src/Synth/BiquadFilter.cpp
    src/Synth/Oscillator.cpp
    src/Synth/Synth.cpp
    src/Synth/SynthVoice.cpp
    src/Synth/Vcf.cpp
)

Recompile, and give it a try!

Conclusion and Next Steps

That draws this article series to a close - it's been really fun for me making this thing from scratch and testing out the capabilities of my little ROCK board, and I hope you've had fun too if you've followed along. Audio programming can be a very fun and engaging type of programming since the product of your work is instantly tangible (well, audible). As well as leaving you with a cool thing to play with, I hope this series has served as a good piece-by-piece intro to audio programming - we started with the general structure of an audio program and getting an audio thread running from the OS, then covered programming oscillators from their general definitions, dealing with buffers of audio, handling MIDI events, programming a filter from its equations, delay lines, and a simple chorus implementation.

Other common synthesizer features you could add include LFOs on the pitch and cutoff, detuning on the oscillators, ring modulation, and cross modulation. There are also many more MIDI signals you could take advantage of: pitch and mod wheels, aftertouch, and preset banks.

One major concept that did not feature in these articles is thread safety. We sidestepped it by handling our MIDI events on the audio thread, in the same audio callback - in a larger program, you might handle parameter controls on a separate UI thread. In that case, you need to ensure that the audio thread never accesses data that is in the process of being mutated by the UI thread. This can be achieved with atomics, mutexes, or spinlocks.

Again, thanks for reading, and leave a comment if you have any questions or difficulties!

I'm a software engineer and third-level teacher specialising in audio and front end. In my spare time, I work on projects related to microcontrollers, electronics, and programming language design.