ASPiK SDK
Frame, Buffer and Sub-Block Processing

We have finally arrived at the function that will do all of the cool audio processing stuff our plugin was designed for. At this point, you need to make a fairly critical decision about how that processing will occur: by frame, buffer or sub-block.

The ASPiKreator software will set up the processing for you based on your selection from its interface. You may also edit the CMakeLists.txt file directly, modifying the statements in the Plugin Options section near the top of the file. In this example, I am setting up processing in sub-blocks of 64 samples per channel. Note that if you set PROCESS_FRAMES TRUE, that setting overrides all of the other statements.

# --- Plugin Options ---
set(PROCESS_FRAMES FALSE) # <-- set TRUE or FALSE
set(PROCESS_BUFFERS FALSE) # <-- set TRUE or FALSE
set(PROCESS_BLOCKS TRUE) # <-- set TRUE or FALSE
set(BLOCK_SIZE 64) # <-- numerical, in samples (per channel) if using blocks
###

DAW Buffers

All APIs deliver audio data to the buffer processing function in separated buffers, one for each channel. This is described in great detail in my FX plugin book. If the audio is stereo, then the plugin receives a pair of pointers, one for each buffer. The plugin uses these pointers to set up and execute the audio processing for FX, or audio rendering for software synthesizers.

Audio Frames

If we assemble one audio sample from each channel-buffer for each sample period, we create a frame. So a set of channel buffers containing M samples each would be broken into M frames. A stereo input frame would consist of an array that contains two audio samples, one for the left channel and one for the right. We might indicate that as {left, right}. A frame of surround sound 5.1 audio data is an array of six audio samples, one from each channel, organized in the following manner: {left, right, center, LFE, left surround, right surround} where LFE stands for Low Frequency Effects or the sub-woofer channel.
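As a minimal sketch of that idea (the names buffersToFrames, bufferL, and bufferR are illustrative only, not part of the ASPiK API), the following function assembles stereo {left, right} frames from two separated channel buffers:

//
// --- illustrative only: build {left, right} frames from channel buffers
//
#include <cstdint>

void buffersToFrames(const float* bufferL, const float* bufferR,
                     float (*frames)[2], uint32_t numSamples)
{
    for (uint32_t n = 0; n < numSamples; n++)
    {
        frames[n][0] = bufferL[n]; // left sample of frame n
        frames[n][1] = bufferR[n]; // right sample of frame n
    }
}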

Frame processing is inherently easier to understand, so people who are new to plugin programming often prefer it, at least while learning. In addition, some algorithms naturally lend themselves to frame processing. One example is a stereo ping-pong delay plugin, where left and right channel data are both needed on a sample-by-sample basis. Another is a stereo-linked compressor, whose side-chain information is derived from both left and right inputs such that the left channel cannot be processed without the right channel, and vice-versa. Note that these algorithms may also be processed with buffers or sub-blocks, but you will need to modify the code to operate on multiple buffer or sub-block channels at once.

When you process audio frames, you fill in the corresponding PluginCore function (details follow):

virtual bool processAudioFrame(ProcessFrameInfo& processFrameInfo)
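Here is a minimal pass-through sketch of that function, assuming matched input and output channel counts; the member names (audioInputFrame, audioOutputFrame, numAudioOutChannels) follow the ProcessFrameInfo structure in pluginstructures.h, but verify them against your generated project:

//
// --- minimal pass-through sketch; replace the copy with your per-sample DSP
//     (assumes numAudioInChannels == numAudioOutChannels)
//
bool PluginCore::processAudioFrame(ProcessFrameInfo& processFrameInfo)
{
    for (uint32_t channel = 0; channel < processFrameInfo.numAudioOutChannels; channel++)
    {
        // --- copy each input sample in the frame to the corresponding output
        processFrameInfo.audioOutputFrame[channel] =
            processFrameInfo.audioInputFrame[channel];
    }
    return true;
}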

Entire Buffers

The most efficient processing occurs when you process buffers, because you are operating directly on the DAW buffers themselves rather than the smaller frame-based buffers that are set up for the frame processing function; those require an audio copying mechanism to move the data into and out of the DAW buffers.

  • Processing Independent Channels: in this case you loop over each channel-buffer and process them as unrelated data; typically you use an outer for( ) loop on the channels and an inner for( ) loop on the audio samples in each buffer
  • Processing Inter-connected Channels: when your algorithm requires individual samples from each channel, you will usually use a for( ) loop over the number of samples in the buffer, then use an algorithm that combines samples from each channel buffer in succession, processing them and writing them to the output buffers in the same order

You have two choices for buffer processing. You may either override and implement the base class function:

virtual bool processAudioBuffers(ProcessBufferInfo& processBufferInfo)
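A minimal override implementing the independent-channels pattern from the list above might look like the following sketch; the ProcessBufferInfo member names (inputs, outputs, numAudioOutChannels, numFramesToProcess) should be verified against pluginstructures.h in your project:

//
// --- sketch: independent-channel pass-through over the entire DAW buffer
//     (outer loop on channels, inner loop on samples)
//
bool PluginCore::processAudioBuffers(ProcessBufferInfo& processBufferInfo)
{
    for (uint32_t channel = 0; channel < processBufferInfo.numAudioOutChannels; channel++)
    {
        for (uint32_t sample = 0; sample < processBufferInfo.numFramesToProcess; sample++)
        {
            // --- per-channel DSP goes here; this is a simple copy
            processBufferInfo.outputs[channel][sample] =
                processBufferInfo.inputs[channel][sample];
        }
    }
    return true;
}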

Or, you may simply use the sub-block processing function (described below) and treat each block as a complete buffer. You will still be operating directly on the DAW buffers, so this is very nearly as CPU-efficient as buffer processing; it only incurs a function call that takes a reference as an argument, so there is no local copying (i.e. it's fast). I use this method when I do buffer processing.

Sub-Blocks

When you process sub-blocks, you are operating on smaller pieces of the DAW buffers, but the operation is still direct with no audio sample copies being made. You define the size of these blocks, indicating the number of samples per channel in each sub-block. The processing is the same as for buffers, only your loops will be smaller. To help you with both sub-block and buffer processing, the PluginCore comes pre-loaded with two functions, one for FX and the other for synths, that you may use as templates or starting points. When you process sub-blocks, the following PluginCore function is called; here is the code that is built into each new plugin project:

// --- implement block processing
bool PluginCore::processAudioBlock(ProcessBlockInfo& processBlockInfo)
{
    // --- FX or Synth Render
    //     call your block processing function here

    // --- Synth
    if (getPluginType() == kSynthPlugin)
        renderSynthSilence(processBlockInfo);

    // --- or FX
    else if (getPluginType() == kFXPlugin)
        renderFXPassThrough(processBlockInfo);

    return true;
}

Here is the code for the two example helper functions.

  • Synth: Notice how the MIDI events are processed at the top of the function, then the synth output is rendered. See my Synth book for more details on this paradigm, and see MIDI Message Processing below
  • FX: the input samples are simply copied to the output sample locations in the buffer; MIDI is sample-accurate, and MIDI messages will be inserted as the samples are processed (see MIDI Message Processing)
//
// --- Synth Silence
//
bool PluginCore::renderSynthSilence(ProcessBlockInfo& blockInfo)
{
    // --- process all MIDI events in this block (same as SynthLab)
    uint32_t midiEvents = blockInfo.getMidiEventCount();
    for (uint32_t i = 0; i < midiEvents; i++)
    {
        // --- get the event
        midiEvent event = *blockInfo.getMidiEvent(i);

        // --- do something with it...
        // myMIDIMessageHandler(event); // <-- you write this
    }

    // --- render a block of audio; here it is silence but in your synth
    //     it will likely be dependent on the MIDI processing you just did above
    for (uint32_t sample = blockInfo.blockStartIndex, i = 0;
         sample < blockInfo.blockStartIndex + blockInfo.blockSize;
         sample++, i++)
    {
        // --- write outputs
        for (uint32_t channel = 0; channel < blockInfo.numAudioOutChannels; channel++)
        {
            // --- silence (or, your synthesized block of samples)
            blockInfo.outputs[channel][sample] = 0.0;
        }
    }
    return true;
}
//
// --- FX Pass-Through
//
bool PluginCore::renderFXPassThrough(ProcessBlockInfo& blockInfo)
{
    // --- block processing -- write to outputs
    for (uint32_t sample = blockInfo.blockStartIndex, i = 0;
         sample < blockInfo.blockStartIndex + blockInfo.blockSize;
         sample++, i++)
    {
        // --- handles multiple channels, but up to you for bookkeeping
        for (uint32_t channel = 0; channel < blockInfo.numAudioOutChannels; channel++)
        {
            // --- pass through code, or your processed FX version
            blockInfo.outputs[channel][sample] = blockInfo.inputs[channel][sample];
        }
    }
    return true;
}
For reference, the ProcessBlockInfo structure (defined in pluginstructures.h) supplies the members used above: inputs and outputs are the audio input and output buffer pointers, blockStartIndex is the starting sample index of this block within the DAW buffer, blockSize is the number of samples in this block, and numAudioOutChannels is the audio output channel count. The midiEvent structure (also in pluginstructures.h) packages the information about a single MIDI event.
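Adapting the pass-through template into real processing usually amounts to replacing the inner assignment with your DSP. As an illustrative sketch (not SDK code), here is a fixed-gain variant of renderFXPassThrough; the function name renderFXGain and the constant kGain are hypothetical, and in a real plugin the gain would come from a bound plugin parameter:

//
// --- FX Fixed Gain (hypothetical example, modeled on the pass-through above)
//
bool renderFXGain(ProcessBlockInfo& blockInfo)
{
    const float kGain = 0.5f; // about -6 dB; hypothetical fixed value

    for (uint32_t sample = blockInfo.blockStartIndex;
         sample < blockInfo.blockStartIndex + blockInfo.blockSize;
         sample++)
    {
        for (uint32_t channel = 0; channel < blockInfo.numAudioOutChannels; channel++)
        {
            // --- scaled copy of the input sample
            blockInfo.outputs[channel][sample] = kGain * blockInfo.inputs[channel][sample];
        }
    }
    return true;
}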

Changing the Processing Paradigm

If you change your mind later, or you want to experiment with the different processing types, you do NOT need to re-run CMake or use the ASPiKreator to make another project; instead, you may edit the plugindescription.h file directly (this may only be done after the project has been created with CMake). There you will find the boolean and uint32_t variables that define the processing. You set them as follows:

Frame Processing

  • Set kProcessFrames to true
  • kBlockSize will be ignored
//
// --- Plugin Options
const bool kProcessFrames = true;
const uint32_t kBlockSize = DEFAULT_AUDIO_BLOCK_SIZE;
//

Buffer Processing

  • Set kProcessFrames to false
  • Set kBlockSize to WANT_WHOLE_BUFFER (which is a constant defined as 0)
//
// --- Plugin Options
const bool kProcessFrames = false;
const uint32_t kBlockSize = WANT_WHOLE_BUFFER;
//

Sub-Block Processing

  • Set kProcessFrames to false
  • Set kBlockSize to the value you want:
  • the default size DEFAULT_AUDIO_BLOCK_SIZE, defined as 64 samples (there is a reason for this value that is tied to software synthesis and explained in my Synth book)
  • another block size, e.g. 512 samples

To set up sub-block processing with the default block size:

//
// --- Plugin Options
const bool kProcessFrames = false;
const uint32_t kBlockSize = DEFAULT_AUDIO_BLOCK_SIZE;
//

To set up sub-block processing with 512-sample blocks (as an example):

//
// --- Plugin Options
const bool kProcessFrames = false;
const uint32_t kBlockSize = 512;
//