ASPiK SDK
The processAudioFrame function might be the most important function we implement, since it is where we do the DSP magic for the plugin. If you want to process buffers instead, you should still take note of this section, as many of the paradigms are similar. The frame processing function receives all of its information in a single ProcessFrameInfo structure argument.
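A minimal sketch of that function's declaration is shown below; the exact signature and the PluginCore class name come from the files that ASPiK generates for your project, so treat this as illustrative rather than definitive:

    // sketch of the frame processing override in plugincore.h
    virtual bool processAudioFrame(ProcessFrameInfo& processFrameInfo);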
Let's examine that function argument's members:
At the top of the structure are the audio input and output pointers. Each frame of data consists of one sample from each channel, packed into an array of float values. Floats are chosen because they are common across all APIs, though VST3 does allow for processing doubles as well. The arrays are passed by pointer into the frame processing function. Just below these array pointer declarations are the unsigned integers (uint32_t) that tell you how many channels there are in a given frame array; these are easy to interpret: numInputChannels is the array size of the audioInputFrame array, while numOutputChannels is the array size of the audioOutputFrame array. The audio samples are packed into the arrays according to the standardized audio file formats, which are published on the internet; you can also find a very concise description of the channel packing formats in the vstspeaker.h file inside the VST3 SDK.
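To make that packing concrete, here is a minimal pass-through sketch using the member names just described; the exact spellings live in pluginstructures.h, so verify them against your SDK version:

    bool PluginCore::processAudioFrame(ProcessFrameInfo& processFrameInfo)
    {
        for (uint32_t ch = 0; ch < processFrameInfo.numOutputChannels; ch++)
        {
            // reuse input channel 0 when there are fewer inputs than
            // outputs (e.g. a mono-in/stereo-out configuration)
            uint32_t inCh = (ch < processFrameInfo.numInputChannels) ? ch : 0;

            // one float sample per channel in each frame array
            processFrameInfo.audioOutputFrame[ch] =
                processFrameInfo.audioInputFrame[inCh];
        }
        return true; // true signals that the frame was processed
    }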
The audioInputFrame and audioOutputFrame arrays hold the frames for the audio input and output busses. The auxAudioInputFrame array holds audio data coming from a side-chain; we will use this information in some of our side-chainable plugins in later chapters. The auxAudioOutputFrame array is reserved for future use, should plugin APIs begin to implement auxiliary output channels (none do so far).
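As a quick sketch of how that side-chain data might be read inside processAudioFrame (assuming the member names above; our later side-chainable plugins will flesh this out):

    // hosts only fill the aux frame when a side-chain is connected,
    // so guard against a null pointer before reading
    float sideChainSample = 0.0f;
    if (processFrameInfo.auxAudioInputFrame != nullptr)
        sideChainSample = processFrameInfo.auxAudioInputFrame[0]; // side-chain channel 0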
The ChannelIOConfig variables describe more about the audio I/O configurations than the channel counts alone can convey. The ChannelIOConfig structure holds two member variables, one describing the input and the other the output channel configuration, each encoded as a channelFormat enumeration. You can find the channelFormat declaration in the pluginstructures.h file. These are especially important for multi-channel plugins. For example, you will see that there are multiple channel formats for a given channel count: kCF7p1Sony, kCF7p1DTS, and kCF7p1Proximity all specify eight total channels (known as “7.1”), yet there are three different channel format specifications to deal with. In addition, your plugin may feature very different configurations from input to output; a fold-down plugin may accept a 7.1 input and deliver a plain stereo output.
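As an illustration, a fold-down plugin might branch on those two members; this sketch assumes the structure is exposed as processFrameInfo.channelIOConfig with inputChannelFormat and outputChannelFormat members, and that kCFStereo names the plain stereo format (check pluginstructures.h for the exact identifiers):

    // detect a 7.1 (Sony) to stereo fold-down request
    bool isSonyFoldDown =
        processFrameInfo.channelIOConfig.inputChannelFormat  == kCF7p1Sony &&
        processFrameInfo.channelIOConfig.outputChannelFormat == kCFStereo;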
Following the channel I/O information is the frame counter variable, which indicates which frame of the buffer is being processed; we will pass this variable to the MIDI event queue firing mechanism. The HostInfo structure holds information from the host about the current buffer of data being processed, including the current session Beats Per Minute (BPM), the time signature numerator and denominator, and the audio track time at which the buffer occurs. Even more information is available, such as the musical bar position, but these values depend on both the API and the DAW that implements the specification; many of them are designated as optional, and many DAWs ignore them.

The last item in the structure is a pointer to a MIDI event queue that holds MIDI events gathered at the same time as the audio buffer and time-stamped with index values that correspond to locations in the audio buffer. The entire MIDI queuing operation is handled in the ASPiK plugin shell code, and you don't need to worry about it: as MIDI events are fired, your MIDI handlers will be called in sequence and in a thread-safe manner.
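For example, a tempo-synchronized effect might pull the session BPM from the host information each frame; this sketch assumes the structure is exposed as a hostInfo pointer with a dBPM member (check pluginstructures.h for the exact names, and remember that hosts may omit optional fields):

    // fall back to a default tempo when the host does not report one
    double bpm = 120.0;
    if (processFrameInfo.hostInfo && processFrameInfo.hostInfo->dBPM > 0.0)
        bpm = processFrameInfo.hostInfo->dBPM;

    // convert one quarter-note to seconds for a tempo-synced delay time
    double quarterNoteSeconds = 60.0 / bpm;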