Frequently Asked Questions (FAQ)

  • Communication
  • Processing
  • GUI-Editor
  • Compatibility with VST2.x or VST1
  • Persistence
  • Miscellaneous



  • Communication


    Q: How should I communicate between the 'Processing' and the 'User Interface'?

    With 'Processing' we mean the code implementing the IAudioProcessor interface (the kVstAudioEffectClass component), and with 'User Interface' the editor part implementing the IEditController interface (the kVstComponentControllerClass component).

  • If you need to communicate parameter changes to the user interface, such as metering values and peaks, define the parameter as an exported type. The parameter is then associated with an ID. In the process function you can inform the host of changes by using the outputParameterChanges (from ProcessData): you add the parameter (ID) to a list that the host uses to send the changes back to the user interface at the correct time.


  • If you need to exchange more data than just parameter changes, such as tempo, sample rate, or any other data resulting from your processing, you can use the IMessage interface (see the AGain example). However, be careful to send the data from a 'timer' thread and not directly from the process function, i.e. never from within a 'process' call.

    Q: I want to implement an audio meter in my user interface. How do I do this?

    See How should I communicate between the 'Processing' and the 'User Interface'?


    Q: How does the host send automation data to my VST3 plug-in?

    Automation data is sent to the audio processing method as part of the data passed as a parameter to the IAudioProcessor::process (processData) method.

    		// in your IAudioProcessor implementation:
    		tresult PLUGIN_API MyProcessor::process (ProcessData& processData)
    		{
    			IParameterChanges* paramChanges = processData.inputParameterChanges;
    			//...
    		}
    	
    Automation data can be considered as a list of parameter changes just as if the user interface code was calling IEditController::setParamNormalized (parameterTag, newValue). Check the AGain example to see how it could be implemented.



    Processing


    Q: How does Audio Processing Bypass work?

    In order to implement audio process bypassing, the plug-in can export a parameter which is additionally and exclusively flagged as having the attribute kIsBypass. When the user activates the plug-in bypass in the host, like all parameter changes, this is sent as part of the parameter data passed to the IAudioProcessor::process method.

    The implementation of the bypass feature is entirely the responsibility of the plug-in: the IAudioProcessor::process method will continue to be called. The plug-in must take care of artifact-free switching (ramping, parallel processing, or algorithm changes) and must also delay the bypass action accordingly if the plug-in introduces latency.


    Q: Must the host deliver valid initialized Audio buffers if the associated bus is deactivated?

    In a correctly implemented host, if an input or output bus that exists in the host has become disconnected from the plug-in, the plug-in will receive this disconnection information.

    Additionally, a plug-in with a disconnected input bus will continue to receive default silence buffers, just as a plug-in with a disconnected output bus will continue to receive default 'nirvana' buffers.


    Q: Can the max. sample block size change while the plug-in is processing?

    The max. sample block size (maxSamplesPerBlock) can change during the lifetime of a plug-in, but NOT while the audio component is active. Therefore the max. sample block size can never change during or between process calls while the plug-in is active.

    If the host changes the max. sample block size it specifically calls the following.

    	AudioComponent::setActive (false);
    	AudioComponent::setupProcessing (...);
    	AudioComponent::setActive (true);
    
    Note that ProcessData::numSamples, which indicates how many samples are used in a process call, can change from call to call, but will never be bigger than maxSamplesPerBlock.


    Q: Can the sample rate change while the plug-in is processing?

    No. The same rules apply as for the max. sample block size (see Can the max. sample block size change while the plug-in is processing?).


    Q: Could the host call the process function without Audio buffers?

    Yes, the host could call IAudioProcessor::process without buffers (numInputs and numOutputs are zeroed), in order to flush parameters (from host to plug-in).


    Q: What is a Side-chain?

    In audio applications, some plug-ins allow for a secondary signal to be made available to the plug-in and act as a controller of one or more parameters in the processing.
    Such a signal is commonly called a Side-chain Signal or Side-chain Input.

    Examples:

  • If a recorded kick drum is considered well played, but the recording of the bass player's part shows that he regularly plays slightly ahead of the kick drum, a plug-in with a 'Gating' function on the bass part could use the kick drum signal as a side-chain to 'trim' the bass part precisely to that of the kick.


  • Another application is to automatically lower the level of a musical background when another signal, such as a voice, reaches a certain level. In this case a plug-in with a 'Ducking' function would be used, where the main musical signal is reduced while the voice signal is applied to the side-chain input.


  • A delay's mix parameter could be controlled by a side-chain input signal - to make the amount of delay signal proportional to the level of another.


  • The side-chain could be used as an additional modulation source instead of cyclic forms of modulation.


  • From the plug-in perspective, side-chain inputs and/or outputs are additional inputs and outputs which can be enabled or disabled by the host.
    The host (if it supports side-chaining) will provide the user a way to route signal paths to these side-chain inputs, or from side-chain outputs to other signal inputs.


    Q: How can I implement a Side-chain path into my plug-in?

    In AudioEffect::initialize (FUnknown* context) you must add the required bus- and speaker configuration of your plug-in. For example, if your plug-in works on one input and one output bus, both stereo, the appropriate code snippet would look like this:

    		addAudioInput  (USTRING ("Stereo In"),  SpeakerArr::kStereo);
    		addAudioOutput (USTRING ("Stereo Out"), SpeakerArr::kStereo);
    		
    In addition, adding a stereo side chain bus would look like this:
    		addAudioInput  (USTRING ("Aux In"),  SpeakerArr::kStereo, kAux);
    	


    Q: My plug-in is capable of processing all possible channel configurations. What type of speaker arrangement should I select when adding busses?

    Take the configuration your plug-in is most likely to be used with. For a 5.1-surround setup that would be the following:

    	addAudioInput  (USTRING ("Surround In"),  SpeakerArr::k51);
    	addAudioOutput (USTRING ("Surround Out"), SpeakerArr::k51);
    	
    But when the host calls:
    	AudioEffect::setBusArrangements (SpeakerArrangement* inputs, int32 numIns, SpeakerArrangement* outputs, int32 numOuts),
    	
    the host is informing your plug-in of the current speaker arrangement of the track it was inserted in. You should return kResultOk if you accept this arrangement, or kResultFalse if you do not. Note that if you reject a setBusArrangements call and return kResultFalse, the host calls:
    	AudioEffect::getBusArrangement (BusDirection dir, int32 busIndex, SpeakerArrangement& arrangement)
    	
    where you have the chance to give the parameter 'arrangement' the value of the speaker arrangement your plug-in does accept for this given bus.


    Q: How are speaker arrangement settings handled for FX plug-ins?

    After instantiation of the plug-in, the host calls setBusArrangements with a default configuration (depending on the current channel configuration). If the plug-in accepts it (by returning kResultOk), the host will continue with this configuration.
    If not, the host asks the plug-in for its preferred configuration with AudioEffect::getBusArrangement.


    Q: My plug-in has mono input and stereo output. How does VST3 handle this ?

    There are two ways to instantiate a plug-in like this.

  • In AudioEffect::initialize (FUnknown* context) you add one mono input and one stereo output bus.
    		addAudioInput  (USTRING ("Mono In"),    SpeakerArr::kMono);
    		addAudioOutput (USTRING ("Stereo Out"), SpeakerArr::kStereo);
    	
    With Cubase/Nuendo as the host, the plug-in, after being inserted into a stereo track, gets the left channel of the stereo input signal as its mono input. From this signal you can create a stereo output signal.

  • In AudioEffect::initialize (FUnknown* context) you add one stereo input and one stereo output bus.
    	addAudioInput  (USTRING ("Stereo In"),  SpeakerArr::kStereo);
    	addAudioOutput (USTRING ("Stereo Out"), SpeakerArr::kStereo);
    	
    For processing, the algorithm of your plug-in then takes the left channel only, or creates a new mono input signal by adding the samples of the left and right channels.



    GUI-Editor


    Q: The host doesn't open my plug-in UI, why?

    If you are not using VSTGUI, please check that you provide the correct object derived from EditorView or CPluginView, and that you override the function isPlatformTypeSupported ().



    Compatibility with VST2.x or VST1


    Q: How can I update my VST2 version of my plug-in to a VST3 version and be sure that Cubase will load it instead of my old one?

    You have to provide a special UID for your kVstAudioEffectClass and kVstComponentControllerClass components, based on its VST2 UniqueID (4 characters) and its plug-in name like this:

    static void convertVST2UID_To_FUID (FUID& newOne, int32 myVST2UID_4Chars, const char* pluginName, bool forControllerUID = false)
    {
    	char uidString[33];
    	
    	int32 vstfxid;
    	if (forControllerUID)
    		vstfxid = (('V' << 16) | ('S' << 8) | 'E');
    	else
    		vstfxid = (('V' << 16) | ('S' << 8) | 'T');
    	
    	char vstfxidStr[7] = {0};
    	sprintf (vstfxidStr, "%06X", vstfxid);
    	
    	char uidStr[9] = {0};
    	sprintf (uidStr, "%08X", myVST2UID_4Chars);
    
    	strcpy (uidString, vstfxidStr);
    	strcat (uidString, uidStr);
    			
    	char nameidStr[3] = {0};
    	size_t len = strlen (pluginName);
    	
    	// !!!the pluginName has to be lower case!!!!
    	for (uint16 i = 0; i <= 8; i++)
    	{
    		uint8 c = i < len ? pluginName[i] : 0;
    		sprintf (nameidStr, "%02X", c);
    		strcat (uidString, nameidStr);
    	}
    	newOne.fromString (uidString);
    }


    Q: In VST2 the editor was able to access the processing part, named effect, directly. How can I do this in VST3?

    You cannot, and more importantly must not, do this. The processing part and the user interface part communicate via a messaging system. See How should I communicate between the 'Processing' and the 'User Interface'? for details.


    Q: Does VST3 implement methods like beginEdit and endEdit known from VST2?

    Yes. To write automation data, call the following in your controller class:

    	beginEdit (parameterTag);
    	performEdit (parameterTag, valueNormalized);
    	endEdit (parameterTag);
    	



    Persistence


    Q: How does persistence work?

    An instantiated plug-in often has state information that must be saved in order to properly re-instantiate that plug-in at a later time. A VST3 plug-in has two states which are saved and reloaded: its component state and its controller state.

    The sequence for saving is:

  • component->getState (compState)
  • controller->getState (ctrlState)

    The sequence for loading is:

  • component->setState (compState)
  • controller->setComponentState (compState)
  • controller->setState (ctrlState)

    In this last sequence you can see that the controller part also receives the component state; this allows the two parts to synchronize their states.


    Q: What's the difference between EditController::setComponentState and EditController::setState?

    After a preset is loaded, the host calls EditController::setComponentState and AudioEffect::setState, both delivering the same information.
    EditController::setState is called by the host so that the plug-in is able to update its controller-dependent parameters, e.g. the position of scroll bars. Prior to this, there should have been an EditController::getState call by the host, in which the plug-in wrote these very parameters into the stream.
    See How does persistence work? for details.



    Miscellaneous


    Q: How is a normalized value converted to a discrete integer value in VST3?

    If you have a parameter with, let's say, three stages (0,1,2), you could convert the normalized value like this:

    	int32 plainValue;
    
    	if (normalizedValue < 0.3333333)
    	    plainValue = 0;
    	else if (normalizedValue < 0.6666666)
    	    plainValue = 1;
    	else
    	    plainValue = 2;		
    	
    Or in more general terms:
    	int32 maxPlain = 2;
    	int32 minPlain = 0;
    	int32 plainValue;
    
    	int32 nSteps = maxPlain - minPlain + 1;
    
    	float step = 1.f / (float)nSteps;
    
    	if (normalizedValue == 1.)
    	    plainValue = maxPlain;
    	else
    	{
    	    int32 i;
    	    for (i = 0; i < nSteps; i++)
    	    {
    	        if (normalizedValue < (i + 1) * step)
    	        {
    	            plainValue = i + minPlain;				  
    	            break;
    	        }
    	    }
    	}
    This holds for minPlain values smaller than maxPlain values (minPlain < maxPlain).


    Q: What is the return value tresult?

    Almost all VST3 interface methods return a tresult value. This integer value makes it possible to return distinct error or success states (not only a boolean true or false).
    The different possible values are defined in funknown.h.
    Be careful when checking this return value, because a successful return is kResultOk, which has the integer value 0:

    	// this is WRONG!!!!!
    	if (component->setActive (true))
    	{
    	}
    
    	// this is CORRECT!!!!!
    	if (component->setActive (true) == kResultOk)
    	{
    	}
    	// or
    	// this is CORRECT too!!!!!
    	if (component->setActive (true) != kResultOk)
    	{
    		// error message....
    	}
    


    Copyright ©2008 Steinberg Media Technologies. All Rights Reserved.