Polyphonic Custom Filters (scriptnode) how?
-
I have a custom C++ Filter node.
For some reason it doesn't work inside of a polyphonic scriptnode synth! I've noticed that the default filter nodes all have poly modes.
I assume the framework is all there, and that I just need to tap into it? Is this difficult?

#pragma once
#include <JuceHeader.h>
#include "src/ScopedValue.h"

namespace project
{
using namespace juce;
using namespace hise;
using namespace scriptnode;

template <int NV> struct Moog_Filter : public data::base
{
    SNEX_NODE(Moog_Filter);

    struct MetadataClass
    {
        SN_NODE_ID("Moog_Filter");
    };

    // Node properties
    static constexpr bool isModNode() { return false; }
    static constexpr bool isPolyphonic() { return NV > 1; }
    static constexpr bool hasTail() { return false; }
    static constexpr bool isSuspendedOnSilence() { return false; }
    static constexpr int getFixChannelAmount() { return 2; }

    static constexpr int NumTables = 0;
    static constexpr int NumSliderPacks = 0;
    static constexpr int NumAudioFiles = 0;
    static constexpr int NumFilters = 0;
    static constexpr int NumDisplayBuffers = 0;

    // Prepare: Called on init, and when sample rate changes
    void prepare(PrepareSpecs specs)
    {
        float sampleRate = specs.sampleRate;
        leftChannelEffect.prepare(sampleRate);
        rightChannelEffect.prepare(sampleRate);

        // Initialize coefficients with default parameter values
        leftChannelEffect.updateCoefficients(cutoffFrequency, resonance);
        rightChannelEffect.updateCoefficients(cutoffFrequency, resonance);
    }

    // Reset: Called when the plugin is reloaded
    void reset()
    {
        leftChannelEffect.reset();
        rightChannelEffect.reset();
    }

    // Process: Audio blocks enter the script here
    template <typename ProcessDataType>
    void process(ProcessDataType& data)
    {
        auto& fixData = data.template as<ProcessData<getFixChannelAmount()>>();
        auto audioBlock = fixData.toAudioBlock();

        // Get pointers to channel data
        auto* leftChannelData = audioBlock.getChannelPointer(0);
        auto* rightChannelData = audioBlock.getChannelPointer(1);

        // Correctly get the number of samples
        int numSamples = (int)data.getNumSamples();

        // Process each channel with the filter
        leftChannelEffect.process(leftChannelData, numSamples);
        rightChannelEffect.process(rightChannelData, numSamples);
    }

    // AudioEffect class implementing the Moog ladder filter
    class AudioEffect
    {
    public:
        AudioEffect() = default;

        void prepare(float sampleRate)
        {
            fs = sampleRate;
            reset();
        }

        void reset()
        {
            // Initialize state variables to zero
            s1 = s2 = s3 = s4 = 0.0f;
        }

        void updateCoefficients(float fc, float resonanceValue)
        {
            // Limit cutoff frequency to Nyquist frequency
            float fcClamped = std::clamp(fc, 20.0f, 0.49f * fs);

            // Pre-warp the cutoff frequency
            float g = std::tan(MathConstants<float>::pi * fcClamped / fs);

            // Set the resonance parameter (k) for the Moog ladder filter
            k = 4.0f * resonanceValue; // Max resonance at k = 4

            // Compute filter coefficients
            G = g / (1.0f + g);

            // Calculate gain compensation
            float dbGain = quadratic_curve(resonanceValue, a, b, c);
            gainCompensation = std::pow(10.0f, dbGain / 20.0f);
        }

        void process(float* samples, int numSamples)
        {
            for (int i = 0; i < numSamples; ++i)
            {
                samples[i] = processSample(samples[i]);
            }
        }

    private:
        // Moog ladder filter processing variables
        float fs = 44100.0f;           // Sample rate
        float k = 0.0f;                // Resonance parameter
        float G = 0.0f;                // Filter coefficient
        float gainCompensation = 1.0f; // Gain compensation factor

        // State variables for each stage
        float s1 = 0.0f;
        float s2 = 0.0f;
        float s3 = 0.0f;
        float s4 = 0.0f;

        // Curve fit parameters
        static constexpr float a = -8.0f;
        static constexpr float b = 18.2f;
        static constexpr float c = 1.5f;

        inline float processSample(float input)
        {
            // Input with feedback
            float u = input - k * s4;

            // Four cascaded one-pole filters
            s1 = G * (u - s1) + s1;
            s2 = G * (s1 - s2) + s2;
            s3 = G * (s2 - s3) + s3;
            s4 = G * (s3 - s4) + s4;

            // Output is the last stage, apply gain compensation
            return s4 * gainCompensation;
        }

        static float quadratic_curve(float x, float a, float b, float c)
        {
            return a * x * x + b * x + c;
        }
    };

    template <int P> void setParameter(double v)
    {
        if (P == 0)
        {
            cutoffFrequency = static_cast<float>(v);
            leftChannelEffect.updateCoefficients(cutoffFrequency, resonance);
            rightChannelEffect.updateCoefficients(cutoffFrequency, resonance);
        }
        else if (P == 1)
        {
            resonance = static_cast<float>(v);
            leftChannelEffect.updateCoefficients(cutoffFrequency, resonance);
            rightChannelEffect.updateCoefficients(cutoffFrequency, resonance);
        }
    }

    // Create parameters on the GUI
    void createParameters(ParameterDataList& data)
    {
        {
            parameter::data p("Cutoff Frequency", { 20.0, 20000.0, 1.0 });
            registerCallback<0>(p);
            p.setDefaultValue(1000.0);
            data.add(std::move(p));
        }
        {
            parameter::data p("Resonance", { 0.0, 1.0, 0.01 });
            registerCallback<1>(p);
            p.setDefaultValue(0.0);
            data.add(std::move(p));
        }
    }

    // Interact with external data (e.g., an external buffer)
    void setExternalData(const ExternalData& data, int index) {}

    // Handle HISE events: Process MIDI or other events
    void handleHiseEvent(HiseEvent& e) {}

    // processFrame: Needed for compiler, does nothing
    template <typename FrameDataType>
    void processFrame(FrameDataType& data) {}

private:
    // Create instances of AudioEffect for left and right channels
    AudioEffect leftChannelEffect;
    AudioEffect rightChannelEffect;

    // Filter parameters
    float cutoffFrequency = 1000.0f;
    float resonance = 0.0f;
};
}
-
-
@griffinboy
bump -
womp womp
-
Your post is a little bit vague for this niche part of HISE, which I think is why you're not getting any replies. It assumes an understanding of the workflow, and you're the one who made the tutorials for it; up until that point it was mostly black magic to a lot of people here.
Where does this code go? How would one test this out? Can I just stick it in SNEX?
When importing an RNBO patch, you disable polyphony in the RNBO exporter because HISE lets you tick it when creating the template. Is there something like that for C++?
-
You need to wrap your single filter class into a PolyData container:
struct GriffinBoyFilter
{
    void process(float* data, int numSamples);
};

// in the node:
template <int NV> struct node
{
    PolyData<GriffinBoyFilter, NV> filters;
};
See:
https://docs.hise.dev/scriptnode/snex_api/containers/polydata.html
This basically creates one filter per voice and automatically selects the one that is assigned to the currently rendered voice.
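In the node callbacks it would look roughly like this (untested sketch using the filter class from above; see the docs page for the full PolyData API):

PolyData<GriffinBoyFilter, NV> filters;

void prepare(PrepareSpecs specs)
{
    // Pass the specs on so the container knows the voice amount
    filters.prepare(specs);
}

void reset()
{
    // The iterator visits every voice element
    // (assuming the filter class also has a reset() method)
    for (auto& f : filters)
        f.reset();
}

template <typename ProcessDataType>
void process(ProcessDataType& data)
{
    // get() returns the filter that belongs to the voice currently being rendered
    auto& f = filters.get();

    auto& fixData = data.template as<ProcessData<1>>(); // assuming a mono node here
    auto block = fixData.toAudioBlock();

    f.process(block.getChannelPointer(0), (int)data.getNumSamples());
}

The iterator is also what you would use in setParameter() when you want to hit every voice.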
-
Haha yeah whoops. Sorry.
I assumed it would have to be a question for Christoph; maybe I should've addressed him directly xD -
Thank you I missed that in the docs!
-
-
Sorry, I'm still not having any luck!
#pragma once
#include <JuceHeader.h>
#include "src/ScopedValue.h"

namespace project
{
using namespace juce;
using namespace hise;
using namespace scriptnode;

template <int NV> struct Moog_Filter : public data::base
{
    SNEX_NODE(Moog_Filter);

    struct MetadataClass
    {
        SN_NODE_ID("Moog_Filter");
    };

    // Node properties
    static constexpr bool isModNode() { return false; }
    static constexpr bool isPolyphonic() { return NV > 1; }
    static constexpr bool hasTail() { return false; }
    static constexpr bool isSuspendedOnSilence() { return false; }
    static constexpr int getFixChannelAmount() { return 1; } // Mono processing

    static constexpr int NumTables = 0;
    static constexpr int NumSliderPacks = 0;
    static constexpr int NumAudioFiles = 0;
    static constexpr int NumFilters = 0;
    static constexpr int NumDisplayBuffers = 0;

    // Define AudioEffect class here
    class AudioEffect
    {
    public:
        AudioEffect() = default;

        void prepare(float sampleRate)
        {
            fs = sampleRate;
            reset();
        }

        void reset()
        {
            // Initialize state variables to zero
            s1 = s2 = s3 = s4 = 0.0f;
        }

        void updateCoefficients(float fc, float resonanceValue)
        {
            // Limit cutoff frequency to Nyquist frequency
            float fcClamped = std::clamp(fc, 20.0f, 0.49f * fs);

            // Pre-warp the cutoff frequency
            float g = std::tan(MathConstants<float>::pi * fcClamped / fs);

            // Set the resonance parameter (k) for the Moog ladder filter
            k = 4.0f * resonanceValue; // Max resonance at k = 4

            // Compute filter coefficients
            G = g / (1.0f + g);

            // Calculate gain compensation
            float dbGain = quadratic_curve(resonanceValue, a, b, c);
            gainCompensation = std::pow(10.0f, dbGain / 20.0f);
        }

        void process(float* samples, int numSamples)
        {
            for (int i = 0; i < numSamples; ++i)
            {
                samples[i] = processSample(samples[i]);
            }
        }

    private:
        // Moog ladder filter processing variables
        float fs = 44100.0f;           // Sample rate
        float k = 0.0f;                // Resonance parameter
        float G = 0.0f;                // Filter coefficient
        float gainCompensation = 1.0f; // Gain compensation factor

        // State variables for each stage
        float s1 = 0.0f;
        float s2 = 0.0f;
        float s3 = 0.0f;
        float s4 = 0.0f;

        // Curve fit parameters
        static constexpr float a = -8.0f;
        static constexpr float b = 18.2f;
        static constexpr float c = 1.5f;

        inline float processSample(float input)
        {
            // Input with feedback
            float u = input - k * s4;

            // Four cascaded one-pole filters
            s1 = G * (u - s1) + s1;
            s2 = G * (s1 - s2) + s2;
            s3 = G * (s2 - s3) + s3;
            s4 = G * (s3 - s4) + s4;

            // Output is the last stage, apply gain compensation
            return s4 * gainCompensation;
        }

        static float quadratic_curve(float x, float a, float b, float c)
        {
            return a * x * x + b * x + c;
        }
    };

    // Wrap AudioEffect in a PolyData container for polyphonic handling
    PolyData<AudioEffect, NV> leftChannelEffect;

    // Prepare: Called on init, and when sample rate changes
    void prepare(PrepareSpecs specs)
    {
        // Forward prepare call to PolyData container
        leftChannelEffect.prepare(specs);
    }

    // Reset: Called when the plugin is reloaded
    void reset()
    {
        // Use iterator to reset each voice
        for (auto& effect : leftChannelEffect)
        {
            effect.reset();
        }
    }

    // Process: Audio blocks enter the script here
    template <typename ProcessDataType>
    void process(ProcessDataType& data)
    {
        auto& fixData = data.template as<ProcessData<1>>(); // Process only one channel
        auto audioBlock = fixData.toAudioBlock();

        // Get pointer to left channel data
        auto* leftChannelData = audioBlock.getChannelPointer(0);

        // Correctly get the number of samples
        int numSamples = (int)data.getNumSamples();

        // Process each voice in the PolyData container
        for (auto& effect : leftChannelEffect)
        {
            effect.process(leftChannelData, numSamples);
        }
    }

    template <int P> void setParameter(double v)
    {
        // Use iterator to update coefficients for each voice
        for (auto& effect : leftChannelEffect)
        {
            if (P == 0)
            {
                cutoffFrequency = static_cast<float>(v);
                effect.updateCoefficients(cutoffFrequency, resonance);
            }
            else if (P == 1)
            {
                resonance = static_cast<float>(v);
                effect.updateCoefficients(cutoffFrequency, resonance);
            }
        }
    }

    // Create parameters on the GUI
    void createParameters(ParameterDataList& data)
    {
        {
            parameter::data p("Cutoff Frequency", { 20.0, 20000.0, 1.0 });
            registerCallback<0>(p);
            p.setDefaultValue(1000.0);
            data.add(std::move(p));
        }
        {
            parameter::data p("Resonance", { 0.0, 1.0, 0.01 });
            registerCallback<1>(p);
            p.setDefaultValue(0.0);
            data.add(std::move(p));
        }
    }

    // Interact with external data (e.g., an external buffer)
    void setExternalData(const ExternalData& data, int index) {}

    // Handle HISE events: Process MIDI or other events
    void handleHiseEvent(HiseEvent& e) {}

    // processFrame: Needed for compiler, does nothing
    template <typename FrameDataType>
    void processFrame(FrameDataType& data) {}

private:
    // Filter parameters
    float cutoffFrequency = 1000.0f;
    float resonance = 0.0f;
};
}
The audio samples aren't being affected at all. I don't think I understand how to use PolyData.
Looking at it myself, I've been confused, and as you can see, AI hasn't been particularly useful either : (
-
-
@griffinboy Might be silly because I'm sure you've thought about this 100 times already, but are you sure you're in a Polyphonic Script FX context?
-
Yep haha, thank you though.
I wish I didn't have to ask such questions, but I am slow to understand the HISE source code : (
-
@griffinboy Yep tested and working here
-
@griffinboy Don't be harsh on yourself, no one has advanced the C++ topic publicly as much as you have!
I wouldn't have made my first steps without you! -
Ah, my code will probably not work when run since it has a dependency; I should probably include it in this post.
I might rewrite this post to make it easier for Christoph to help me out.
I just need to figure out how to create an instance of a class for each voice, and process audio through that -
@griffinboy said in Polyphonic Custom Filters (scriptnode) how?:
Ah, my code will probably not work when run since it has a dependency
Well I just commented out the dependency and it kinda works...
I just need to figure out how to create an instance of a class for each voice
But that's what the wrapper does with NV, isn't it? -
I don't believe so, or at least not for my node. It was not producing any audible effect, whereas inside a regular script FX (non-poly) it was working.
I have just realised that I may have solved this before: I wrote a pretty bad sampler a few months ago using AI and had unwittingly used PolyData, borrowing the usage from the regular sampler inside HISE. I'm going to investigate that now.
-
Code looks OKish on the first glance, but you need to call updateCoefficients in your prepare callback too, so that it can initialize the coefficients when the sample rate is set (otherwise they might be zero, because the parameters might or might not be set before that call).
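Something like this (rough sketch, reusing the member names from your code):

void prepare(PrepareSpecs specs)
{
    leftChannelEffect.prepare(specs);

    // Set the sample rate and the initial coefficients for every voice
    for (auto& effect : leftChannelEffect)
    {
        effect.prepare((float)specs.sampleRate);
        effect.updateCoefficients(cutoffFrequency, resonance);
    }
}
-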
I'll try with a more simple effect.
-
@griffinboy Well in my video above it works, and it is just your pasted code, with the node inserted in a poly FX.
@Christoph-Hart noticed that the coefficients weren't updated as they should be, but I got lucky and had the filter working straight away.
-
Okay, maybe I should update my version of HISE.
I'm very surprised that everything seems to be in order lol. For me it still only works when used monophonically.
-
@Christoph-Hart Just noticed that when two or more notes are held, changing the resonance makes the frequency update to the last note's value for all notes. What could be done to prevent this?