    griffinboy
    Posts
    • [Free Dsp] Analog Filter (24dB/oct)

      C++ analog filter node based on this paper:
      https://www.researchgate.net/publication/344876889_Moog_Ladder_Filter_Generalizations_Based_on_State_Variable_Filters
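
      If you're curious about the building block behind it: the paper derives Moog-ladder-style responses from cascaded state-variable filters. Below is a minimal, self-contained sketch of one trapezoidal (TPT/ZDF) SVF stage of that kind, purely as an illustration. It is not the code inside Griffin_LadderFilter.h.

      #include <cmath>

      // One second-order ZDF state-variable filter stage (Simper/Zavalishin form).
      // Ladder-style filters of the kind described in the paper are built by
      // combining stages like this inside a global feedback loop.
      struct SVFStage
      {
          void prepare(double sampleRate, double cutoffHz, double damping)
          {
              constexpr double pi = 3.14159265358979323846;
              g = std::tan(pi * cutoffHz / sampleRate); // bilinear-warped cutoff
              k = damping;                              // 2R, controls resonance
              a1 = 1.0 / (1.0 + g * (g + k));
              ic1eq = ic2eq = 0.0;
          }

          // Returns the lowpass output for one input sample.
          double processLowpass(double in)
          {
              const double v3 = in - ic2eq;
              const double v1 = a1 * (ic1eq + g * v3); // bandpass state
              const double v2 = ic2eq + g * v1;        // lowpass state
              ic1eq = 2.0 * v1 - ic1eq;                // trapezoidal state update
              ic2eq = 2.0 * v2 - ic2eq;
              return v2;
          }

          double g = 0.0, k = 1.0, a1 = 0.0;
          double ic1eq = 0.0, ic2eq = 0.0;
      };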

      To use it in a Hise project, open your project folder and place the script file (Griffin_LadderFilter.h) under:
      DspNetworks > ThirdParty

      (screenshot)

      Then open your Hise project and compile the DSP networks as a dll.

      (screenshot)

      Now press OK, and you will be able to use the Node:

      (screenshot)

      The node should appear under the 'project' category when you open the scriptnode node browser.

      Download the script here:

      Polyphonic (stereo):
      https://1drv.ms/u/c/6c19b197e5968b36/EYVQsJa9FlpKldok2ThVUwMBVf3KnemfEeccDyzASGbGJw?e=XNq1PU

      Monophonic (stereo):
      https://1drv.ms/u/c/6c19b197e5968b36/EcGw46UnqYJMk1cu3bH2QZQBSWTXAOL6ggsusNrF-LrtZQ?e=FvMeGr

      24dB / Octave Filter Response:

      (screenshot)

      posted in ScriptNode
      griffinboy
    • [Tutorial] How to create a C++ custom node

      How to create a custom node, as promised.
      I apologise in advance for the quality of the video (it's quite dry and boring!), but the good news is that I have some nicer ones in the pipeline. I thought I should get this out as soon as possible, since some people were waiting on it.
      Hope it helps.

      Windows only; I'll do a Mac tutorial later.

      Template v1.1:

      // ==========================| NodeTemplate v1.0 : by Griffinboy |==========================
      
      #pragma once
      #include <JuceHeader.h>
      
      namespace project
      {
          using namespace juce;
          using namespace hise;
          using namespace scriptnode;
      
          template <int NV>
          // --- Replace with Name 
          struct ExternalNodeTemplate : public data::base
          {   // ------ Replace with Name 
              SNEX_NODE(ExternalNodeTemplate);
      
              struct MetadataClass
              { // --------- Replace with "Name"
                  SN_NODE_ID("ExternalNodeTemplate");
              };
      
              // ==========================| Node Properties |==========================
              static constexpr bool isModNode() { return false; }
              static constexpr bool isPolyphonic() { return NV > 1; }
              static constexpr bool hasTail() { return false; }
              static constexpr bool isSuspendedOnSilence() { return false; }
              static constexpr int getFixChannelAmount() { return 2; }
      
              static constexpr int NumTables = 0;
              static constexpr int NumSliderPacks = 0;
              static constexpr int NumAudioFiles = 0;
              static constexpr int NumFilters = 0;
              static constexpr int NumDisplayBuffers = 0;
      
              // ==========================| Global Variables |==========================
      
              // ==========================| Prepare |==========================
              // Called on init, and when sample rate changes
              void prepare(PrepareSpecs specs)
              {
                double sampleRate = specs.sampleRate;
                int numChannels = specs.numChannels;
      
                  leftChannelEffect.prepare(sampleRate, 5.0); // 5ms smoothing on the gain parameter
                  rightChannelEffect.prepare(sampleRate, 5.0);
              }
      
              // ==========================| Reset |==========================
              // Called when the plugin is reloaded
              void reset() {}
      
              // ==========================| Process |==========================
              // Blocks of audio enter the script here
              template <typename ProcessDataType>
              void process(ProcessDataType& data)
              {
                  // Convert the audio data to a fixed-channel format for efficient processing
                  auto& fixData = data.template as<ProcessData<getFixChannelAmount()>>();
      
                // Get pointers to each channel's audio data
                  // Changes to these variables will now directly modify the original audio buffer
                  auto audioBlock = fixData.toAudioBlock();
                  auto* leftChannelData = audioBlock.getChannelPointer(0);
                  auto* rightChannelData = audioBlock.getChannelPointer(1);
      
                  // Get the number of samples (one channel) for this block
                  int numSamples = data.getNumSamples();
      
                  // Pass each channel's audio to the appropriate AudioEffect class
                  leftChannelEffect.process(leftChannelData, numSamples);
                  rightChannelEffect.process(rightChannelData, numSamples);
              }
      
              // ==========================| AudioEffect Class |==========================
              class AudioEffect
              {
              public:
                  AudioEffect(float initialGain = 1.0f)
                  {
                      smoothGain.set(initialGain); // Initialize sfloat with initial gain
                  }
      
                  void prepare(double sampleRate, double timeInMilliseconds)
                  {
                      smoothGain.prepare(sampleRate, timeInMilliseconds);
                  }
      
                  void process(float* samples, int numSamples)
                  {
                      for (int i = 0; i < numSamples; ++i)
                      {
                          samples[i] *= smoothGain.advance(); // Apply the gain using the smoothed parameter
                      }
                  }
      
                  void updateGain(float newGain)
                  {
                      smoothGain.set(newGain);
                  }
      
              private:
                  sfloat smoothGain; // Declare smoothGain variable, using sfloat type for smoothing
              };
      
              // ==========================| Set Parameter |==========================
              template <int P>
              void setParameter(double v)
              {
                  if (P == 0)
                      leftChannelEffect.updateGain(static_cast<float>(v)); // Update gain for left channel
                  else if (P == 1)
                      rightChannelEffect.updateGain(static_cast<float>(v)); // Update gain for right channel
              }
      
              // ==========================| Create Parameters |==========================
              void createParameters(ParameterDataList& data)
              {
      
                  {   //                             { min, max,  step }                 { id } 
                      parameter::data p("Left Gain", { 0.1, 2.0, 0.01 }); registerCallback<0>(p);
                      p.setDefaultValue(1.0);
                      data.add(std::move(p));
                  }
                  {
                      parameter::data p("Right Gain", { 0.1, 2.0, 0.01 });
                      registerCallback<1>(p);
                      p.setDefaultValue(1.0);
                      data.add(std::move(p));
                  }
              }
      
              // ==========================| External Data |==========================
              void setExternalData(const ExternalData& data, int index) {}
      
              // ==========================| Handle HISE Event |==========================
              void handleHiseEvent(HiseEvent& e) {}
      
              // ==========================| Modulation Slot |==========================
              // ( first enable isModNode() at the top of script ) 
              /*
                  ModValue modValue;
                  int handleModulation(double& value)
                  {
                      return modValue.getChangedValue(value);
                  }
                  // Creates callback so that altering the 'modValue' var elsewhere will now update the mod slot
                  modValue.setModValue(0);
              */
      
              // processFrame: Needed for compiler, does nothing
              template <typename FrameDataType>
              void processFrame(FrameDataType& data) {}
      
          private:
              AudioEffect leftChannelEffect;
              AudioEffect rightChannelEffect;
          };
      }
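
      A side note on the smoothing used above: the gain is smoothed with HISE's sfloat type (prepare / set / advance). If that part is unclear, here is a minimal standalone sketch of the same idea, a one-pole smoother that ramps the current value towards a target over a chosen time. It's only a conceptual stand-in, not HISE's actual sfloat implementation.

      #include <cmath>

      // Conceptual stand-in for HISE's sfloat: a one-pole parameter smoother.
      struct OnePoleSmoother
      {
          void prepare(double sampleRate, double smoothingMs)
          {
              // Coefficient chosen so the value decays towards the target
              // with a time constant of roughly smoothingMs milliseconds.
              coeff = std::exp(-1.0 / (0.001 * smoothingMs * sampleRate));
          }

          void set(float target) { targetValue = target; }

          // Call once per sample; returns the smoothed value.
          float advance()
          {
              current = targetValue + coeff * (current - targetValue);
              return (float) current;
          }

          double coeff = 0.0;
          double current = 0.0;
          float  targetValue = 0.0f;
      };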
      
      

      (screenshot)

      posted in Blog Entries
      griffinboy
    • [Free Dsp] Oberheim-8 Analog Filter

      Polyphonic 12dB/oct filter with keytracking.

      Created from a linear (non-saturating) analysis of the OB8 lowpass.
      It's not a match at high resonance settings, but it behaves convincingly at low resonance. You are welcome to tune it further by modifying the code.

      CPU usage: ≈1%

      (screenshot)

      Instructions for setup included inside the script.
      The "Eigen" header library is required:
      https://github.com/PX4/eigen

      Download the Filter Script here:
      https://1drv.ms/u/c/6c19b197e5968b36/ER8vuJpevI9Htt_WY8Mqs8sBBJ15n8JLf-MJdSNgwjZX5g?e=zXd7gZ

      Raw Code:

      #pragma once
      #include <JuceHeader.h>
      #include <cmath>
      #include "src\eigen-master\Eigen\Dense"
      
      
      /**
       
      ==============| by GriffinBoy (2025) |==============
      
      This node implements an OB8 2-SVF (State-Variable Filter) with global feedback
      using the Delay-Free method as described in DAFx-2020.
      
      Features:
      - Parameter smoothing via Juce's SmoothedValue class
      - Bilinear (TPT) transform implementation
      - Optimized 4x4 matrix inversion using the Eigen library
      - Recalculates matrices only when parameters change (dirty flags)
      - Keytracking support for cutoff frequency
      - (rough) Resonance compensation for frequency shift
      - (rough) Gain compensation for resonance control
      
      Integration Steps for HISE:
      1. Eigen Setup: Download and place the Eigen library under:
         ProjectName/DspNetworks/ThirdParty/src/eigen-master
      2. Create a 3rd party C++ node in HISE named "Griffin_OBFilter"
      3. Compile the initial DLL in HISE using "Compile dsp networks as .dll"
      4. Replace the generated "Griffin_OBFilter.h" (in ProjectName/DspNetworks/ThirdParty)
         with this header file you are reading now
      5. Re-compile the final DLL in HISE using "Compile dsp networks as .dll" 
      
      Continuous-Time System Definition:
        x1'(t) = -2*r*x1 - x2 - kf*x4 + input
        x2'(t) = x1
        x3'(t) = -2*r*x3 - x4 + x2
        x4'(t) = x3
      (Output is taken from state x4)
      
      Bilinear Transform:
        M = I - g*A,   N = I + g*A, where g = 2 * tan(pi * fc / fs)
        (M is inverted using Eigen's optimized fixed-size matrix inversion)
      
      Optimizations:
      - Precomputed reciprocals and constants
      - Eigen fixed-size matrices for 4x4 operations
      - Dirty flags for matrix recalculation only when parameters change
      
      Author: GriffinBoy (2025)
      License: UwU 
      */
      
      
      
      namespace project
      {
          using namespace juce;
          using namespace hise;
          using namespace scriptnode;
      
          template <int NV> // NV = Number of Voices
          struct Griffin_OBFilter : public data::base
          {
              SNEX_NODE(Griffin_OBFilter);
      
              struct MetadataClass { SN_NODE_ID("Griffin_OBFilter"); };
      
              static constexpr bool isModNode() { return false; }
              static constexpr bool isPolyphonic() { return NV > 1; }
              static constexpr bool hasTail() { return false; }
              static constexpr bool isSuspendedOnSilence() { return false; }
              static constexpr int  getFixChannelAmount() { return 2; }
      
              static constexpr int  NumTables = 0;
              static constexpr int  NumSliderPacks = 0;
              static constexpr int  NumAudioFiles = 0;
              static constexpr int  NumFilters = 0;
              static constexpr int  NumDisplayBuffers = 0;
      
              //=========================================================================
              // Parameters (Raw Values)
              //=========================================================================
              float cutoffFrequency = 1000.0f;
              float resonance = 0.0f;    // Range: [0, 35]
              float keytrackAmount = 1.0f;
              float sampleRate = 44100.0f;
      
              //=========================================================================
              // Smoothing Objects for Per-Sample Smoothing
              //=========================================================================
              SmoothedValue<float> cutoffSmooth;
              SmoothedValue<float> resonanceSmooth;
              SmoothedValue<float> keytrackSmooth;
      
              //=========================================================================
              // Prepare Function
              // - Sets up voices and initializes smoothers.
              //=========================================================================
              void prepare(PrepareSpecs specs)
              {
                  sampleRate = specs.sampleRate;
                  filtersLeft.prepare(specs);
                  filtersRight.prepare(specs);
      
                  for (auto& v : filtersLeft)
                      v.prepare(sampleRate);
                  for (auto& v : filtersRight)
                      v.prepare(sampleRate);
      
                  // Set smoothing time (10ms)
                  cutoffSmooth.reset(sampleRate, 0.01);
                  resonanceSmooth.reset(sampleRate, 0.01);
                  keytrackSmooth.reset(sampleRate, 0.01);
      
                  cutoffSmooth.setCurrentAndTargetValue(cutoffFrequency);
                  resonanceSmooth.setCurrentAndTargetValue(resonance);
                  keytrackSmooth.setCurrentAndTargetValue(keytrackAmount);
              }
      
              void reset()
              {
                  for (auto& v : filtersLeft)
                      v.reset();
                  for (auto& v : filtersRight)
                      v.reset();
              }
      
              //=========================================================================
              // Process Audio
              // - Updates parameters per sample using smoothed values.
              //=========================================================================
              template <typename ProcessDataType>
              void process(ProcessDataType& data)
              {
                  auto& fixData = data.template as<ProcessData<getFixChannelAmount()>>();
                  auto audioBlock = fixData.toAudioBlock();
      
                  float* leftCh = audioBlock.getChannelPointer(0);
                  float* rightCh = audioBlock.getChannelPointer(1);
                  int numSamples = static_cast<int>(data.getNumSamples());
      
                  for (int i = 0; i < numSamples; ++i)
                  {
                      // Get per-sample smoothed parameter values
                      float cVal = cutoffSmooth.getNextValue();
                      float rVal = resonanceSmooth.getNextValue();
                      float kVal = keytrackSmooth.getNextValue();
      
                      // Update each voice with new smoothed parameters
                      for (auto& v : filtersLeft)
                      {
                          v.setCutoff(cVal);
                          v.setResonance(rVal);
                          v.setKeytrack(kVal);
                          v.applyChangesIfNeeded();
                      }
                      for (auto& v : filtersRight)
                      {
                          v.setCutoff(cVal);
                          v.setResonance(rVal);
                          v.setKeytrack(kVal);
                          v.applyChangesIfNeeded();
                      }
      
                      // Process sample for each voice in series
                      float outL = leftCh[i];
                      float outR = rightCh[i];
                      for (auto& v : filtersLeft)
                          outL = v.processSample(outL);
                      for (auto& v : filtersRight)
                          outR = v.processSample(outR);
      
                      leftCh[i] = outL;
                      rightCh[i] = outR;
                  }
              }
      
              template <typename FrameDataType>
              void processFrame(FrameDataType& data) {}
      
              //=========================================================================
              // AudioEffect Class: OB8-Style Voice-Level Filter
              // - Implements the filter logic with Eigen optimizations.
              //=========================================================================
              class AudioEffect
              {
              public:
                  AudioEffect() = default;
      
                  inline void prepare(float fs)
                  {
                      sampleRate = fs;
                      baseCutoff = 1000.0f;
                      resonance = 0.0f;
                      rDamping = 3.4f; // Fixed damping value
                      keytrackAmount = 1.0f;
                      storedNote = 60;
                      reset();
                      dirtyFlags = 0;
                      updateAll(); // Initial matrix calculations
                  }
      
                  inline void reset()
                  {
                      x.setZero();
                  }
      
                  //=====================================================================
                  // Parameter Setting Functions
                  //=====================================================================
                  enum Dirty : uint32_t
                  {
                      changedCutoff = 1 << 0,
                      changedResonance = 1 << 1,
                      changedKeytrack = 1 << 2,
                      changedNote = 1 << 3
                  };
      
                  inline void setCutoff(float c)
                  {
                      baseCutoff = c;
                      dirtyFlags |= changedCutoff;
                  }
                  inline void setResonance(float r)
                  {
                      resonance = r;
                      dirtyFlags |= changedResonance;
                  }
                  inline void setKeytrack(float kt)
                  {
                      keytrackAmount = kt;
                      dirtyFlags |= changedKeytrack;
                  }
                  inline void setNoteNumber(int n)
                  {
                      storedNote = n;
                      dirtyFlags |= changedNote;
                  }
                  inline void applyChangesIfNeeded()
                  {
                      if (dirtyFlags != 0)
                          updateAll();
                  }
      
                  //=====================================================================
                  // Process Sample Function
                  // - Processes a single sample and applies the filter.
                  //=====================================================================
                  inline float processSample(float input)
                  {
                      constexpr float noiseFloor = 0.005f;
                      input += noiseFloor * (noiseGen.nextFloat() - 0.5f); // Add slight noise for stability
      
                      // State update and output calculation (Eigen optimized)
                      Eigen::Vector4f temp = N * x + gB * input;
                      Eigen::Vector4f newX = MInv * temp;
                      float out = C.dot(newX) * gainComp;
                      x = newX;
                      return out;
                  }
      
              private:
                  //=====================================================================
                  // Update All Parameters Function
                  // - Recalculates matrices and inversion when parameters change.
                  //=====================================================================
                  inline void updateAll()
                  {
                      // Keytracking calculation
                      float semitones = (static_cast<float>(storedNote) - 60.0f) * keytrackAmount;
                      float noteFactor = std::exp2f(0.0833333f * semitones);
                      float fc = baseCutoff * noteFactor;
      
                      // Cutoff frequency clamping [20 Hz, 20 kHz]
                      if (fc < 20.0f)
                          fc = 20.0f;
                      if (fc > 20000.0f)
                          fc = 20000.0f;
      
                      // Compensation offset for frequency shift (empirical)
                      float compensationOffset = 0.44f * fc - 30.0f;
                      if (compensationOffset < 0.0f)
                          compensationOffset = 0.0f;
      
                      // Resonance compensation
                      fc -= (resonance / 35.0f) * compensationOffset;
      
                      // Re-clamp cutoff after compensation
                      if (fc < 20.0f)
                          fc = 20.0f;
                      if (fc > 20000.0f)
                          fc = 20000.0f;
      
                      // TPT Warped frequency and g parameter
                      const float fsRecip = 1.0f / sampleRate;
                      const float factor = MathConstants<float>::pi * fc * fsRecip;
                      const float warped = std::tan(factor);
                      g = 2.0f * warped;
      
                      // Matrix construction and inversion
                      buildContinuousTimeSystem();
                      buildDiscreteTimeMatrices();
                      MInv = M.inverse();
                      C = Ccont; // Set output vector
      
                      // Gain Compensation (Resonance dependent)
                      if (dirtyFlags & changedResonance)
                      {
                          gainComp = std::pow(10.0f, (std::sqrt(resonance / 35.0f) * 22.0f) / 20.0f);
                      }
      
                      dirtyFlags = 0; // Clear dirty flags
                  }
      
      
                  //=====================================================================
                  // Build Continuous-Time System Function
                  // - Defines the continuous-time state-space matrices (Acont, Bcont, Ccont).
                  //=====================================================================
                  inline void buildContinuousTimeSystem()
                  {
                      const float twoR = 2.0f * rDamping;
                      Acont.setZero();
                      Bcont.setZero();
                      Ccont.setZero();
      
                      // State equations (matrix A)
                      Acont(0, 0) = -twoR;
                      Acont(0, 1) = -1.0f;
                      Acont(0, 3) = -resonance;
                      Acont(1, 0) = 1.0f;
                      Acont(2, 1) = 1.0f;
                      Acont(2, 2) = -twoR;
                      Acont(2, 3) = -1.0f;
                      Acont(3, 2) = 1.0f;
      
                      // Input matrix B (input to x1')
                      Bcont(0) = 1.0f;
      
                      // Output matrix C (output from x4)
                      Ccont(3) = 1.0f;
                  }
      
                  //=====================================================================
                  // Build Discrete-Time Matrices Function
                  // - Discretizes the continuous-time system using TPT transform (M, N, gB).
                  //=====================================================================
                  inline void buildDiscreteTimeMatrices()
                  {
                      Eigen::Matrix4f gA = g * Acont;
                      M = Eigen::Matrix4f::Identity() - gA;
                      N = Eigen::Matrix4f::Identity() + gA;
                      gB = g * Bcont;
                  }
      
                  //=====================================================================
                  // Member Variables (AudioEffect)
                  //=====================================================================
                  float sampleRate = 44100.0f;
                  float baseCutoff = 1000.0f;
                  float resonance = 0.0f;
                  float rDamping = 3.4f;        // Fixed damping parameter
                  float keytrackAmount = 1.0f;
                  int storedNote = 60;
                  float g = 0.0f;             // Warped frequency parameter
      
                  Eigen::Vector4f x = Eigen::Vector4f::Zero();         // State vector
                  Eigen::Matrix4f Acont = Eigen::Matrix4f::Zero();    // Continuous-time A matrix
                  Eigen::Vector4f Bcont = Eigen::Vector4f::Zero();     // Continuous-time B matrix
                  Eigen::Vector4f Ccont = Eigen::Vector4f::Zero();     // Continuous-time C matrix
                  Eigen::Matrix4f M = Eigen::Matrix4f::Zero();         // Discrete-time M matrix
                  Eigen::Matrix4f N = Eigen::Matrix4f::Zero();         // Discrete-time N matrix
                  Eigen::Matrix4f MInv = Eigen::Matrix4f::Zero();      // Inverted M matrix
                  Eigen::Vector4f gB = Eigen::Vector4f::Zero();        // Discrete-time gB matrix
                  Eigen::Vector4f C = Eigen::Vector4f::Zero();         // Discrete-time C matrix (output)
      
                  float gainComp = 1.0f;          // Gain compensation factor
                  uint32_t dirtyFlags = 0;        // Flags to track parameter changes
                  juce::Random noiseGen;          // Random number generator for noise
              };
      
              //=========================================================================
              // Parameter Setting (Per Sample Update)
              //=========================================================================
              template <int P>
              void setParameter(double val)
              {
                  if (P == 0)
                  {
                      cutoffFrequency = static_cast<float>(val);
                      cutoffSmooth.setTargetValue(cutoffFrequency);
                  }
                  else if (P == 1)
                  {
                      resonance = static_cast<float>(val);
                      if (resonance < 0.0f)
                          resonance = 0.0f;
                      if (resonance > 35.0f)
                          resonance = 35.0f;
                      resonanceSmooth.setTargetValue(resonance);
                  }
                  else if (P == 2)
                  {
                      keytrackAmount = static_cast<float>(val);
                      keytrackSmooth.setTargetValue(keytrackAmount);
                  }
              }
      
              void createParameters(ParameterDataList& data)
              {
                  {
                      parameter::data p("Cutoff Frequency", { 100.0, 20000.0, 1.0 });
                      registerCallback<0>(p);
                      p.setDefaultValue(1000.0f);
                      data.add(std::move(p));
                  }
                  {
                      parameter::data p("Resonance", { 0.0, 35.0, 0.01 });
                      registerCallback<1>(p);
                      p.setDefaultValue(0.0f);
                      data.add(std::move(p));
                  }
                  {
                      parameter::data p("Keytrack Amount", { -1.0, 1.0, 0.5 });
                      registerCallback<2>(p);
                      p.setDefaultValue(0.0f);
                      data.add(std::move(p));
                  }
              }
      
              void setExternalData(const ExternalData& data, int index) {}
      
              //=========================================================================
              // Note Handling
              //=========================================================================
              void handleHiseEvent(HiseEvent& e)
              {
                  if (e.isNoteOn())
                  {
                      filtersLeft.get().setNoteNumber(e.getNoteNumber());
                      filtersLeft.get().applyChangesIfNeeded();
      
                      filtersRight.get().setNoteNumber(e.getNoteNumber());
                      filtersRight.get().applyChangesIfNeeded();
                  }
              }
      
          private:
              PolyData<AudioEffect, NV> filtersLeft;
              PolyData<AudioEffect, NV> filtersRight;
          };
      }
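
      For reference, the per-sample update that processSample performs (matching the comment block at the top of the file) can be written out as:

      $$g = 2\tan\!\left(\pi \frac{f_c}{f_s}\right), \qquad M = I - gA, \qquad N = I + gA$$
      $$x[n] = M^{-1}\left(N\,x[n-1] + gB\,u[n]\right), \qquad y[n] = C^{\top}x[n]\cdot g_{\text{comp}}$$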
      

      *edit: As an extra, I include a resonant ladder filter, which has an extremely wide range for resonance, from ultra soft to spiky. Setup is the same as for the OB filter; this script replaces the same node. Rename the node if you want to use it separately.

      #pragma once
      #include <JuceHeader.h>
      #include <cmath>
      #include "src\eigen-master\Eigen\Dense"
      
      // Werner Filter
      
      namespace project
      {
          using namespace juce;
          using namespace hise;
          using namespace scriptnode;
      
          template <int NV> // NV = number of voices
          struct Griffin_OBFilter : public data::base
          {
              SNEX_NODE(Griffin_OBFilter);
      
              struct MetadataClass
              {
                  SN_NODE_ID("Griffin_OBFilter");
              };
      
              static constexpr bool isModNode() { return false; }
              static constexpr bool isPolyphonic() { return NV > 1; }
              static constexpr bool hasTail() { return false; }
              static constexpr bool isSuspendedOnSilence() { return false; }
              static constexpr int  getFixChannelAmount() { return 2; }
      
              static constexpr int  NumTables = 0;
              static constexpr int  NumSliderPacks = 0;
              static constexpr int  NumAudioFiles = 0;
              static constexpr int  NumFilters = 0;
              static constexpr int  NumDisplayBuffers = 0;
      
              // Outer-level parameters and smoothing objects
              float cutoffFrequency = 1000.0f;
              // Combined resonance/damping control
              float resonance = 0.0f;
              float keytrackAmount = 1.0f;
              float rDamp = 1.06f;  // SVF damping (r)
              float sampleRate = 44100.0f;
      
              SmoothedValue<float> cutoffSmooth;
              SmoothedValue<float> resonanceSmooth;
              SmoothedValue<float> keytrackSmooth;
              SmoothedValue<float> dampingSmooth;
      
              void prepare(PrepareSpecs specs)
              {
                  sampleRate = specs.sampleRate;
                  // Prepare voice-level filters
                  filtersLeft.prepare(specs);
                  filtersRight.prepare(specs);
      
                  for (auto& fl : filtersLeft)
                      fl.prepare(sampleRate);
                  for (auto& fr : filtersRight)
                      fr.prepare(sampleRate);
      
                  // Initialize per-sample smoothing (10ms ramp time)
                  cutoffSmooth.reset(sampleRate, 0.01);
                  resonanceSmooth.reset(sampleRate, 0.01);
                  keytrackSmooth.reset(sampleRate, 0.01);
                  dampingSmooth.reset(sampleRate, 0.01);
      
                  cutoffSmooth.setCurrentAndTargetValue(cutoffFrequency);
                  resonanceSmooth.setCurrentAndTargetValue(resonance);
                  keytrackSmooth.setCurrentAndTargetValue(keytrackAmount);
                  dampingSmooth.setCurrentAndTargetValue(rDamp);
              }
      
              void reset()
              {
                  for (auto& fl : filtersLeft)
                      fl.reset();
                  for (auto& fr : filtersRight)
                      fr.reset();
              }
      
              // Per-sample processing with parameter smoothing
              template <typename ProcessDataType>
              void process(ProcessDataType& data)
              {
                  auto& fixData = data.template as<ProcessData<getFixChannelAmount()>>();
                  auto audioBlock = fixData.toAudioBlock();
      
                  float* leftChannelData = audioBlock.getChannelPointer(0);
                  float* rightChannelData = audioBlock.getChannelPointer(1);
                  int numSamples = static_cast<int>(data.getNumSamples());
      
                  for (int i = 0; i < numSamples; ++i)
                  {
                      // Get per-sample smoothed parameters
                      float cVal = cutoffSmooth.getNextValue();
                      float rVal = resonanceSmooth.getNextValue();
                      float ktVal = keytrackSmooth.getNextValue();
                      float dVal = dampingSmooth.getNextValue();
      
                      // Update all voices with the current smoothed values
                      for (auto& fl : filtersLeft)
                      {
                          fl.setCutoff(cVal);
                          fl.setResonance(rVal);
                          fl.setKeytrack(ktVal);
                          fl.setDamping(dVal);
                          fl.applyChangesIfNeeded();
                      }
                      for (auto& fr : filtersRight)
                      {
                          fr.setCutoff(cVal);
                          fr.setResonance(rVal);
                          fr.setKeytrack(ktVal);
                          fr.setDamping(dVal);
                          fr.applyChangesIfNeeded();
                      }
      
                      // Process the sample for each voice in series
                      float inL = leftChannelData[i];
                      float inR = rightChannelData[i];
      
                      for (auto& fl : filtersLeft)
                          inL = fl.processSample(inL);
                      for (auto& fr : filtersRight)
                          inR = fr.processSample(inR);
      
                      leftChannelData[i] = inL;
                      rightChannelData[i] = inR;
                  }
              }
      
              template <typename FrameDataType>
              void processFrame(FrameDataType& data) {}
      
              // Voice-level effect: Two 2nd-order SVFs + global feedback, EXACT delay-free method
              class AudioEffect
              {
              public:
                  AudioEffect() = default;
      
                  void prepare(float fs)
                  {
                      sampleRate = fs;
                      baseCutoff = 1000.0f;
                      resonance = 0.0f;
                      rDamping = 1.06f;
                      keytrackAmount = 1.0f;
                      storedNote = 60;
      
                      reset();
                      dirtyFlags = 0;
                      updateAll(); // Build A, B, C, compute g, discretize & invert, etc.
                  }
      
                  void reset()
                  {
                      x = Eigen::Vector4f::Zero();
                  }
      
                  // Dirty flag enum for parameter changes
                  enum Dirty : uint32_t
                  {
                      changedCutoff = 1 << 0,
                      changedResonance = 1 << 1,
                      changedDamping = 1 << 2,
                      changedKeytrack = 1 << 3,
                      changedNote = 1 << 4
                  };
      
                  inline void setCutoff(float c)
                  {
                      baseCutoff = c;
                      dirtyFlags |= changedCutoff;
                  }
                  inline void setResonance(float r)
                  {
                      resonance = r;
                      dirtyFlags |= changedResonance;
                  }
                  inline void setDamping(float d)
                  {
                      rDamping = d;
                      dirtyFlags |= changedDamping;
                  }
                  inline void setKeytrack(float kt)
                  {
                      keytrackAmount = kt;
                      dirtyFlags |= changedKeytrack;
                  }
                  inline void setNoteNumber(int n)
                  {
                      storedNote = n;
                      dirtyFlags |= changedNote;
                  }
                  inline void applyChangesIfNeeded()
                  {
                      if (dirtyFlags != 0)
                          updateAll();
                  }
      
                  // Process a single sample using the discrete-time state update
                  inline float processSample(float input)
                  {
                      Eigen::Vector4f temp = N * x + gB * input;
                      Eigen::Vector4f newX = MInv * temp;
                      float out = C.dot(newX) * gainComp;
                      x = newX;
                      return out;
                  }
      
              private:
                  inline void updateAll()
                  {
                      // Compute effective cutoff with keytracking
                      float semitones = (static_cast<float>(storedNote) - 60.0f) * keytrackAmount;
                      float noteFactor = std::exp2f(0.0833333f * semitones);
                      float fc = baseCutoff * noteFactor;
                      if (fc < 20.0f)
                          fc = 20.0f;
                      float limit = 0.49f * sampleRate;
                      if (fc > limit)
                          fc = limit;
      
                      // Compute TPT warp coefficient: g = 2 * tan(pi * (fc / fs))
                      float norm = fc / sampleRate;
                      float warped = std::tan(MathConstants<float>::pi * norm);
                      g = 2.0f * warped;
      
                      // Build continuous-time state-space (Acont, Bcont, Ccont)
                      buildContinuousTimeSystem();
                      // Build discrete-time matrices via TPT: M = I - g*Acont, N = I + g*Acont, and gB = g*Bcont
                      buildDiscreteTimeMatrices();
                      // Invert M using Eigen's fixed-size matrix inversion
                      MInv = M.inverse();
                      // For output, C (discrete-time) equals Ccont
                      C = Ccont;
      
                      // Apply gain compensation: design so that resonance=3 produces an 11 dB boost.
                      gainComp = std::pow(10.0f, (std::sqrt(resonance / 3.0f) * 11.0f) / 20.0f);
      
                      dirtyFlags = 0;
                  }
      
                  inline void buildContinuousTimeSystem()
                  {
                      // Using damping (rDamping) and feedback gain (resonance)
                      const float r = rDamping;
                      const float kf = resonance;
      
                      Acont << -2.0f * r, -1.0f, 0.0f, -kf,
                          1.0f, 0.0f, 0.0f, 0.0f,
                          0.0f, 1.0f, -2.0f * r, -1.0f,
                          0.0f, 0.0f, 1.0f, 0.0f;
                      Bcont << 1.0f, 0.0f, 0.0f, 0.0f;
                      Ccont << 0.0f, 0.0f, 0.0f, 1.0f;
                  }
      
                  inline void buildDiscreteTimeMatrices()
                  {
                      M = Eigen::Matrix4f::Identity() - g * Acont;
                      N = Eigen::Matrix4f::Identity() + g * Acont;
                      gB = g * Bcont;
                  }
      
                  float sampleRate = 44100.0f;
                  float baseCutoff = 1000.0f;
                  float resonance = 0.0f;
                  float rDamping = 1.06f;
                  float keytrackAmount = 1.0f;
                  int storedNote = 60;
                  float g = 0.0f;
                  float gainComp = 1.0f;
                  uint32_t dirtyFlags = 0;
      
                  Eigen::Matrix4f Acont = Eigen::Matrix4f::Zero();
                  Eigen::Vector4f Bcont = Eigen::Vector4f::Zero();
                  Eigen::Vector4f Ccont = Eigen::Vector4f::Zero();
                  Eigen::Matrix4f M = Eigen::Matrix4f::Zero();
                  Eigen::Matrix4f N = Eigen::Matrix4f::Zero();
                  Eigen::Matrix4f MInv = Eigen::Matrix4f::Zero();
                  Eigen::Vector4f gB = Eigen::Vector4f::Zero();
                  Eigen::Vector4f C = Eigen::Vector4f::Zero();
                  Eigen::Vector4f x = Eigen::Vector4f::Zero();
              };
      
              // External parameter setters with combined resonance/damping control.
              template <int P>
              void setParameter(double val)
              {
                  if (P == 0)
                  {
                      cutoffFrequency = static_cast<float>(val);
                      cutoffSmooth.setTargetValue(cutoffFrequency);
                      for (auto& fl : filtersLeft)
                      {
                          fl.setCutoff(cutoffFrequency);
                          fl.applyChangesIfNeeded();
                      }
                      for (auto& fr : filtersRight)
                      {
                          fr.setCutoff(cutoffFrequency);
                          fr.applyChangesIfNeeded();
                      }
                  }
                  else if (P == 1)
                  {
                      float extRes = static_cast<float>(val);
                      // Using a threshold of 1.0 within the control range [0, 1.3]
                      if (extRes >= 1.0f)
                      {
                          float t = (extRes - 1.0f) / 0.3f; // t in [0,1] for extRes in [1.0,1.3]
                          resonance = t * 2.0f;  // Map from 0 to 2.0
                          rDamp = 0.6f;
                      }
                      else
                      {
                          resonance = 0.0f; // Hold resonance at its lowest value
                          // Map extRes in [0,1] to rDamp in [2.0,0.6]
                          rDamp = 0.6f + ((1.0f - extRes) / 1.0f) * (2.0f - 0.6f);
                      }
                      resonanceSmooth.setTargetValue(resonance);
                      dampingSmooth.setTargetValue(rDamp);
                      for (auto& fl : filtersLeft)
                      {
                          fl.setResonance(resonance);
                          fl.setDamping(rDamp);
                          fl.applyChangesIfNeeded();
                      }
                      for (auto& fr : filtersRight)
                      {
                          fr.setResonance(resonance);
                          fr.setDamping(rDamp);
                          fr.applyChangesIfNeeded();
                      }
                  }
                  else if (P == 2)
                  {
                      keytrackAmount = static_cast<float>(val);
                      keytrackSmooth.setTargetValue(keytrackAmount);
                      for (auto& fl : filtersLeft)
                      {
                          fl.setKeytrack(keytrackAmount);
                          fl.applyChangesIfNeeded();
                      }
                      for (auto& fr : filtersRight)
                      {
                          fr.setKeytrack(keytrackAmount);
                          fr.applyChangesIfNeeded();
                      }
                  }
              }
      
              // Parameter definitions for the UI (SVF Damping removed)
              void createParameters(ParameterDataList& data)
              {
                  {
                      parameter::data p("Cutoff Frequency", { 20.0, 20000.0, 1.0 });
                      registerCallback<0>(p);
                      p.setDefaultValue(1000.0f);
                      data.add(std::move(p));
                  }
                  {
                      parameter::data p("Resonance", { 0.0, 1.2, 0.01 });
                      registerCallback<1>(p);
                      p.setDefaultValue(1.0f);
                      data.add(std::move(p));
                  }
                  {
                      parameter::data p("Keytrack Amount", { -1.0, 1.0, 0.01 });
                      registerCallback<2>(p);
                      p.setDefaultValue(0.0f);
                      data.add(std::move(p));
                  }
              }
      
              void setExternalData(const ExternalData& data, int index) {}
      
              // Handle note on events for keytracking
              void handleHiseEvent(HiseEvent& e)
              {
                  if (e.isNoteOn())
                  {
                      filtersLeft.get().setNoteNumber(e.getNoteNumber());
                      filtersLeft.get().applyChangesIfNeeded();
      
                      filtersRight.get().setNoteNumber(e.getNoteNumber());
                      filtersRight.get().applyChangesIfNeeded();
                  }
              }
      
          private:
              PolyData<AudioEffect, NV> filtersLeft;
              PolyData<AudioEffect, NV> filtersRight;
          };
      }
      
      
      posted in C++ Development
      griffinboy
    • [Free dsp] C++ FFT

      For @ustk

      Simple FFT implementation using Juce
      (C++ scriptnode)

      https://1drv.ms/u/c/6c19b197e5968b36/EcVjEd7aayFHhItxr2gISeMBUD15DXs-oPHNg9Os9pYXWA?e=EW0gfm

      By default it's a spectral lowpass filter (it silences FFT bins above the cutoff).
      It has been implemented both in a sampler and in a real-time effect.

      If you want to fully understand how the scripts work, ask ChatGPT to walk you through the code.

      Christoph's own Hise FFT implementation is likely better, but this will be a good starting point if you need a custom implementation. If you want to extend it, I recommend investigating a multi-resolution approach (constant-Q). This simple FFT uses a Hann window at a single resolution, which smears content and transients in general; I'd like to improve that aspect in the future.
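
      If you just want the gist without downloading the script: the spectral lowpass works by windowing a block, transforming it, zeroing the bins above the cutoff, and transforming back. Here is a minimal sketch of that idea using JUCE's FFT and windowing classes; it is not the actual script (the function name is mine) and it leaves out the overlap-add machinery a real-time effect needs.

      #include <JuceHeader.h>
      #include <algorithm>
      #include <cmath>
      #include <complex>
      #include <vector>

      // Conceptual sketch: window one block, take a real FFT, zero every bin
      // above the cutoff, and transform back to the time domain.
      static void spectralLowpassBlock(float* samples, int fftOrder,
                                       double sampleRate, double cutoffHz)
      {
          const int fftSize = 1 << fftOrder;

          juce::dsp::FFT fft(fftOrder);
          juce::dsp::WindowingFunction<float> window((size_t) fftSize,
              juce::dsp::WindowingFunction<float>::hann);

          // The in-place real FFT wants a buffer of 2 * fftSize floats.
          std::vector<float> data(2 * (size_t) fftSize, 0.0f);
          std::copy(samples, samples + fftSize, data.begin());

          window.multiplyWithWindowingTable(data.data(), (size_t) fftSize);
          fft.performRealOnlyForwardTransform(data.data());

          // Treat the result as interleaved complex bins and silence everything
          // above the cutoff bin, plus the mirrored negative-frequency bins, so
          // the spectrum stays conjugate-symmetric.
          auto* bins = reinterpret_cast<std::complex<float>*>(data.data());
          const int cutoffBin = juce::jlimit(1, fftSize / 2,
              (int) std::round(cutoffHz * fftSize / sampleRate));

          for (int b = cutoffBin; b <= fftSize - cutoffBin; ++b)
              bins[b] = { 0.0f, 0.0f };

          fft.performRealOnlyInverseTransform(data.data());
          std::copy(data.begin(), data.begin() + fftSize, samples);
      }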

      posted in C++ Development
      griffinboy
    • [Free Dsp] Lookahead Limiter (true peak)

      Lookahead Limiter designed for low distortion.
      It's not as fancy as Pro-L but it does the trick.
      Oversample it for True Peak limiting.

      Stereo Only
      CPU: (0.2%)

      *Important warning: DAW latency correction!
      In this script the latency changes with the attack time, but in HISE it's common to report a single fixed latency value to the host. The solution is either to report the maximum latency (measured from the maximum attack time), or to leave the attack time fixed so that you can report a single latency value that won't change. You should be fine leaving it at 1-10 ms; a bit of attack (lookahead) is needed for smooth limiting.
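
      As a rough sketch of the bookkeeping (the helper below is hypothetical, not part of Griffin_Limiter.h): the latency you report is simply the lookahead time converted to samples.

      #include <cmath>

      // Hypothetical helper: convert a fixed lookahead/attack time (ms) into the
      // latency, in samples, that the plugin should report to the host.
      static int lookaheadLatencySamples(double attackMs, double sampleRate)
      {
          return (int) std::ceil(attackMs * 0.001 * sampleRate); // e.g. 5 ms @ 48 kHz -> 240
      }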

      Download the script here:

      https://1drv.ms/u/c/6c19b197e5968b36/EQFgIcgl83VJviZoznFZWnEBleCXFfAD-KejdhTV9Q6IwQ?e=EyH8fM

      Setup tutorial:

      1. Create a 3rd party C++ node in HISE named "Griffin_Limiter"
      2. Compile the initial DLL in HISE using "Compile dsp networks as .dll"
      3. Replace the generated "Griffin_Limiter.h" (located in ProjectName/DspNetworks/ThirdParty) with my limiter header file
      4. Re-compile the final DLL in HISE using "Compile dsp networks as .dll"
      

      Ignore the audio artefacts in the video. My encoder is being a fool. The sound quality of the dsp is fine.

      You can read the script here:
      Griffin_Limiter.h:

      #pragma once
      #include <JuceHeader.h>
      #include <cstdint>
      #include <cmath>
      #include <vector>
      #include <algorithm>
      #include <limits>
      #include <cstring>
      
      /**
      
      ==============| by GriffinBoy (2025) |==============
      
      This node implements a Lookahead Stereo Limiter
      
      Features:
      - ExpSmootherCascade: Four-stage exponential smoothing with separate attack and release coefficients.
      - PeakHoldCascade: Eight-stage peak detection cascade for accurate envelope following.
      - Integrated lookahead
      - Parameter smoothing 
      
      Integration Steps for HISE:
      1. Create a 3rd party C++ node in HISE named "Griffin_Limiter"
      2. Compile the initial DLL in HISE using "Compile dsp networks as .dll"
      3. Replace the generated "Griffin_Limiter.h" (located in ProjectName/DspNetworks/ThirdParty) with this header file you are reading now
      4. Re-compile the final DLL in HISE using "Compile dsp networks as .dll"
      
      Author: GriffinBoy (2025)
      License: UwU
      
      Based on the following paper:
      Sanfilippo, D. (2022). Envelope following via cascaded exponential smoothers for low-distortion peak limiting and maximisation. In Proceedings of the International Faust Conference, Saint-Étienne, France.
      
      */
      
      
      namespace project
      {
      
      #ifndef M_PI
      #define M_PI 3.14159265358979323846
      #endif
      
          using namespace juce;
          using namespace hise;
          using namespace scriptnode;
      
          namespace FunctionsClasses {
      
              //------------------------------------------------------------------------------
              // DelaySmooth: A delay line with smooth crossfading between two integer delay
              // lines to produce a click-free, Doppler-free delay variation.
              // Fixed types: head = uint16_t and sample type = float.
              class DelaySmooth {
              private:
                  const size_t bufferLen = 2ul << (8 * sizeof(uint16_t) - 1); 
                  size_t delay = 0;
                  size_t interpolationTime = 1024;
                  size_t lowerDelay = 0;
                  size_t upperDelay = 0;
                  float interpolation = 0.0f;
                  float interpolationStep = 1.0f / float(interpolationTime);
                  float increment = interpolationStep;
                  uint16_t lowerReadPtr = 0;
                  uint16_t upperReadPtr = 0;
                  uint16_t writePtr = 0;
                  std::vector<float> bufferLeft;
                  std::vector<float> bufferRight;
              public:
                  void SetDelay(size_t _delay) { delay = _delay; }
                  void SetInterpolationTime(size_t _interpTime) {
                      interpolationTime = std::max<size_t>(1, _interpTime);
                      interpolationStep = 1.0f / float(interpolationTime);
                  }
                  void Reset() {
                      std::fill(bufferLeft.begin(), bufferLeft.end(), 0.0f);
                      std::fill(bufferRight.begin(), bufferRight.end(), 0.0f);
                  }
                  void Process(float** xVec, float** yVec, size_t vecLen) {
                      float* xLeft = xVec[0];
                      float* xRight = xVec[1];
                      float* yLeft = yVec[0];
                      float* yRight = yVec[1];
                      for (size_t n = 0; n < vecLen; n++) {
                          bufferLeft[writePtr] = xLeft[n];
                          bufferRight[writePtr] = xRight[n];
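                          // Branchless crossfade control: the bools below are used as 0/1 array
                          // indices to pick the next interpolation increment and to latch the new
                          // delay target, avoiding conditional branches in the inner loop.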
                          bool lowerReach = (interpolation == 0.0f);
                          bool upperReach = (interpolation == 1.0f);
                          bool lowerDelayChanged = (delay != lowerDelay);
                          bool upperDelayChanged = (delay != upperDelay);
                          bool startDownwardInterp = (upperReach && upperDelayChanged);
                          bool startUpwardInterp = (lowerReach && lowerDelayChanged);
                          float incrementPathsUp[2] = { increment, interpolationStep };
                          float incrementPathsDown[2] = { incrementPathsUp[startUpwardInterp], -interpolationStep };
                          increment = incrementPathsDown[startDownwardInterp];
                          size_t lowerDelayPaths[2] = { lowerDelay, delay };
                          size_t upperDelayPaths[2] = { upperDelay, delay };
                          lowerDelay = lowerDelayPaths[upperReach];
                          upperDelay = upperDelayPaths[lowerReach];
                          // Use explicit bitwise AND for modulo (bufferLen = 65536)
                          lowerReadPtr = static_cast<uint16_t>(writePtr - lowerDelay) & 0xFFFF;
                          upperReadPtr = static_cast<uint16_t>(writePtr - upperDelay) & 0xFFFF;
                          writePtr++;
                          interpolation = std::max(0.0f, std::min(1.0f, interpolation + increment));
                          yLeft[n] = interpolation * (bufferLeft[upperReadPtr] - bufferLeft[lowerReadPtr]) + bufferLeft[lowerReadPtr];
                          yRight[n] = interpolation * (bufferRight[upperReadPtr] - bufferRight[lowerReadPtr]) + bufferRight[lowerReadPtr];
                      }
                  }
                  DelaySmooth() {
                      bufferLeft.resize(bufferLen);
                      bufferRight.resize(bufferLen);
                  }
                  DelaySmooth(size_t _delay, size_t _interpTime) {
                      bufferLeft.resize(bufferLen);
                      bufferRight.resize(bufferLen);
                      delay = _delay;
                      interpolationTime = std::max<size_t>(1, _interpTime);
                      interpolationStep = 1.0f / float(interpolationTime);
                  }
              };
      
              //------------------------------------------------------------------------------
              // ExpSmootherCascade: Cascaded one-pole exponential smoothers (4 stages)
              // with separate attack and release coefficients.
              class ExpSmootherCascade {
              private:
                  static constexpr size_t stages = 4;
                  const float coeffCorrection = 1.0f / std::sqrt(std::pow(2.0f, 1.0f / float(stages)) - 1.0f);
                  const float epsilon = std::numeric_limits<float>::epsilon();
                  float SR = 48000.0f;
                  float T = 1.0f / SR;
                  const float twoPiC = 2.0f * static_cast<float>(M_PI) * coeffCorrection;
                  float twoPiCT = twoPiC * T;
                  float attTime = 0.001f;
                  float relTime = 0.01f;
                  float attCoeff = std::exp(-twoPiCT / attTime);
                  float relCoeff = std::exp(-twoPiCT / relTime);
                  float coeff[2] = { relCoeff, attCoeff };
                  float output[stages] = { 0.0f, 0.0f, 0.0f, 0.0f };
              public:
                  void SetSR(float _SR) {
                      SR = std::max(_SR, 1.0f);
                      T = 1.0f / SR;
                      twoPiCT = twoPiC * T;
                  }
                  void SetAttTime(float _attTime) {
                      attTime = std::max(epsilon, _attTime);
                      attCoeff = std::exp(-twoPiCT / attTime);
                      coeff[1] = attCoeff;
                  }
                  void SetRelTime(float _relTime) {
                      relTime = std::max(epsilon, _relTime);
                      relCoeff = std::exp(-twoPiCT / relTime);
                      coeff[0] = relCoeff;
                  }
                  void Reset() { std::memset(output, 0, sizeof(output)); }
                  void Process(float* xVec, float* yVec, size_t vecLen) {
                      for (size_t n = 0; n < vecLen; n++) {
                          float input = xVec[n];
                          // Unrolled stage 0
                          bool isAttackPhase = (input > output[0]);
                          float coeffVal = isAttackPhase ? attCoeff : relCoeff;
                          output[0] = input + coeffVal * (output[0] - input);
                          input = output[0];
                          // Unrolled stage 1
                          isAttackPhase = (input > output[1]);
                          coeffVal = isAttackPhase ? attCoeff : relCoeff;
                          output[1] = input + coeffVal * (output[1] - input);
                          input = output[1];
                          // Unrolled stage 2
                          isAttackPhase = (input > output[2]);
                          coeffVal = isAttackPhase ? attCoeff : relCoeff;
                          output[2] = input + coeffVal * (output[2] - input);
                          input = output[2];
                          // Unrolled stage 3
                          isAttackPhase = (input > output[3]);
                          coeffVal = isAttackPhase ? attCoeff : relCoeff;
                          output[3] = input + coeffVal * (output[3] - input);
                          yVec[n] = output[3];
                      }
                  }
                  ExpSmootherCascade() {}
                  ExpSmootherCascade(float _SR, float _attTime, float _relTime) {
                      SR = std::max(_SR, 1.0f);
                      T = 1.0f / SR;
                      twoPiCT = twoPiC * T;
                      attTime = std::max(epsilon, _attTime);
                      relTime = std::max(epsilon, _relTime);
                      attCoeff = std::exp(-twoPiCT / attTime);
                      relCoeff = std::exp(-twoPiCT / relTime);
                      coeff[0] = relCoeff;
                      coeff[1] = attCoeff;
                  }
              };
      
              //------------------------------------------------------------------------------
              // PeakHoldCascade: Cascaded peak-holders (8 stages fixed) to approximate a max filter.
              // The cascade splits the hold time to detect secondary peaks.
              class PeakHoldCascade {
              private:
                  static constexpr size_t stages = 8;
                  float SR = 48000.0f;
                  float holdTime = 0.001f;
                  const float oneOverStages = 1.0f / float(stages);
                  size_t holdTimeSamples = std::rint(holdTime * oneOverStages * SR);
                  size_t timer[stages];
                  float output[stages];
              public:
                  void SetSR(float _SR) {
                      SR = std::max(_SR, 1.0f);
                      holdTimeSamples = std::rint(holdTime * oneOverStages * SR);
                  }
                  void SetHoldTime(float _holdTime) {
                      holdTime = std::max(0.0f, _holdTime);
                      holdTimeSamples = std::rint(holdTime * oneOverStages * SR);
                  }
                  void Reset() {
                      std::memset(timer, 0, sizeof(timer));
                      std::memset(output, 0, sizeof(output));
                  }
                  void Process(float* xVec, float* yVec, size_t vecLen) {
                      for (size_t n = 0; n < vecLen; n++) {
                          float input = std::fabs(xVec[n]);
                          bool release;
                          // Unrolled stage 0
                          release = (input >= output[0]) || (timer[0] >= holdTimeSamples);
                          timer[0] = release ? 0 : (timer[0] + 1);
                          output[0] = release ? input : output[0];
                          input = output[0];
                          // Unrolled stage 1
                          release = (input >= output[1]) || (timer[1] >= holdTimeSamples);
                          timer[1] = release ? 0 : (timer[1] + 1);
                          output[1] = release ? input : output[1];
                          input = output[1];
                          // Unrolled stage 2
                          release = (input >= output[2]) || (timer[2] >= holdTimeSamples);
                          timer[2] = release ? 0 : (timer[2] + 1);
                          output[2] = release ? input : output[2];
                          input = output[2];
                          // Unrolled stage 3
                          release = (input >= output[3]) || (timer[3] >= holdTimeSamples);
                          timer[3] = release ? 0 : (timer[3] + 1);
                          output[3] = release ? input : output[3];
                          input = output[3];
                          // Unrolled stage 4
                          release = (input >= output[4]) || (timer[4] >= holdTimeSamples);
                          timer[4] = release ? 0 : (timer[4] + 1);
                          output[4] = release ? input : output[4];
                          input = output[4];
                          // Unrolled stage 5
                          release = (input >= output[5]) || (timer[5] >= holdTimeSamples);
                          timer[5] = release ? 0 : (timer[5] + 1);
                          output[5] = release ? input : output[5];
                          input = output[5];
                          // Unrolled stage 6
                          release = (input >= output[6]) || (timer[6] >= holdTimeSamples);
                          timer[6] = release ? 0 : (timer[6] + 1);
                          output[6] = release ? input : output[6];
                          input = output[6];
                          // Unrolled stage 7
                          release = (input >= output[7]) || (timer[7] >= holdTimeSamples);
                          timer[7] = release ? 0 : (timer[7] + 1);
                          output[7] = release ? input : output[7];
                          yVec[n] = output[7];
                      }
                  }
                  PeakHoldCascade() { Reset(); }
                  PeakHoldCascade(float _SR, float _holdTime) {
                      SR = std::max(_SR, 1.0f);
                      holdTime = _holdTime;
                      holdTimeSamples = std::rint(holdTime * oneOverStages * SR);
                      Reset();
                  }
              };
      
              //------------------------------------------------------------------------------
              // Limiter: Lookahead peak-limiter combining DelaySmooth, PeakHoldCascade, and
              // ExpSmootherCascade. Computes a stereo envelope and applies a gain to keep
              // signal peaks below a given threshold.
              template<typename real>
              class Limiter {
              private:
                  float SR = 48000.0f;
                  float T = 1.0f / SR;
                  const float twoPi = 2.0f * static_cast<float>(M_PI);
                  const float epsilon = std::numeric_limits<float>::epsilon();
                  const float smoothParamCutoff = 20.0f;
                  float attack = 0.01f;
                  float hold = 0.0f;
                  float release = 0.05f;
                  float dBThreshold = -6.0f;
                  float linThreshold = std::pow(10.0f, dBThreshold * 0.05f);
                  float dBPreGain = 0.0f;
                  float linPreGain = 1.0f;
                  float smoothPreGain = 0.0f;
                  float smoothThreshold = 0.0f;
                  float smoothParamCoeff = std::exp(-twoPi * smoothParamCutoff * T);
                  size_t lookaheadDelay = 0;
                  DelaySmooth delay;
                  static constexpr size_t numberOfPeakHoldSections = 8;
                  static constexpr size_t numberOfSmoothSections = 4;
                  const float oneOverPeakSections = 1.0f / float(numberOfPeakHoldSections);
                  PeakHoldCascade peakHolder;
                  ExpSmootherCascade expSmoother;
              public:
                  void SetSR(float _SR) {
                      SR = std::max(_SR, 1.0f);
                      T = 1.0f / SR;
                      smoothParamCoeff = std::exp(-twoPi * smoothParamCutoff * T);
                      peakHolder.SetSR(SR);
                      expSmoother.SetSR(SR);
                  }
                  void SetAttTime(float _attack) {
                      attack = std::max(epsilon, _attack);
                      lookaheadDelay = std::rint(attack * oneOverPeakSections * SR) * numberOfPeakHoldSections;
                      delay.SetDelay(lookaheadDelay);
                      delay.SetInterpolationTime(lookaheadDelay);
                      expSmoother.SetAttTime(attack);
                      peakHolder.SetHoldTime(attack + hold);
                  }
                  void SetHoldTime(float _hold) {
                      hold = std::max(0.0f, _hold);
                      peakHolder.SetHoldTime(attack + hold);
                  }
                  void SetRelTime(float _release) {
                      release = std::max(epsilon, _release);
                      expSmoother.SetRelTime(release);
                  }
                  void SetThreshold(float _threshold) {
                      dBThreshold = std::max(-120.0f, _threshold);
                      linThreshold = std::pow(10.0f, dBThreshold * 0.05f);
                  }
                  void SetPreGain(float _preGain) {
                      dBPreGain = _preGain;
                      linPreGain = std::pow(10.0f, dBPreGain * 0.05f);
                  }
                  void Reset() {
                      delay.Reset();
                      peakHolder.Reset();
                      expSmoother.Reset();
                  }
                  // Process takes separate input and output stereo buffers.
                  // The design uses an in-place delay so the input buffer is overwritten.
                  void Process(float** xVec, float** yVec, size_t vecLen) {
                      // Get channel pointers once outside loops.
                      float* xLeft = xVec[0];
                      float* xRight = xVec[1];
                      float* yLeft = yVec[0];
                      float* yRight = yVec[1];
                      // Merge pre-gain smoothing and envelope computation.
                      for (size_t n = 0; n < vecLen; n++) {
                          smoothPreGain = linPreGain + smoothParamCoeff * (smoothPreGain - linPreGain);
                          xLeft[n] *= smoothPreGain;
                          xRight[n] *= smoothPreGain;
                          yLeft[n] = std::max(std::fabs(xLeft[n]), std::fabs(xRight[n]));
                      }
                      // Process envelope with peak-hold cascade.
                      peakHolder.Process(yLeft, yLeft, vecLen);
                      // Smooth and clip envelope to threshold.
                      for (size_t n = 0; n < vecLen; n++) {
                          smoothThreshold = linThreshold + smoothParamCoeff * (smoothThreshold - linThreshold);
                          yLeft[n] = std::max(yLeft[n], smoothThreshold);
                          yRight[n] = smoothThreshold;
                      }
                      // Smooth envelope with exponential cascade.
                      expSmoother.Process(yLeft, yLeft, vecLen);
                      // Apply lookahead delay (in-place).
                      delay.Process(xVec, xVec, vecLen);
                      // Compute attenuation gain and apply to delayed signal in one loop.
                      for (size_t n = 0; n < vecLen; n++) {
                          float gain = yRight[n] / yLeft[n];
                          yLeft[n] = gain * xLeft[n];
                          yRight[n] = gain * xRight[n];
                      }
                  }
                  Limiter() {}
                  Limiter(float _SR, float _dBPreGain, float _attack, float _hold, float _release, float _dBThreshold) {
                      SR = std::max(_SR, 1.0f);
                      dBPreGain = _dBPreGain;
                      attack = std::max(epsilon, _attack);
                      hold = std::max(0.0f, _hold);
                      release = std::max(epsilon, _release);
                      dBThreshold = std::max(-120.0f, _dBThreshold);
                  }
              };
      
          } // end namespace FunctionsClasses
      
          //------------------------------------------------------------------------------
          // SNEX Node - Stereo Limiter Node Implementation
          //------------------------------------------------------------------------------
          template <int NV>
          struct Griffin_Limiter : public data::base
          {
              SNEX_NODE(Griffin_Limiter);
      
              struct MetadataClass
              {
                  SN_NODE_ID("Griffin_Limiter");
              };
      
              // Node Properties 
              static constexpr bool isModNode() { return false; }
              static constexpr bool isPolyphonic() { return NV > 1; }
              static constexpr bool hasTail() { return false; }
              static constexpr bool isSuspendedOnSilence() { return false; }
              static constexpr int getFixChannelAmount() { return 2; }
      
              static constexpr int NumTables = 0;
              static constexpr int NumSliderPacks = 0;
              static constexpr int NumAudioFiles = 0;
              static constexpr int NumFilters = 0;
              static constexpr int NumDisplayBuffers = 0;
      
              // Create an instance of our DSP Limiter 
              FunctionsClasses::Limiter<float> limiter;
      
              // Scratch buffers to avoid per-block allocation.
          private:
              std::vector<float> scratchInLeft, scratchInRight;
              std::vector<float> scratchOutLeft, scratchOutRight;
          public:
              //--------------------------------------------------------------------------
              // Main Processing Functions
              //--------------------------------------------------------------------------
              void prepare(PrepareSpecs specs)
              {
                  float sampleRate = specs.sampleRate;
                  limiter.SetSR(sampleRate);
                  limiter.Reset();
                  // Preallocate scratch buffers (use maximum BlockSize if available, otherwise default to 512).
                  int blockSize = (specs.blockSize > 0) ? specs.blockSize : 512;
                  scratchInLeft.resize(blockSize);
                  scratchInRight.resize(blockSize);
                  scratchOutLeft.resize(blockSize);
                  scratchOutRight.resize(blockSize);
              }
      
              void reset() {}
      
              template <typename ProcessDataType>
              inline void process(ProcessDataType& data)
              {
                  auto& fixData = data.template as<ProcessData<getFixChannelAmount()>>();
                  auto audioBlock = fixData.toAudioBlock();
                  float* leftChannelData = audioBlock.getChannelPointer(0);
                  float* rightChannelData = audioBlock.getChannelPointer(1);
                  int numSamples = data.getNumSamples();
      
                  juce::FloatVectorOperations::copy(scratchInLeft.data(), leftChannelData, numSamples);
                  juce::FloatVectorOperations::copy(scratchInRight.data(), rightChannelData, numSamples);
      
                  float* inBuffers[2] = { scratchInLeft.data(), scratchInRight.data() };
                  float* outBuffers[2] = { scratchOutLeft.data(), scratchOutRight.data() };
      
                  limiter.Process(inBuffers, outBuffers, numSamples);
      
                  juce::FloatVectorOperations::copy(leftChannelData, scratchOutLeft.data(), numSamples);
                  juce::FloatVectorOperations::copy(rightChannelData, scratchOutRight.data(), numSamples);
              }
      
              //--------------------------------------------------------------------------
              // Parameter Handling
              //--------------------------------------------------------------------------
              template <int P>
              void setParameter(double v)
              {
                  if (P == 0) {
                      limiter.SetPreGain(static_cast<float>(v));
                  }
                  else if (P == 1) {
                      // Convert from ms to seconds.
                      limiter.SetAttTime(static_cast<float>(v * 0.001));
                  }
                  else if (P == 2) {
                      limiter.SetHoldTime(static_cast<float>(v * 0.001));
                  }
                  else if (P == 3) {
                      limiter.SetRelTime(static_cast<float>(v * 0.001));
                  }
                  else if (P == 4) {
                      limiter.SetThreshold(static_cast<float>(v));
                  }
              }
      
              void createParameters(ParameterDataList& data)
              {
                  {
                      parameter::data p("PreGain (dB)", { -24.0, 24.0, 0.1 });
                      registerCallback<0>(p);
                      p.setDefaultValue(0.0);
                      data.add(std::move(p));
                  }
                  {
                      parameter::data p("Attack (ms)", { 1.0, 500.0, 1.0 });
                      registerCallback<1>(p);
                      p.setDefaultValue(10.0);
                      p.setSkewForCentre(80.0f);
                      data.add(std::move(p));
                  }
                  {
                      parameter::data p("Hold (ms)", { 0.0, 100.0, 1.0 });
                      registerCallback<2>(p);
                      p.setDefaultValue(5.0);
                      p.setSkewForCentre(30.0f);
                      data.add(std::move(p));
                  }
                  {
                      parameter::data p("Release (ms)", { 1.0, 2500.0, 1.0 });
                      registerCallback<3>(p);
                      p.setDefaultValue(80.0);
                      p.setSkewForCentre(800.0f);
                      data.add(std::move(p));
                  }
                  {
                      parameter::data p("Ceiling (dB)", { -60.0, 0.0, 0.1 });
                      registerCallback<4>(p);
                      p.setDefaultValue(-6.0);
                      data.add(std::move(p));
                  }
              }
      
              void setExternalData(const ExternalData& ed, int index)
              {
                  // Not needed.
              }
      
              void handleHiseEvent(HiseEvent& e)
              {
                  // Not needed.
              }
      
              template <typename FrameDataType>
              void processFrame(FrameDataType& data)
              {
                  // Not needed.
              }
          };
      
      } // end namespace project
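
      If you want to poke at the DSP outside of scriptnode, here's a minimal usage sketch of the Limiter class above. The prepareLimiter / processBlock wrappers and the parameter values are placeholders for illustration only; the real node drives the same setters from its parameter callbacks.

      // Minimal usage sketch (assumes the header above compiles in your JUCE project).
      project::FunctionsClasses::Limiter<float> limiter;

      void prepareLimiter(float sampleRate)
      {
          limiter.SetSR(sampleRate);
          limiter.SetPreGain(0.0f);     // dB
          limiter.SetAttTime(0.010f);   // seconds (the node converts ms -> s)
          limiter.SetHoldTime(0.005f);  // seconds
          limiter.SetRelTime(0.080f);   // seconds
          limiter.SetThreshold(-6.0f);  // ceiling in dB
          limiter.Reset();
      }

      void processBlock(float** stereoIn, float** stereoOut, int numSamples)
      {
          // Process() overwrites the input buffers (in-place lookahead delay),
          // so hand it scratch copies if you still need the dry signal afterwards.
          limiter.Process(stereoIn, stereoOut, static_cast<size_t>(numSamples));
      }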
      
      posted in C++ Development
      griffinboyG
      griffinboy
    • RE: What is the process for writing my own module (not scriptnode)

      @Christoph-Hart

      Thanks, I'm already making use of the new Global Cable feature!
      It works great.

      Here we are sending the drawing straight from Hise into a c++ node for playback!

      posted in C++ Development
      griffinboyG
      griffinboy
    • RE: About C++ Nodes

      @spider

      Okay I solved it.
      I think I'll make a whole video about this process later for any noobs like myself.

      posted in ScriptNode
      griffinboyG
      griffinboy
    • [Free Dsp] Unfinished Self Oscillating Filter

      A work in progress analog filter.

      • Requires oversampling or the high frequencies will alias. You can hear this in the video when the cutoff goes really high.

      • This node uses a lot of CPU (2%-13%) and has an inaccurate cutoff frequency.

      I'm releasing it now because I probably won't give out the code when it's finished.
      However, I thought it might be fun for someone to mess with or expand on.

      Although it's a flawed design, you can replace the nonlinearity with any distortion element. You can also tap out audio from any of the stages. Here we tap out of the final stage, but you may take audio from anywhere in the circuit matrix... So there are at least a few things to play with in the design.
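
      As a rough idea of what that swap looks like: the sketch below replaces the atan lookup used in the headers further down with the fast tanh helper from the same files. The name tanhTransistorNonlinearity and the drive handling are just illustrative, not part of the released code; you'd repoint the four nonlinearity calls inside processSample at whichever shaper you want to try.

      // Hypothetical drop-in for transistorHelpers::advancedTransistorNonlinearity.
      // Any bounded, roughly odd-symmetric shaper is fine to experiment with here.
      namespace transistorHelpers {
          inline float tanhTransistorNonlinearity(float x, float drive)
          {
              // TanhHelper::tanh is the fast approximation defined in the headers below.
              return TanhHelper::tanh(drive * x);
          }
      }
      // Note: the 1 - nl * nl terms in the Newton-Raphson Jacobian are the exact derivative
      // only for tanh(x); for other shapers (or drive != 1) you may want to substitute the
      // shaper's true derivative, otherwise convergence just gets a little sloppier.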



      Griffin_Growl_Stereo.h

      #pragma once
      #include <JuceHeader.h>
      #include <cmath>
      #include <array>
      #include <algorithm>
      
      #ifndef M_PI
      #define M_PI 3.14159265358979323846
      #endif
      
      #ifndef NOISE_FEEDBACK_VOLUME
      #define NOISE_FEEDBACK_VOLUME 0.008f
      #endif
      
      #ifndef INPUT_SIGNAL_THRESHOLD
      #define INPUT_SIGNAL_THRESHOLD 1e-6f
      #endif
      
      // Fast tanh approximation
      struct TanhHelper {
          static inline float tanh(float x) {
              float x2 = x * x;
              float sh = x * (1.f + x2 * (1.f / 6.f + x2 * (1.f / 120.f)));
              return sh / std::sqrt(1.f + sh * sh);
          }
      };
      
      // Transistor nonlinearity lookup table for (2/M_PI)*atan(7.5*x)
      namespace transistorHelpers {
          inline float lookupAdvancedTransistorNonlinearity(float x) {
              static const int TABLE_SIZE = 4096;
              static const float maxInput = 2.0f;
              static const float invStep = (TABLE_SIZE - 1) / (2.0f * maxInput);
              static float lookupTable[TABLE_SIZE];
              static bool tableInitialized = false;
              if (!tableInitialized) {
                  const float step = 2.0f * maxInput / (TABLE_SIZE - 1);
                  for (int i = 0; i < TABLE_SIZE; i++) {
                      float xi = -maxInput + i * step;
                      lookupTable[i] = (2.0f / M_PI) * std::atan(7.5f * xi);
                  }
                  tableInitialized = true;
              }
              float clampedX = std::clamp(x, -maxInput, maxInput);
              float index = (clampedX + maxInput) * invStep;
               int indexInt = std::min(static_cast<int>(index), TABLE_SIZE - 2); // keep indexInt + 1 inside the table
              float frac = index - indexInt;
              return lookupTable[indexInt] * (1.f - frac) + lookupTable[indexInt + 1] * frac;
          }
          inline float advancedTransistorNonlinearity(float x, float /*drive*/) {
              return lookupAdvancedTransistorNonlinearity(x);
          }
      }
      
      namespace project {
          using namespace juce;
          using namespace hise;
          using namespace scriptnode;
      
          // Stereo filter using Newton-Raphson iteration.
          // Caches the filter coefficient (cachedBaseG) and maintains separate states for left/right channels.
          class JunoFilterStereo {
          public:
              JunoFilterStereo()
                  : cutoff(1000.f), resonance(1.f), drive(7.5f), sr(44100.0),
                  errorThresh(0.000001f), cachedBaseG(std::tan(1000.f * M_PI / 44100.0))
              {
                  for (int i = 0; i < 4; ++i) {
                      yL[i] = 0.f;
                      yR[i] = 0.f;
                  }
                  sL[0] = sL[1] = sL[2] = sL[3] = 0.f;
                  sR[0] = sR[1] = sR[2] = sR[3] = 0.f;
              }
              inline void setCutoff(float c) {
                  if (cutoff != c) {
                      cutoff = c;
                      cachedBaseG = std::tan(cutoff * M_PI / sr);
                  }
              }
              inline void setResonance(float r) { resonance = r; }
              inline void setDrive(float) { drive = 7.5f; }
              inline void prepare(double sr_) {
                  sr = sr_;
                  cachedBaseG = std::tan(cutoff * M_PI / sr);
              }
              inline void reset() {
                  for (int i = 0; i < 4; ++i) {
                      yL[i] = 0.f;
                      yR[i] = 0.f;
                  }
                  sL[0] = sL[1] = sL[2] = sL[3] = 0.f;
                  sR[0] = sR[1] = sR[2] = sR[3] = 0.f;
              }
              // Process one stereo sample; returns {left, right}.
              inline std::pair<float, float> processSample(float inL, float inR) {
                  const float g = cachedBaseG;
                  // Left channel
                  const float noiseL = (NOISE_FEEDBACK_VOLUME > 0.f && std::abs(inL) > INPUT_SIGNAL_THRESHOLD) ?
                      NOISE_FEEDBACK_VOLUME * (randGen.nextFloat() * 2.f - 1.f) : 0.f;
                  for (int iter = 0; iter < 20; ++iter) {
                      const float prev_yL3 = yL[3];
                      float nl0 = transistorHelpers::advancedTransistorNonlinearity(inL - yL[0] - resonance * yL[3] + noiseL, drive);
                      float nl1 = transistorHelpers::advancedTransistorNonlinearity(yL[0] - yL[1], drive);
                      float nl2 = transistorHelpers::advancedTransistorNonlinearity(yL[1] - yL[2], drive);
                      float nl3 = transistorHelpers::advancedTransistorNonlinearity(yL[2] - yL[3], drive);
                      float f0 = g * nl0 + sL[0] - yL[0];
                      float f1 = g * nl1 + sL[1] - yL[1];
                      float f2 = g * nl2 + sL[2] - yL[2];
                      float f3 = g * nl3 + sL[3] - yL[3];
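                       // Newton-Raphson step: h* approximate the shaper's derivative (1 - nl^2),
                       // j* are the nonzero entries of the 4x4 Jacobian (one band per stage plus
                       // the resonance feedback term j03), and den is its determinant.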
                      float h0 = 1.f - nl0 * nl0;
                      float h1 = 1.f - nl1 * nl1;
                      float h2 = 1.f - nl2 * nl2;
                      float h3 = 1.f - nl3 * nl3;
                      float j00 = -g * h0 - 1.f;
                      float j03 = -g * resonance * h0;
                      float j10 = g * h1;
                      float j11 = -g * h1 - 1.f;
                      float j21 = g * h2;
                      float j22 = -g * h2 - 1.f;
                      float j32 = g * h3;
                      float j33 = -g * h3 - 1.f;
                      float den = j00 * j11 * j22 * j33 - j03 * j10 * j21 * j32;
                      yL[0] += (f1 * j03 * j21 * j32 - f0 * j11 * j22 * j33 - f2 * j03 * j11 * j32 + f3 * j03 * j11 * j22) / den;
                      yL[1] += (f0 * j10 * j22 * j33 - f1 * j00 * j22 * j33 + f2 * j03 * j10 * j32 - f3 * j03 * j10 * j22) / den;
                      yL[2] += (f1 * j00 * j21 * j33 - f0 * j10 * j21 * j33 - f2 * j00 * j11 * j33 + f3 * j03 * j10 * j21) / den;
                      yL[3] += (f0 * j10 * j21 * j32 - f1 * j00 * j21 * j32 + f2 * j00 * j11 * j32 - f3 * j00 * j11 * j22) / den;
                      if (std::abs(yL[3] - prev_yL3) <= errorThresh)
                          break;
                  }
                  sL[0] = 2.f * yL[0] - sL[0];
                  sL[1] = 2.f * yL[1] - sL[1];
                  sL[2] = 2.f * yL[2] - sL[2];
                  sL[3] = 2.f * yL[3] - sL[3];
                  // Right channel
                  const float noiseR = (NOISE_FEEDBACK_VOLUME > 0.f && std::abs(inR) > INPUT_SIGNAL_THRESHOLD) ?
                      NOISE_FEEDBACK_VOLUME * (randGen.nextFloat() * 2.f - 1.f) : 0.f;
                  for (int iter = 0; iter < 20; ++iter) {
                      const float prev_yR3 = yR[3];
                      float nl0 = transistorHelpers::advancedTransistorNonlinearity(inR - yR[0] - resonance * yR[3] + noiseR, drive);
                      float nl1 = transistorHelpers::advancedTransistorNonlinearity(yR[0] - yR[1], drive);
                      float nl2 = transistorHelpers::advancedTransistorNonlinearity(yR[1] - yR[2], drive);
                      float nl3 = transistorHelpers::advancedTransistorNonlinearity(yR[2] - yR[3], drive);
                      float f0 = g * nl0 + sR[0] - yR[0];
                      float f1 = g * nl1 + sR[1] - yR[1];
                      float f2 = g * nl2 + sR[2] - yR[2];
                      float f3 = g * nl3 + sR[3] - yR[3];
                      float h0 = 1.f - nl0 * nl0;
                      float h1 = 1.f - nl1 * nl1;
                      float h2 = 1.f - nl2 * nl2;
                      float h3 = 1.f - nl3 * nl3;
                      float j00 = -g * h0 - 1.f;
                      float j03 = -g * resonance * h0;
                      float j10 = g * h1;
                      float j11 = -g * h1 - 1.f;
                      float j21 = g * h2;
                      float j22 = -g * h2 - 1.f;
                      float j32 = g * h3;
                      float j33 = -g * h3 - 1.f;
                      float den = j00 * j11 * j22 * j33 - j03 * j10 * j21 * j32;
                      yR[0] += (f1 * j03 * j21 * j32 - f0 * j11 * j22 * j33 - f2 * j03 * j11 * j32 + f3 * j03 * j11 * j22) / den;
                      yR[1] += (f0 * j10 * j22 * j33 - f1 * j00 * j22 * j33 + f2 * j03 * j10 * j32 - f3 * j03 * j10 * j22) / den;
                      yR[2] += (f1 * j00 * j21 * j33 - f0 * j10 * j21 * j33 - f2 * j00 * j11 * j33 + f3 * j03 * j10 * j21) / den;
                      yR[3] += (f0 * j10 * j21 * j32 - f1 * j00 * j21 * j32 + f2 * j00 * j11 * j32 - f3 * j00 * j11 * j22) / den;
                      if (std::abs(yR[3] - prev_yR3) <= errorThresh)
                          break;
                  }
                  sR[0] = 2.f * yR[0] - sR[0];
                  sR[1] = 2.f * yR[1] - sR[1];
                  sR[2] = 2.f * yR[2] - sR[2];
                  sR[3] = 2.f * yR[3] - sR[3];
                  return { yL[3], yR[3] };
              }
          private:
              double sr;
              float cutoff, resonance, drive, errorThresh;
              float cachedBaseG;
              float yL[4], sL[4];
              float yR[4], sR[4];
              juce::Random randGen;
          };
      
          // Polyphonic stereo node.
          template <int NV>
          struct Griffin_Growl_Stereo : public data::base {
              SNEX_NODE(Griffin_Growl_Stereo);
              struct MetadataClass { SN_NODE_ID("Griffin_Growl_Stereo"); };
              static constexpr bool isModNode() { return false; }
              static constexpr bool isPolyphonic() { return NV > 1; }
              static constexpr bool hasTail() { return false; }
              static constexpr bool isSuspendedOnSilence() { return false; }
              static constexpr int getFixChannelAmount() { return 2; }
              static constexpr int NumTables = 0, NumSliderPacks = 0, NumAudioFiles = 0, NumFilters = 0, NumDisplayBuffers = 0;
      
              float cutoffFrequency = 1000.f, resonance = 1.f;
              PolyData<JunoFilterStereo, NV> filters;
      
              inline void prepare(PrepareSpecs specs) {
                  double sr = specs.sampleRate;
                  filters.prepare(specs);
                  for (auto& voice : filters) {
                      voice.prepare(sr);
                      voice.setDrive(7.5f);
                  }
              }
              inline void reset() { for (auto& voice : filters) voice.reset(); }
      
              // Process block: scale inputs using fast tanh, process with filter, output stereo.
              template <typename ProcessDataType>
              inline void process(ProcessDataType& data) {
                  auto& fixData = data.template as<ProcessData<getFixChannelAmount()>>();
                  auto audioBlock = fixData.toAudioBlock();
                  float* leftChannel = audioBlock.getChannelPointer(0);
                  float* rightChannel = audioBlock.getChannelPointer(1);
                  const int numSamples = static_cast<int>(data.getNumSamples());
                  const float tanhConst = TanhHelper::tanh(1.5f);
                  for (int i = 0; i < numSamples; ++i) {
                      float inL = TanhHelper::tanh(1.5f * leftChannel[i]) / tanhConst;
                      float inR = TanhHelper::tanh(1.5f * rightChannel[i]) / tanhConst;
                      float outL = 0.f, outR = 0.f;
                      for (auto& voice : filters) {
                          auto outs = voice.processSample(inL, inR);
                          outL += outs.first;
                          outR += outs.second;
                      }
                      outL /= NV;
                      outR /= NV;
                      leftChannel[i] = outL;
                      rightChannel[i] = outR;
                  }
              }
              template <typename FrameDataType>
              inline void processFrame(FrameDataType& data) {}
      
              // Parameter callback: update voices on change.
              template <int P>
              inline void setParameter(double v) {
                  if (P == 0) {
                      float newVal = static_cast<float>(v);
                      if (cutoffFrequency != newVal) {
                          cutoffFrequency = newVal;
                          for (auto& voice : filters)
                              voice.setCutoff(cutoffFrequency);
                      }
                  }
                  else if (P == 1) {
                      float newVal = static_cast<float>(v);
                      if (resonance != newVal) {
                          resonance = newVal;
                          for (auto& voice : filters)
                              voice.setResonance(resonance);
                      }
                  }
              }
              inline void createParameters(ParameterDataList& data) {
                  parameter::data p1("Cutoff", { 20.0, 4000.0, 0.00001 });
                  registerCallback<0>(p1);
                  p1.setDefaultValue(1000.0);
                  data.add(std::move(p1));
                  parameter::data p2("Resonance", { 0.1, 4.3, 0.00001 });
                  registerCallback<1>(p2);
                  p2.setDefaultValue(1.0);
                  data.add(std::move(p2));
              }
              inline void setExternalData(const ExternalData& ed, int index) {}
              inline void handleHiseEvent(HiseEvent& e) {}
          };
      }
      

      Griffin_Growl_Mono.h

      #pragma once
      #include <JuceHeader.h>
      #include <cmath>
      #include <array>
      #include <algorithm>
      
      #ifndef M_PI
      #define M_PI 3.14159265358979323846
      #endif
      
      #ifndef NOISE_FEEDBACK_VOLUME
      #define NOISE_FEEDBACK_VOLUME 0.008f
      #endif
      
      #ifndef INPUT_SIGNAL_THRESHOLD
      #define INPUT_SIGNAL_THRESHOLD 1e-6f
      #endif
      
      // Fast tanh approximation 
      struct TanhHelper {
          static inline float tanh(float x) {
              float x2 = x * x;
              float sh = x * (1.f + x2 * (1.f / 6.f + x2 * (1.f / 120.f)));
              return sh / std::sqrt(1.f + sh * sh);
          }
      };
      
      // Transistor nonlinearity lookup table for (2/M_PI)*atan(7.5*x)
      namespace transistorHelpers {
          inline float lookupAdvancedTransistorNonlinearity(float x) {
              static const int TABLE_SIZE = 4096;
              static const float maxInput = 2.0f;
              static const float invStep = (TABLE_SIZE - 1) / (2.0f * maxInput);
              static float lookupTable[TABLE_SIZE];
              static bool tableInitialized = false;
              if (!tableInitialized) {
                  const float step = 2.0f * maxInput / (TABLE_SIZE - 1);
                  for (int i = 0; i < TABLE_SIZE; i++) {
                      float xi = -maxInput + i * step;
                      lookupTable[i] = (2.0f / M_PI) * std::atan(7.5f * xi);
                  }
                  tableInitialized = true;
              }
              float clampedX = std::clamp(x, -maxInput, maxInput);
              float index = (clampedX + maxInput) * invStep;
               int indexInt = std::min(static_cast<int>(index), TABLE_SIZE - 2); // keep indexInt + 1 inside the table
              float frac = index - indexInt;
              return lookupTable[indexInt] * (1.f - frac) + lookupTable[indexInt + 1] * frac;
          }
          inline float advancedTransistorNonlinearity(float x, float /*drive*/) {
              return lookupAdvancedTransistorNonlinearity(x);
          }
      }
      
      namespace project {
          using namespace juce;
          using namespace hise;
          using namespace scriptnode;
      
          // Mono filter using Newton-Raphson iteration.
          // The filter coefficient is cached (cachedBaseG) and updated only when cutoff or sr change.
          class JunoFilterMono {
          public:
              JunoFilterMono()
                  : cutoff(1000.f), resonance(1.f), drive(7.5f), sr(44100.0),
                  errorThresh(0.000001f), cachedBaseG(std::tan(1000.f * M_PI / 44100.0))
              {
                  for (int i = 0; i < 4; ++i)
                      y[i] = 0.f;
                  s[0] = s[1] = s[2] = s[3] = 0.f;
              }
              inline void setCutoff(float c) {
                  if (cutoff != c) {
                      cutoff = c;
                      cachedBaseG = std::tan(cutoff * M_PI / sr);
                  }
              }
              inline void setResonance(float r) { resonance = r; }
              inline void setDrive(float) { drive = 7.5f; }
              inline void prepare(double sr_) {
                  sr = sr_;
                  cachedBaseG = std::tan(cutoff * M_PI / sr);
              }
              inline void reset() {
                  for (int i = 0; i < 4; ++i)
                      y[i] = 0.f;
                  s[0] = s[1] = s[2] = s[3] = 0.f;
              }
      
              // Process one sample using cached filter coefficient.
              inline float processSample(float in) {
                  // Use precomputed base coefficient (cachedBaseG) for all stages.
                  const float g = cachedBaseG;
                  // Generate noise only if input is nonzero.
                  const float noise = (NOISE_FEEDBACK_VOLUME > 0.f && std::abs(in) > INPUT_SIGNAL_THRESHOLD) ?
                      NOISE_FEEDBACK_VOLUME * (randGen.nextFloat() * 2.f - 1.f) : 0.f;
                  // Newton-Raphson iteration (max 20 iterations)
                  for (int iter = 0; iter < 20; ++iter) {
                      const float prev_y3 = y[3];
                      const float nl0 = transistorHelpers::advancedTransistorNonlinearity(in - y[0] - resonance * y[3] + noise, drive);
                      const float nl1 = transistorHelpers::advancedTransistorNonlinearity(y[0] - y[1], drive);
                      const float nl2 = transistorHelpers::advancedTransistorNonlinearity(y[1] - y[2], drive);
                      const float nl3 = transistorHelpers::advancedTransistorNonlinearity(y[2] - y[3], drive);
                      const float f0 = g * nl0 + s[0] - y[0];
                      const float f1 = g * nl1 + s[1] - y[1];
                      const float f2 = g * nl2 + s[2] - y[2];
                      const float f3 = g * nl3 + s[3] - y[3];
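                       // Newton-Raphson step: h* approximate the shaper's derivative (1 - nl^2),
                       // j* are the nonzero entries of the 4x4 Jacobian (one band per stage plus
                       // the resonance feedback term j03), and den is its determinant.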
                      const float h0 = 1.f - nl0 * nl0;
                      const float h1 = 1.f - nl1 * nl1;
                      const float h2 = 1.f - nl2 * nl2;
                      const float h3 = 1.f - nl3 * nl3;
                      const float j00 = -g * h0 - 1.f;
                      const float j03 = -g * resonance * h0;
                      const float j10 = g * h1;
                      const float j11 = -g * h1 - 1.f;
                      const float j21 = g * h2;
                      const float j22 = -g * h2 - 1.f;
                      const float j32 = g * h3;
                      const float j33 = -g * h3 - 1.f;
                      const float den = j00 * j11 * j22 * j33 - j03 * j10 * j21 * j32;
                      y[0] += (f1 * j03 * j21 * j32 - f0 * j11 * j22 * j33 - f2 * j03 * j11 * j32 + f3 * j03 * j11 * j22) / den;
                      y[1] += (f0 * j10 * j22 * j33 - f1 * j00 * j22 * j33 + f2 * j03 * j10 * j32 - f3 * j03 * j10 * j22) / den;
                      y[2] += (f1 * j00 * j21 * j33 - f0 * j10 * j21 * j33 - f2 * j00 * j11 * j33 + f3 * j03 * j10 * j21) / den;
                      y[3] += (f0 * j10 * j21 * j32 - f1 * j00 * j21 * j32 + f2 * j00 * j11 * j32 - f3 * j00 * j11 * j22) / den;
                      if (std::abs(y[3] - prev_y3) <= errorThresh)
                          break;
                  }
                  s[0] = 2.f * y[0] - s[0];
                  s[1] = 2.f * y[1] - s[1];
                  s[2] = 2.f * y[2] - s[2];
                  s[3] = 2.f * y[3] - s[3];
                  return y[3];
              }
          private:
              double sr;
              float cutoff, resonance, drive, errorThresh;
              float cachedBaseG;
              float y[4], s[4];
              juce::Random randGen;
          };
      
          // Polyphonic mono node.
          template <int NV>
          struct Griffin_Growl_Mono : public data::base {
              SNEX_NODE(Griffin_Growl_Mono);
              struct MetadataClass { SN_NODE_ID("Griffin_Growl_Mono"); };
              static constexpr bool isModNode() { return false; }
              static constexpr bool isPolyphonic() { return NV > 1; }
              static constexpr bool hasTail() { return false; }
              static constexpr bool isSuspendedOnSilence() { return false; }
              static constexpr int getFixChannelAmount() { return 2; }
              static constexpr int NumTables = 0, NumSliderPacks = 0, NumAudioFiles = 0, NumFilters = 0, NumDisplayBuffers = 0;
      
              float cutoffFrequency = 1000.f, resonance = 1.f;
              PolyData<JunoFilterMono, NV> filters;
      
              // Prepare voices; update sample rate only.
              inline void prepare(PrepareSpecs specs) {
                  double sr = specs.sampleRate;
                  filters.prepare(specs);
                  for (auto& voice : filters) {
                      voice.prepare(sr);
                      voice.setDrive(7.5f);
                  }
              }
              inline void reset() { for (auto& voice : filters) voice.reset(); }
      
              // Process audio block: apply fast tanh scaling, sum voices, process with filter, copy mono output to both channels.
              template <typename ProcessDataType>
              inline void process(ProcessDataType& data) {
                  auto& fixData = data.template as<ProcessData<getFixChannelAmount()>>();
                  auto audioBlock = fixData.toAudioBlock();
                  float* leftChannel = audioBlock.getChannelPointer(0);
                  float* rightChannel = audioBlock.getChannelPointer(1);
                  const int numSamples = static_cast<int>(data.getNumSamples());
                  const float tanhConst = TanhHelper::tanh(1.5f);
                  for (int i = 0; i < numSamples; ++i) {
                      float in = TanhHelper::tanh(1.5f * leftChannel[i]) / tanhConst;
                      float out = 0.f;
                      for (auto& voice : filters)
                          out += voice.processSample(in);
                      out /= NV;
                      leftChannel[i] = out;
                      rightChannel[i] = out;
                  }
              }
              template <typename FrameDataType>
              inline void processFrame(FrameDataType& data) {}
      
              // Parameter callback: update voices on change.
              template <int P>
              inline void setParameter(double v) {
                  if (P == 0) {
                      float newVal = static_cast<float>(v);
                      if (cutoffFrequency != newVal) {
                          cutoffFrequency = newVal;
                          for (auto& voice : filters)
                              voice.setCutoff(cutoffFrequency);
                      }
                  }
                  else if (P == 1) {
                      float newVal = static_cast<float>(v);
                      if (resonance != newVal) {
                          resonance = newVal;
                          for (auto& voice : filters)
                              voice.setResonance(resonance);
                      }
                  }
              }
              inline void createParameters(ParameterDataList& data) {
                  parameter::data p1("Cutoff", { 20.0, 4000.0, 0.00001 });
                  registerCallback<0>(p1);
                  p1.setDefaultValue(1000.0);
                  data.add(std::move(p1));
                  parameter::data p2("Resonance", { 0.1, 4.3, 0.00001 });
                  registerCallback<1>(p2);
                  p2.setDefaultValue(0.8);
                  data.add(std::move(p2));
              }
              inline void setExternalData(const ExternalData& ed, int index) {}
              inline void handleHiseEvent(HiseEvent& e) {}
          };
      }
      
      posted in C++ Development
      griffinboyG
      griffinboy
    • RE: Help me understand Waveshaping

      @ccbl

      Clipping is waveshaping.

      Waveshaping is taking an input sample and mapping it to an output sample.

      Audio samples make up waveforms.
      Samples range from -1 to 1, and an audio file is an array of these numbers. A sine wave, for example, could start at 0, rise to 1, return to 0, dip to -1, come back to 0, and repeat.

      Waveshaping looks at the current sample, say 0.75, maps it to a different sample value, and outputs that value instead.
      This is raw digital distortion of the wave: you take a wave shape and transform each point, ending up with a different wave.

      Often continuous functions are used.
      309ca949-50b6-45b0-89cd-88c5bba445e2-image.png
      So that you get a smooth transformation.
      Waveshaping is nonlinear but memoryless: the output depends only on the current input sample.

      Waveshaping is static, so it will generally not sound like a guitar amp, because analog amps are nonlinear and dynamic: their effect depends on past audio as well, not just on the current sample being transformed.
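
      To make that concrete, here's a tiny sketch of a memoryless waveshaper: one continuous function applied to every sample on its own. The waveshape / processBuffer names and the tanh curve are just an example; any input-to-output mapping works the same way.

      #include <cmath>

      // Memoryless waveshaping: each output sample depends only on the current input sample.
      inline float waveshape(float x)
      {
          return std::tanh(2.0f * x);   // example transfer curve (soft clip); 0.75 maps to ~0.9
      }

      void processBuffer(float* samples, int numSamples)
      {
          for (int i = 0; i < numSamples; ++i)
              samples[i] = waveshape(samples[i]);   // no state, no memory of past samples
      }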

      posted in General Questions
      griffinboyG
      griffinboy
    • RE: 3rd party Header Files in Hise?

      @harris-rosendahl

      Haha, distortion: Some of the most complex dsp out there.

      This is on the to do list, I plan to release a video and some templates for this stuff, but I am currently tied to this very tiring dsp project which has a short deadline.
      Once I am free of it, I shall create the video and release the resources.

      If you want it sooner rather than later, I can put together a rough guide and put it out there.

      posted in ScriptNode
      griffinboyG
      griffinboy
    • RE: Best resource for learning HISE?

      @andrewz94

      I am in the process of making a series on effects plugins.

      What are you wanting to build? I can change the order I produce the videos in so they're more helpful for what you want.

      posted in General Questions
      griffinboyG
      griffinboy
    • RE: Interpolate or merge filter curves?

      @Straticah
      ikr : )
      I think it was one of the first questions I asked on the forum, so this is very nostalgic.

      posted in General Questions
      griffinboyG
      griffinboy
    • RE: 808 Kick Drum done in Scriptnode (suggestions to improve?)

      @Morphoice

      I've got a WDF framework for Hise... it does work, but it's early days. It's branched from ChowDSP and translated to use the scriptnode infrastructure.
      It can do RC filters, and I can lend it, but it's not exactly a finished solution; there are probably better ways.

      posted in ScriptNode
      griffinboyG
      griffinboy
    • RE: HISE Meet Up

      @griffinboy

      If we ever look into the 3rd party c++ node stuff in a meeting with Christoph, I'd be especially interested to know how to properly interact with external data and with PolyData. I've figured out most of the c++ node stuff, and I'm working on tutorials for it, but those two things I don't fully understand. I can use them, just not efficiently. It especially causes me trouble when optimising: the way Hise has things set up sometimes means I can't realise my optimisation ideas. A lot of the time I feel like I'm battling the Hise framework, when I suspect there are solutions to most of my issues already built in, unbeknownst to me.

      But then again, perhaps the solution is to learn more c++ and detach more from the scriptnode framework, I'm not sure. I'm always doing overcomplicated stuff 😂

      posted in General Questions
      griffinboyG
      griffinboy
    • RE: Live Granular Patch for FAUST or RNBO

      @treynterrio I happen to be working on a script node granular right now but it's a side project. If it gets done soon I may be able to lend it

      posted in General Questions
      griffinboyG
      griffinboy
    • RE: Next HISE Developer Hang

      @HISEnberg

      I'll get back to you on this in a bit.

      But if you want to get started immediately, I can give you my LUFS script, and I'll come up with a simpler example for you in a bit. I've been meaning to improve my buffer stuff; in fact it's one of the things I want to talk to Christoph about... because it's possible to integrate c++ nodes directly with the audio buffers in Hise so that the interface script can talk to the same buffers, but I've not managed to completely get it right.

      Anyway, here's the LUFS node. It's a fairly old and awful script, and you won't be able to run it as-is, but you can read through it:

      #pragma once
      #include <JuceHeader.h>
      #include <limits>
      #include <atomic>
      #include <cmath>
      #include "src/GlobalCables.h"
      
      #ifndef M_PI
      #define M_PI 3.14159265358979323846
      #endif
      
      namespace project
      {
          using namespace juce;
          using namespace hise;
          using namespace scriptnode;
      
          template <int NV> struct Lufs_In : public data::base
          {
              SNEX_NODE(Lufs_In);
      
              struct MetadataClass
              {
                  SN_NODE_ID("Lufs_In");
              };
      
              static constexpr bool isModNode() { return true; }
              static constexpr bool isPolyphonic() { return NV > 1; }
              static constexpr bool hasTail() { return false; }
              static constexpr bool isSuspendedOnSilence() { return false; }
              static constexpr int getFixChannelAmount() { return 2; }
      
              static constexpr int NumTables = 0;
              static constexpr int NumSliderPacks = 0;
              static constexpr int NumAudioFiles = 0;
              static constexpr int NumFilters = 0;
              static constexpr int NumDisplayBuffers = 0;
      
              // Internal Parameters
              float sampleRate = 44100.0f;
              float blockSize = 512.0f;
              float lufsBlockSize = 400.0f; // Default to 400ms
              float overlap = 0.75f;        // 75% overlap
              ModValue modValue;
      
              // LUFS calculation
              juce::AudioBuffer<float> filteredRingBuffer;
              size_t ringBufferWritePos = 0;
              size_t ringBufferSize = 0;
              size_t hopSize = 0;
              double runningSum = 0.0;
      
              // JUCE IIR Filters
              std::array<juce::IIRFilter, 2> preFilters;
              std::array<juce::IIRFilter, 2> weightingFilters;
      
              // LUFS result
              float currentLUFS = -100.0f;
      
              // Reusable filtered buffer
              juce::AudioBuffer<float> filteredBuffer;
      
              // Thread safety
              std::atomic<bool> parametersChanged{ false };
              juce::CriticalSection processLock;
      
              void prepare(PrepareSpecs specs)
              {
                  juce::ScopedLock sl(processLock);
                  sampleRate = static_cast<float>(specs.sampleRate);
                  blockSize = static_cast<float>(specs.blockSize);
                  updateInternalState();
              }
      
              void updateInternalState()
              {
                  // Clamp and validate parameters
                  lufsBlockSize = juce::jlimit(100.0f, 4000.0f, lufsBlockSize); // 100ms to 4000ms
                  overlap = juce::jlimit(0.0001f, 0.9999f, overlap);            // 0.01% to 99.99%
      
                  // Calculate ring buffer size and ensure it's positive
                  ringBufferSize = static_cast<size_t>(sampleRate * lufsBlockSize / 1000.0f);
                  jassert(ringBufferSize > 0 && "ringBufferSize must be greater than 0");
      
                  hopSize = std::max(static_cast<size_t>(1), static_cast<size_t>(ringBufferSize * (1.0f - overlap)));
                  filteredRingBuffer.setSize(2, static_cast<int>(ringBufferSize));
                  ringBufferWritePos = 0;
                  runningSum = 0.0;
                  filteredBuffer.setSize(2, static_cast<int>(blockSize));
      
                  calculateFilterCoefficients();
                  reset();
              }
      
              void calculateFilterCoefficients()
              {
                  const double epsilon = 1e-12; // Small value to prevent division by zero
      
                  // Pre-filter coefficients (as per your original code)
                  const double db = 3.999843853973347;
                  const double f0 = 1681.974450955533;
                  const double Q = 0.7071752369554196;
                  const double K = std::tan(M_PI * f0 / sampleRate);
      
                  const double Vh = std::pow(10.0, db / 20.0);
                  const double Vb = std::pow(Vh, 0.4996667741545416);
      
                  const double denominator0 = 1.0 + K / Q + K * K + epsilon; // Added epsilon
                  const double denominator1 = 2.0 * (K * K - 1.0) / denominator0;
                  const double denominator2 = (1.0 - K / Q + K * K) / denominator0;
                  const double numerator0 = (Vh + Vb * K / Q + K * K) / denominator0;
                  const double numerator1 = 2.0 * (K * K - Vh) / denominator0;
                  const double numerator2 = (Vh - Vb * K / Q + K * K) / denominator0;
      
                  // Validate coefficients
                  jassert(!std::isnan(numerator0) && !std::isinf(numerator0));
                  jassert(!std::isnan(numerator1) && !std::isinf(numerator1));
                  jassert(!std::isnan(numerator2) && !std::isinf(numerator2));
                  jassert(!std::isnan(denominator1) && !std::isinf(denominator1));
                  jassert(!std::isnan(denominator2) && !std::isinf(denominator2));
      
                  juce::IIRCoefficients preCoeffs(numerator0, numerator1, numerator2,
                      1.0, denominator1, denominator2);
      
                  // Weighting filter coefficients (as per your original code)
                  const double f0_weighting = 38.13547087602444;
                  const double Q_weighting = 0.5003270373238773;
                  const double K_weighting = std::tan(M_PI * f0_weighting / sampleRate);
      
                  const double denominator0_weighting = 1.0 + K_weighting / Q_weighting + K_weighting * K_weighting + epsilon; // Added epsilon
                  const double denominator1_weighting = 2.0 * (K_weighting * K_weighting - 1.0) / denominator0_weighting;
                  const double denominator2_weighting = (1.0 - K_weighting / Q_weighting + K_weighting * K_weighting) / denominator0_weighting;
      
                  // Validate weighting coefficients
                  jassert(!std::isnan(denominator0_weighting) && !std::isinf(denominator0_weighting));
                  jassert(!std::isnan(denominator1_weighting) && !std::isinf(denominator1_weighting));
                  jassert(!std::isnan(denominator2_weighting) && !std::isinf(denominator2_weighting));
      
                  juce::IIRCoefficients weightingCoeffs(1.0, -2.0, 1.0,
                      1.0, denominator1_weighting, denominator2_weighting);
      
                  for (int ch = 0; ch < 2; ++ch)
                  {
                      preFilters[ch].setCoefficients(preCoeffs);
                      weightingFilters[ch].setCoefficients(weightingCoeffs);
                  }
              }
      
              template <typename ProcessDataType>
              void process(ProcessDataType& data)
              {
                  auto& fixData = data.template as<ProcessData<2>>();
                  auto numSamples = fixData.getNumSamples();
      
                  if (numSamples == 0)
                      return;
      
                  // Check if the block is silent
                  bool isSilent = true;
                  for (int ch = 0; ch < 2; ++ch)
                  {
                      auto channelData = fixData[ch];
                      for (int i = 0; i < numSamples; ++i)
                      {
                          if (std::abs(channelData[i]) > 0.0001f) // use the magnitude so negative samples count too
                          {
                              isSilent = false;
                              break;
                          }
                      }
                      if (!isSilent)
                          break;
                  }
      
                  // If the block is silent, return early
                  if (isSilent)
                  {
                      return;
                  }
      
                  juce::ScopedLock sl(processLock);

                  if (parametersChanged.exchange(false))
                      updateInternalState();
      
                  // Ensure the filteredBuffer is large enough
                  if (filteredBuffer.getNumSamples() < numSamples)
                      filteredBuffer.setSize(2, numSamples);
      
                  // Create AudioBlock objects for input and filtered data
                  auto inputBlock = fixData.toAudioBlock();
                  // Wrap the pre-allocated member buffer directly; copying it into a
                  // temporary AudioBuffer here would allocate on the audio thread
                  auto filteredBlock = juce::dsp::AudioBlock<float>(filteredBuffer);
      
                  for (int ch = 0; ch < 2; ++ch)
                  {
                      // Copy input data to filteredBuffer
                      juce::FloatVectorOperations::copy(filteredBlock.getChannelPointer(ch),
                          inputBlock.getChannelPointer(ch),
                          static_cast<int>(numSamples));
      
                      // Apply pre-filter
                      preFilters[ch].processSamples(filteredBlock.getChannelPointer(ch), numSamples);
      
                      // Apply weighting filter
                      weightingFilters[ch].processSamples(filteredBlock.getChannelPointer(ch), numSamples);
                  }
      
                  // Store filtered samples in ring buffer and update LUFS
                  for (int i = 0; i < numSamples; ++i)
                  {
                      double leftSample = static_cast<double>(filteredBlock.getSample(0, i));
                      double rightSample = static_cast<double>(filteredBlock.getSample(1, i));
      
                      // Subtract the square of the oldest samples
                      float oldLeftSample = filteredRingBuffer.getSample(0, static_cast<int>(ringBufferWritePos));
                      float oldRightSample = filteredRingBuffer.getSample(1, static_cast<int>(ringBufferWritePos));
                      runningSum -= static_cast<double>(oldLeftSample * oldLeftSample + oldRightSample * oldRightSample);
      
                      // Add the square of the new samples
                      runningSum += leftSample * leftSample + rightSample * rightSample;
      
                      // Update ring buffer
                      filteredRingBuffer.setSample(0, static_cast<int>(ringBufferWritePos), static_cast<float>(leftSample));
                      filteredRingBuffer.setSample(1, static_cast<int>(ringBufferWritePos), static_cast<float>(rightSample));
                      ringBufferWritePos = (ringBufferWritePos + 1) % ringBufferSize;
      
                      // Calculate LUFS when we've moved by hopSize or if hopSize is 0
                      if (hopSize == 0 || ringBufferWritePos % hopSize == 0)
                      {
                          calculateLUFS();
                      }
                  }
              }
      
              void calculateLUFS()
              {
                  jassert(ringBufferSize > 0 && "ringBufferSize must be greater than 0");
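                  // The steps below follow the usual mean-square -> loudness conversion:
                  // runningSum holds the summed squares of the K-weighted samples of both
                  // channels over the analysis window, so dividing by (2 * ringBufferSize)
                  // gives the mean square averaged across the two channels, and the
                  // -0.691 + 10 * log10(...) constants come from the ITU-R BS.1770 loudness formula.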
      
                  double meanSquared = runningSum / (2.0 * ringBufferSize);
      
                  if (meanSquared > 1e-12)
                  {
                      currentLUFS = static_cast<float>(-0.691 + 10.0 * std::log10(meanSquared));
                      currentLUFS = std::clamp(currentLUFS, -100.0f, 0.0f);
                  }
                  else
                  {
                      currentLUFS = -100.0f;
                  }
      
                  modValue.setModValue(currentLUFS + 2.96f);
              }
      
              float getLUFS() const { return currentLUFS; }
      
              void handleHiseEvent(HiseEvent& e) {}
      
              void reset()
              {
                  juce::ScopedLock sl(processLock);
                  ringBufferWritePos = 0;
                  runningSum = 0.0;
                  filteredRingBuffer.clear();
                  for (int ch = 0; ch < 2; ++ch)
                  {
                      preFilters[ch].reset();
                      weightingFilters[ch].reset();
                  }
                  currentLUFS = -100.0f;
              }
      
              template <typename T> void processFrame(T& data) {}
      
              int handleModulation(double& value)
              {
                  return modValue.getChangedValue(value);
              }
      
              template <int P> void setParameter(double v)
              {
                  if (P == 0)
                  {
                      lufsBlockSize = static_cast<float>(v);
                      parametersChanged.store(true);
                  }
                  else if (P == 1)
                  {
                      overlap = static_cast<float>(v);
                      parametersChanged.store(true);
                  }
                  reset();
              }
      
              void createParameters(ParameterDataList& data)
              {
                  {
                      parameter::data p("Buffer Size (ms)", { 100.0, 4000.0 });
                      registerCallback<0>(p);
                      p.setDefaultValue(2800.0);
                      data.add(std::move(p));
                  }
                  {
                      parameter::data p("Overlap", { 0.0001, 0.9999 });
                      registerCallback<1>(p);
                      p.setDefaultValue(0.99);
                      data.add(std::move(p));
                  }
              }
          };
      }
      
      posted in General Questions
      griffinboyG
      griffinboy
    • RE: Inspiration

      @Robert-Puza

      My inspiration is to change the film music industry.
      My work mostly centres around replacing sample libraries with tools that teach the user to create their own sound. I interview successful composers and turn their techniques into programs.

      My work is about giving new composers an alternate route into music that doesn't involve buying a ridiculous number of expensive, bloated sample libraries, synths and effects, and that (hopefully) lets them learn these very advanced techniques and workflows without paying for a course.

      My original inspiration was simply to deliver sample libraries containing sounds that nobody else knew how to make. But after attending university and seeing how hopeless the average student film and game composer is at sound design, I decided to change my mission: give people the tools and knowledge to make their own sounds directly.

      A big gap is forming in the market for music tools and sample libraries; the big players have been doing a very poor job of keeping up with the future of computer music.

      posted in General Questions
      griffinboyG
      griffinboy
    • RE: Sample Import Tutorial - It's Broken!

      @paper_lung

      In case he doesn't fix it, I'm working on a custom solution now.
      I'll upload it when it's working.

      posted in General Questions
      griffinboyG
      griffinboy
    • RE: More Types of Saturation

      @Zazzi
      Use scriptnode; see the 'Snex Shaper' node.
      When you create a new file inside the node and then expand the node, you can edit the code. You'll find a function whose body is a single statement:
      return input;

      To create your own waveshaper distortion, you simply need to return something other than input.
      By doing this, you are manipulating the digital signal, which causes it to distort.
      Here are a couple of algorithms that you can tweak and experiment with:

      return (1.7f * input + -0.5f * Math.pow(input, 2.0f) + -0.2f * Math.pow(input, 3.0f) - 0.75f * (Math.pow(input, 2.0f) - 1.9f * Math.pow(input, 3.0f) + Math.pow(input, 4.0f)));

      return 0.6f + (input-0.6f) / Math.pow(1.0f+((input-0.6f)/(1.0f-0.6f)) , 2.0f);
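
      For context, here is a minimal sketch of where that return statement lives, assuming the default per-sample function the SNEX shaper generates (the exact boilerplate may differ between HISE versions). The tanh soft clip and the drive constant are purely illustrative, not part of the snippets above:

      // Hypothetical edit of the SNEX shaper's per-sample function (sketch, not a drop-in file)
      template <typename T> T getSample(T input)
      {
          T drive = T(2.0);                  // made-up drive amount; could be wired to a parameter
          return Math.tanh(drive * input);   // soft clip instead of 'return input;'
      }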

      If you want simple digital distortion, I recommend searching for waveshaper algorithms like the two above and experimenting with different ways of implementing them.
      Things get fun when you chain multiple waveshapers one after the other, each mixed wet/dry, with different lowpass filters in between the stages (a rough sketch of that idea follows below).
      I recommend watching the YouTube video 'Fifty Shades of Distortion' for some inspiration.
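
      To make the chaining idea concrete, here is a rough, self-contained C++ sketch of that signal flow: two waveshaper stages with a one-pole lowpass and a wet/dry mix between them. The struct name, coefficients and drive amounts are all made up for illustration; this is not taken from any HISE node.

      #include <cmath>

      // Illustrative two-stage waveshaper chain (per-sample processing)
      struct MultiStageShaper
      {
          float lpState = 0.0f;  // one-pole lowpass memory
          float lpCoeff = 0.2f;  // crude cutoff control in the 0..1 range (made up)
          float wet     = 0.5f;  // wet/dry mix applied between the two stages

          float processSample(float x)
          {
              // Stage 1: simple cubic waveshaper
              float stage1 = 1.5f * x - 0.5f * x * x * x;

              // One-pole lowpass between the stages to tame the added harmonics
              lpState += lpCoeff * (stage1 - lpState);

              // Wet/dry mix of the filtered stage-1 signal before the next shaper
              float mixed = wet * lpState + (1.0f - wet) * x;

              // Stage 2: tanh soft clip
              return std::tanh(2.0f * mixed);
          }
      };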

      Edit:
      If you want to emulate analog-style saturation, things get much more complex and you have to start using some really advanced maths. I am currently working on a tube circuit model, which I'll release here when it's done. Using that, you could recreate the circuits of different tube amps and get an accurate distortion out.
      These typically sound better than static waveshapers because they react dynamically to different signal levels, which sounds very lively.

      posted in General Questions
      griffinboyG
      griffinboy