Don't know if this is the done thing really, but I wanted to show off:
https://youtu.be/1kMHloRQLcM

Best posts made by Orvillain
-
I wrote a reverb
-
RE: I wrote a reverb
@Chazrox I might do a video or two on everything I've learned!
-
RE: Need filmstrip animations
@d-healey I really like that UI. Very simple, accessible, and smooth looking - for lack of a better word!
-
RE: Can We PLEASE Just Get This Feature DONE
Free mankini with every commercial license???
-
RE: I wrote a reverb
@Chazrox said in I wrote a reverb:
@Orvillain Please.
I've been waiting for some DSP videos! I've been watching ADC talks every day on baby topics just to familiarise myself with the lingo and whatnot. I think I'm ready to start diving in! There are some pretty wicked DSP guys in here for sure and I'd love to get some tutorials for writing C++ nodes.
There are two guys who got me started on this. One is a dude called Geraint Luff, aka SignalSmith. This is probably his most accessible video:
https://youtu.be/6ZK2Goiyotk
Then the other guy of course is Sean Costello of ValhallaDSP fame:
https://valhalladsp.com/2021/09/22/getting-started-with-reverb-design-part-2-the-foundations/
https://valhalladsp.com/2021/09/23/getting-started-with-reverb-design-part-3-online-resources/
In essence, here's the journey, assuming you know at least a little bit of C++:
- Learn how to create a ring buffer (aka my Ring Delay thread)
- Learn how to create an all-pass filter using a ring buffer.
- Understand how fractional delays work, and the various types of interpolation.
- Learn how to manage feedback loops.
Loads of resources out there for sure!
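To make steps 1 and 2 concrete, here's a minimal sketch of a power-of-two ring buffer feeding a Schroeder all-pass. All names and structure here are my own invention, not HISE or JUCE APIs; it's a standalone illustration, not a production design (a real one would use a single buffer and an internal state, not two):

```cpp
#include <vector>
#include <cmath>

// Minimal power-of-two ring buffer (step 1).
struct Ring
{
    std::vector<float> buf;
    int w = 0, mask = 0;

    void setSize(int minSamples)
    {
        int n = 1;
        while (n < minSamples) n <<= 1;   // round up to a power of two
        buf.assign(n, 0.0f);
        mask = n - 1;
        w = 0;
    }

    void push(float x)          { buf[w] = x; w = (w + 1) & mask; }
    // read the sample pushed 'delay' pushes ago
    float read(int delay) const { return buf[(w - delay) & mask]; }
};

// Schroeder all-pass (step 2): y[n] = -g*x[n] + x[n-D] + g*y[n-D].
// Uses one ring for the input history and one for the output history.
struct Allpass
{
    Ring x, y;
    int delay = 1;
    float g = 0.5f;

    void prepare(int delaySamples, float gain)
    {
        delay = delaySamples;
        g = gain;
        x.setSize(delaySamples + 1);
        y.setSize(delaySamples + 1);
    }

    float process(float in)
    {
        float out = -g * in + x.read(delay) + g * y.read(delay);
        x.push(in);
        y.push(out);
        return out;
    }
};
```

Feeding an impulse through this and watching the output decay by a factor of g every D samples is a good sanity check before moving on to fractional delays.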
-
RE: Orv's ScriptNode+SNEX Journey
Lesson 5 - SNEX code in a bit more detail.
So I'm by no means an expert in C or C++ - in fact I only just recently started learning it. But here's what I've sussed out with regard to the HISE template... and template is exactly the right word, because the first line is:
template <int NV> struct audio_loader
Somewhere under the hood, HISE must be set up to pass an integer into any SNEX node, and that integer corresponds to the voice count - NV almost certainly stands for the number of voices.
The line above declares a template that takes this NV integer in, and creates a struct called audio_loader for each instance of NV. Indeed we can prove this by running the following code:
template <int NV> struct audio_loader
{
    SNEX_NODE(audio_loader);

    ExternalData data;
    double note = 0.0;

    // Initialise the processing specs here
    void prepare(PrepareSpecs ps) { }

    // Reset the processing pipeline here
    void reset() { }

    // Process the signal here
    template <typename ProcessDataType> void process(ProcessDataType& data) { }

    // Process the signal as frame here
    template <int C> void processFrame(span<float, C>& data) { }

    // Process the MIDI events here
    void handleHiseEvent(HiseEvent& e)
    {
        note = e.getNoteNumber();   // assign to the member - don't shadow it with a local
        Console.print(note);
    }

    // Use this function to setup the external data
    void setExternalData(const ExternalData& d, int index)
    {
        data = d;
    }

    // Set the parameters here
    template <int P> void setParameter(double v) { }
};
There are only three things happening here:
- We set the ExternalData as in a previous post.
- We establish a variable with the datatype of double called 'note' and we initialise it as 0.0. But this value will never hold because....
- In the handleHiseEvent() method, we use e.getNoteNumber() and we assign this to the note variable. We then print the note variable out inside of the handleHiseEvent() method.
Now when we run this script, any time we play a MIDI note, the console will show us the note number that we pressed. This is even true if you play chords, or in a scenario where no note-off events occur.
That's a long-winded way of saying that a SNEX node is run for each active voice; at least when it is within a ScriptNode Synthesiser DSP network.
The next line in the script after the template is established is:
SNEX_NODE(audio_loader);
This is pretty straightforward. The text you pass here has to match the name of the script loaded inside your SNEX node - not the name of the SNEX node itself.
Here you can see my SNEX node is just called: snex_node.
But the script loaded into it is called audio_loader, and so the reference to SNEX_NODE inside the script has to also reference audio_loader.
-
RE: scriptAudioWaveForm and updating contents
@d-healey said in scriptAudioWaveForm and updating contents:
@Orvillain Did you try,
AudioWaveform.set("processorId", value);
?
Yeah I did, and it does update it based on a follow-up AudioWaveform.get('processorId') call - but the UI component doesn't seem to update, and still shows data from the previous processorId. When I compile the script, the UI updates one time... but not on subsequent calls to the set method.
I figured I needed to call some kind of update() function after setting the processorId, but no such luck so far.
-
RE: Ring Buffer design
Here is a super contrived example. If you compile this as a node and add it to your scriptnode layout, it will delay the right channel by 2 seconds.
#pragma once
#include <JuceHeader.h>

namespace project
{
using namespace juce;
using namespace hise;
using namespace scriptnode;

static inline float cubic4(float s0, float s1, float s2, float s3, float f)
{
    float a0 = -0.5f * s0 + 1.5f * s1 - 1.5f * s2 + 0.5f * s3;
    float a1 = s0 - 2.5f * s1 + 2.0f * s2 - 0.5f * s3;
    float a2 = -0.5f * s0 + 0.5f * s2;
    float a3 = s1;
    return ((a0 * f + a1) * f + a2) * f + a3;
}

struct RingDelay
{
    // holds all of the sample data
    std::vector<float> buf;
    // write position
    int w = 0;
    // bitmask for fast wrap-around (which is why the buffer must always be power-of-2)
    int mask = 0;

    // sets the size of the buffer according to the requested size in samples
    // will set the buffer to a power-of-two size above the requested capacity
    // for example - minCapacitySamples==3000, n==4096
    void setSize(int minCapacitySamples)
    {
        // start off with n=1
        int n = 1;
        // keep doubling n until it is greater than or equal to minCapacitySamples
        while (n < minCapacitySamples)
        {
            n <<= 1;
        }
        // set the size of the buffer to n, and fill with zeros
        buf.assign(n, 0.0f);
        // mask is now n-1; 4095 in the example
        mask = n - 1;
        // reset the write pointer to zero
        w = 0;
    }

    // push a sample value into the buffer at the write position
    // this will always wrap around the capacity length because of the mask value being set prior
    void push(float x)
    {
        buf[w] = x;          // set the current write position to the sample value
        w = (w + 1) & mask;  // increment w by 1; the bitwise AND performs the wrap
    }

    // Performs a cubic interpolation read on the buffer at the specified sample position
    // This can be a fractional number
    float readCubic(float delaySamples) const
    {
        // w is the next write position. Read back from that according to delaySamples.
        float rp = static_cast<float>(w) - delaySamples;
        // wrap this read pointer into the range 0-size, where size=mask+1
        rp -= std::floor(rp / static_cast<float>(mask + 1)) * static_cast<float>(mask + 1);
        // the floor of rp - the integer part
        int i1 = static_cast<int>(rp);
        // the decimal part
        float f = rp - static_cast<float>(i1);
        // grab the neighbours around i1
        int i0 = (i1 - 1) & mask;
        int i2 = (i1 + 1) & mask;
        int i3 = (i1 + 2) & mask;
        // feed those samples into the cubic interpolator
        return cubic4(buf[i0], buf[i1 & mask], buf[i2], buf[i3], f);
    }

    // returns the size of the buffer
    int size() const { return mask + 1; }

    // clear the buffer without changing the size, and reset the write pointer
    void clear()
    {
        std::fill(buf.begin(), buf.end(), 0.0f);
        w = 0;
    }
};

// ==========================| The node class with all required callbacks |==========================
template <int NV> struct RingBufferExp : public data::base
{
    // Metadata Definitions ------------------------------------------------------------------------
    SNEX_NODE(RingBufferExp);

    struct MetadataClass
    {
        SN_NODE_ID("RingBufferExp");
    };

    // set to true if you want this node to have a modulation dragger
    static constexpr bool isModNode() { return false; };
    static constexpr bool isPolyphonic() { return NV > 1; };
    // set to true if your node produces a tail
    static constexpr bool hasTail() { return false; };
    // set to true if your node doesn't generate sound from silence and can be suspended when the input signal is silent
    static constexpr bool isSuspendedOnSilence() { return false; };
    // Undefine this method if you want a dynamic channel count
    static constexpr int getFixChannelAmount() { return 2; };

    // Define the amount and types of external data slots you want to use
    static constexpr int NumTables = 0;
    static constexpr int NumSliderPacks = 0;
    static constexpr int NumAudioFiles = 0;
    static constexpr int NumFilters = 0;
    static constexpr int NumDisplayBuffers = 0;

    // components
    double sampleRate = 48000.0;
    RingDelay rd;

    // Helpers -------------------------------------------------------------------------------------
    // Converts milliseconds to samples and returns an integer
    static inline int msToSamplesInt(float ms, double fs)
    {
        return (int)std::ceil(ms * fs / 1000.0);
    }

    // Converts milliseconds to samples and returns a float
    static inline float msToSamplesFloat(float ms, double fs)
    {
        return (float)(ms * fs / 1000.0);
    }

    // Scriptnode Callbacks ------------------------------------------------------------------------
    void prepare(PrepareSpecs specs)
    {
        // update the sampleRate member to the current sample rate
        sampleRate = specs.sampleRate;
        // we arbitrarily invent a pad guard number to add to the length of the ring delay
        const int guard = 128;
        // set the size using msToSamplesInt because setSize expects an integer
        rd.setSize(msToSamplesInt(10000.0f, sampleRate) + guard);
    }

    void reset() { }

    void handleHiseEvent(HiseEvent& e) { }

    template <typename T> void process(T& data)
    {
        static constexpr int NumChannels = getFixChannelAmount();
        // Cast the dynamic channel data to a fixed channel amount
        auto& fixData = data.template as<ProcessData<NumChannels>>();
        // Create a FrameProcessor object
        auto fd = fixData.toFrameData();

        while (fd.next())
        {
            // Forward to frame processing
            processFrame(fd.toSpan());
        }
    }

    template <typename T> void processFrame(T& data)
    {
        // Separate out the stereo input
        float L = data[0];
        float R = data[1];
        // Calculate the delay time we want in samples - expects float
        float dTime = msToSamplesFloat(2000.0f, sampleRate);
        // read the value according to the delay time we just set up
        float dR = rd.readCubic(dTime);
        // Push just the right channel into our ring delay
        // remember, this will auto-increment the write pointer
        rd.push(R);
        // Left channel - write the original audio back to the datastream
        data[0] = L;
        // Right channel - write the delayed audio back to the datastream
        data[1] = dR;
    }

    int handleModulation(double& value) { return 0; }

    void setExternalData(const ExternalData& data, int index) { }

    // Parameter Functions -------------------------------------------------------------------------
    template <int P> void setParameter(double v)
    {
        if (P == 0)
        {
            // This will be executed for MyParameter (see below)
            jassertfalse;
        }
    }

    void createParameters(ParameterDataList& data)
    {
        {
            // Create a parameter like this
            parameter::data p("MyParameter", { 0.0, 1.0 });
            // The template parameter (<0>) will be forwarded to setParameter<P>()
            registerCallback<0>(p);
            p.setDefaultValue(0.5);
            data.add(std::move(p));
        }
    }
};
}
-
RE: c++ function optimization using vectorization or SIMD???
@griffinboy said in c++ function optimization using vectorization or SIMD???:
It's more popular nowadays to store waveforms in frequency domain using FFT, and to silence bins above Nyquist before inverse FFT. Either that or use filters to make mipmaps (multiple copies of your waveform at different pitches, with antialiasing filters applied, baked into the copies, play back the appropriate pre-antialiased file for the pitch) optionally doing so at 2x oversampling and using additional interpolation to remove aliasing that happens from extra processes that happen in Realtime.
Cheers dude! I was aware of this, but I wanted to see how far I could get with sinc. Turns out, quite far! I've got 22% CPU usage for about 30 voices now. Which isn't really super optimal, but it was a fun project.
That paper you linked me a while back - https://www.mp3-tech.org/programmer/docs/resampler.pdf - was what got me interested.
I think I understand the process you mean though, for the mipmapping approach. Something like:
- Oversample original audio x2 (juce::dsp::oversampling can handle this)
- Set up a root note
- For mip-maps below the root note - lowpass and downsample (dsp::FilterDesign::designFIRLowpassWindowMethod then keep every 2nd sample)
- For mip-maps above the root note - upsample and then lowpass (use the same oversampling approach here for the upsampling and then the same kind of FIR filter???)
- Store each level, and then move on to the playback engine
I think that'd be the approach??
Playback engine-wise, I'd still need to have an interpolation method to playback notes in between the mipmap levels I would guess. Can Hermite cover this, or do I need to go polyphase still?
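The lowpass-then-decimate step for one mipmap level below the root could be sketched like this. This is a hedged stand-in, not the real thing: `makeLowpass` is a plain windowed-sinc FIR in place of `dsp::FilterDesign::designFIRLowpassWindowMethod`, and `downsampleByTwo` just convolves and keeps every 2nd sample, so all the names here are my own:

```cpp
#include <vector>
#include <cmath>

// Windowed-sinc lowpass FIR (Hann window). cutoff is a fraction of the
// sample rate, so 0.25 puts the cutoff at half of Nyquist - what you
// want before throwing away every other sample.
static std::vector<float> makeLowpass(float cutoff, int numTaps)
{
    const double pi = 3.14159265358979323846;
    std::vector<float> h(numTaps);
    const double M = numTaps - 1;
    double sum = 0.0;

    for (int i = 0; i < numTaps; ++i)
    {
        double t = i - M * 0.5;
        double sinc = (t == 0.0) ? 2.0 * cutoff
                                 : std::sin(2.0 * pi * cutoff * t) / (pi * t);
        double hann = 0.5 - 0.5 * std::cos(2.0 * pi * i / M);
        h[i] = (float)(sinc * hann);
        sum += h[i];
    }

    for (auto& c : h) c = (float)(c / sum);  // normalise DC gain to 1
    return h;
}

// One mipmap level down: filter at half Nyquist, keep every 2nd sample.
static std::vector<float> downsampleByTwo(const std::vector<float>& in)
{
    auto h = makeLowpass(0.25f, 63);
    std::vector<float> out;

    for (size_t n = 0; n < in.size(); n += 2)
    {
        double acc = 0.0;
        for (size_t k = 0; k < h.size(); ++k)
            if (n >= k) acc += h[k] * in[n - k];   // convolve (zero-padded at the start)
        out.push_back((float) acc);
    }
    return out;
}
```

Calling this repeatedly on each result would give the stack of levels below the root; the levels above the root would be the mirror-image process (upsample, then filter).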
Latest posts made by Orvillain
-
RE: Force mono node?
@Christoph-Hart said in Force mono node?:
Not sure I understand, where do you do the switch? And why do you want to switch if you know that a node is only processing a single channel?
If your node is known to be mono, then just process the first channel in the implementation, or if you want to force a generic node to only process the first channel, you can use the
wrap::fix
template to force mono processing.

// instead of this member declaration
MyClass obj;

// use
wrap::fix<1, MyClass> obj;
Your code is probably throwing an assertion at
auto* right = block.getChannelPointer(1);
if you have created the audio block from a single channel process data object.
I was thinking in situations where I want to do a tubescreamer emulation, for example. I don't want true stereo processing.
But maybe I'm overthinking it, and I just do a stereo node... collapse the channels... process the result... write the result back to output left and output right, with some gain compensation?
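That collapse-then-duplicate idea could look like this at the frame level. A sketch only: `processMono` is a hypothetical stand-in for the actual effect, and the 0.5 sum gain is just one choice of compensation (it keeps a correlated stereo pair at unity; you might prefer 1/sqrt(2) for uncorrelated material):

```cpp
#include <cmath>

// Hypothetical mono effect - stands in for a tubescreamer-style process.
static float processMono(float x)
{
    return std::tanh(2.0f * x);   // simple soft clip
}

// Per-frame mono collapse: sum to mono with 0.5 gain compensation,
// process once, then write the result to both output channels.
static void processFrameMono(float* frame /* frame[0]=L, frame[1]=R */)
{
    float mono = 0.5f * (frame[0] + frame[1]);
    float out = processMono(mono);
    frame[0] = out;
    frame[1] = out;
}
```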
-
Force mono node?
Sometimes you know that you definitely want a node to only process in mono.
So if I do:
static constexpr int getFixChannelAmount() { return 1; };
And then later on:
template <typename T> void process(T& data)
{
    auto& fd = data.template as<ProcessData<getFixChannelAmount()>>();
    auto block = fd.toAudioBlock();
    auto* left = block.getChannelPointer(0);
    auto* right = block.getChannelPointer(1);
    numSamples = data.getNumSamples();

    // change "effect" object to StereoAudioEffect
    // and switch to this if stereo
    //effect.process(left, right, numSamples);

    // change "effect" object to MonoAudioEffect
    // and switch to this if mono
    //effect.process(left, numSamples);
}
I would have two separate effect processor classes; MonoAudioEffect and StereoAudioEffect. Essentially wrappers around some child DSP objects.
Is this a legal approach to effectively switching between mono versus stereo processing? Should I be copying the left channel to the right channel to make things consistent with the rest of scriptnode?
-
More audio export options
I know this is super boring low-hanging fruit, but could we get a few extra options here? I'd really like 30 seconds, 1 minute, 2 minutes, and a manual option if possible, where you can hit an on-screen stop button when you're ready.
-
RE: OSC Filter caps at 1.9khz
Looks like your base value is not fully open on the filter itself. IIRC, the matrix modulator takes the base value as the maximum possible value.
Open the filter up to 20k, and then your modulation range will sweep the full range.
-
RE: Matrix Modulation Feedback
@Christoph-Hart said in Matrix Modulation Feedback:
@Orvillain now you have another reason. Also all dialogs / other inbuilt components will not scale correctly like this.
Ah cool beans. Yeah that was trivial to implement in my existing code, and actually makes the whole thing a lot simpler. Sorry, I thought the whole thing was originally a bug to do with the matrix mod controller and how it processed css - hence tagging onto this thread!
The only slight loss is that my approach was working inside HISE directly, without having to compile the plugin in order to double-check zoom/rescale performance.
-
RE: Matrix Modulation Feedback
@Christoph-Hart said in Matrix Modulation Feedback:
@Orvillain wait, how do you rescale your UI? Settings.setZoomFactor() should scale all fonts correctly.
No, I'm not doing that. I have a custom scaling namespace that does a bunch of math, but ultimately what it does is update the properties of any UI widget - so x, y, w, h, fontSize, etc.
This kind of thing:
inline function rescaleAll()
{
    // Order of operations:
    // Resize interface
    // Re-apply panel bounds
    // Re-apply control bounds
    // Re-apply control style props (font, radii, etc.)
    // Reposition zoom handle

    // Resize the window
    Content.makeFrontInterface(SharedData.getScaledWidth(), SharedData.getScaledHeight());

    // Panels first (so parents are sized before children)
    for (i = 0; i < _layout.length; i++)
        _applyScaledBounds(_layout[i]);

    // Controls: bounds then props
    for (i = 0; i < _controls.length; i++)
    {
        _applyScaledBounds(_controls[i]);
        _applyScaledProps(_controls[i]);
    }

    LogicalZoom.onRescaled();
}
Effectively, I'm scaling everything manually. Not using the automatic zoom factor way to do it. I don't have a reason other than coder ego!!
And this all works fine, but setting the font-size property in the modulation controller didn't seem to update in any of the ways I tried.
-
RE: Matrix Modulation Feedback
@Orvillain I solved my issue by making my font size a percentage of the dragger, but it was a bit weird. Everything I tried simply would not update the font size. Even doing a setStylePropertyblahblahblah call on the LAF object didn't work.
-
RE: CSS custom variables not taken by the ModulationMatrix
Might also be true for FloatingTile in general. At least, I can't get it to work!