That sounds challenging to do in HISE.
HISE is designed for virtual instruments and synthesisers. You'll definitely need C++ to get this working in HISE, and you'll have to do all the heavy lifting yourself.
@vyexx
Crystallizer is a real-time granular delay.
You may not be able to do that in pure scriptnode; I think you'll need to write it in C++.
Good idea. I'll make one soon
The video quality is quite terrible, but this is the C++ node system in HISE.
I think Christoph would know for sure.
I did manage to do it, but it had inconsistent quirks that seemed to be related to the display buffer length not matching what I was sending it (holding on to a longer history).
So I'm hesitant to share an incomplete solution; I'm not sure how resizing it works.
You might want RMS scaling to start with.
The scriptnode wet/dry template starts with a linear xfade; you can change the option to RMS in the drop-down after loading the template. This will keep the volume consistent when mixing (no more volume dip at 50% wet/dry).
Then you can change the minimum and maximum volumes for the wet and dry mix by altering the range on the volume nodes the xfade is attached to.
If you want specific control, you can always manually control the volume of each path using scripting, and use whatever math transfer function you want to decide the volumes of the two paths.
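As a sketch of that last point, here is a minimal pair of transfer functions you could drive from a script. The names and the sin/cos curve choice are my own for illustration, not the template's internals; the equal-power (RMS-style) version keeps the combined level roughly constant, where the linear one dips at 50%:

```cpp
#include <cmath>

struct MixGains { float dry; float wet; };

// Linear crossfade: total power dips around mix = 0.5
inline MixGains linearMix(float mix)
{
    return { 1.0f - mix, mix };
}

// Equal-power crossfade: dry^2 + wet^2 == 1 for every mix position
inline MixGains equalPowerMix(float mix)
{
    const float angle = mix * 1.57079632679f; // pi / 2
    return { std::cos(angle), std::sin(angle) };
}
```

At mix = 0.5 the equal-power version gives both paths a gain of about 0.707 (-3 dB), so the summed power stays at unity instead of dipping.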
I think it struggles in that case, unless it's coded in C++.
This is a decent situation to use ChatGPT.
Question it about ways to design this efficiently and robustly. It's basically a two-voice system: you can think of the buffers as voices, where you fade between them and then kill the old voice.
The thing you must make sure of when using this xfade method is getting it to update to the latest value. For instance, if you make a bunch of small changes, it needs to finish the first crossfade and then check whether the current value matches what it should be (since you may have moved it again during its xfade). At the end of the xfade this check just needs to be inserted, retriggering another fade if needed.
GPT will be able to outline a good design if you push it to be really 'ideal, efficient and using best practices'.
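The retrigger-at-end-of-fade check described above could look something like this. A minimal sketch; all names are hypothetical, not a HISE API:

```cpp
struct ShapeCrossfader
{
    int  targetShape   = 0;  // latest shape the UI requested
    int  activeShape   = 0;  // shape currently heard
    int  fadingToShape = 0;  // shape of the incoming buffer
    int  fadePos       = 0;  // samples into the current fade
    int  fadeLength    = 64;
    bool fading        = false;

    void requestShape(int s)
    {
        targetShape = s;                 // just record the latest request
        if (!fading && s != activeShape)
            startFadeTo(s);
    }

    void startFadeTo(int s) { fadingToShape = s; fadePos = 0; fading = true; }

    // Call once per sample; returns true while a fade is in progress.
    bool tick()
    {
        if (!fading)
            return false;

        if (++fadePos >= fadeLength)
        {
            activeShape = fadingToShape;
            fading = false;

            // The check from the post: if the target moved during the
            // fade, immediately retrigger another fade.
            if (activeShape != targetShape)
                startFadeTo(targetShape);
        }
        return fading;
    }
};
```

The key detail is that requests arriving mid-fade only update `targetShape`; the comparison at the end of the fade is what launches the catch-up fade.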
Instead, can't you just crossfade?
Keep a buffer with the old waveshaper applied and a buffer with the new waveshaper shape applied, then do a linear volume crossfade between the old and new buffer over 64 samples.
Basically every time you create a new shape / tweak the shape, you fade out the current buffer while fading in a new buffer that is using the new shape.
This system will give you no clicks no matter how different the waveshaper shapes are.
AI will happily gaslight you.
It'll tell you all sorts of lies, and it's often unable to acknowledge its own mistakes.
You can't program using AI without understanding deeply what you are trying to make, and at least how it works on a basic level.
I use AI heavily in my coding, but I always understand the algorithms I am making. The AI is simply there because it knows C++ syntax better than me.
But it will never be able to write an algorithm as good as the one in my own head. There are too many elements and moving parts that go into making DSP; an LLM is unlikely to know all the info required. And there is DSP that is still way out of reach: even if you give it all the info and math it needs, it will still fail at things. AI is particularly bad at audio programming, I've found.
For fast modulation, fixed block processing may help, assuming the code isn't already doing per-sample updates.
For filters, oversampling can help. It's best to try with and without. Same with distortions. You won't know if you need it until you test it. High frequencies will reveal the differences.
Aliasing happens when a waveform contains high-frequency content above half the sample rate (the Nyquist frequency); those components fold back down into the audible range.
The Oversample node runs at a higher sample rate, applies a low-pass filter (above the audible range), and then downsamples. This lets us push aliasing higher up and then attempt to filter it out. It's usually used for removing aliasing from distortions. Aliasing results in harsh, spiky, digital-sounding harmonics mixed in with the original sound.
Another reason to oversample is simply that you want a higher sample rate; this is useful for 'accuracy' in the highs. High frequencies are the fast-moving parts of a waveform, so they become more detailed when we have more samples. Filters become more accurate in the highs when oversampled; otherwise they lose their shape a little. Oversampling by 2x roughly doubles the CPU cost.
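To make the folding concrete, here's a small helper (my own, not a HISE function) that computes where an out-of-band frequency aliases to at a given sample rate:

```cpp
#include <cmath>

// Folds a frequency back into the representable range [0, sampleRate/2].
inline double aliasedFrequency(double freq, double sampleRate)
{
    const double nyquist = sampleRate * 0.5;

    double f = std::fmod(freq, sampleRate); // wrap around the sample rate
    if (f > nyquist)
        f = sampleRate - f;                 // mirror back below Nyquist
    return f;
}
```

For example, a 30 kHz harmonic at 44.1 kHz folds down to 14.1 kHz, right into the audible band. At 2x oversampling (88.2 kHz) the same harmonic sits at 30 kHz, where the low-pass can remove it before downsampling.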
Fixed block is for processing samples in smaller batches. It lowers the latency, but it also forfeits the optimization you get from passing larger blocks of data around. Instead, the program now has to send lots of smaller batches to process, which means you use more CPU.
Many DSP programs use 'per block' processing; for example, a compressor may calculate gain once per block instead of per sample. Making the blocks smaller therefore gives you higher time resolution.
However, if your node is already doing things per-sample, it makes practically no difference.
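A toy illustration of per-block gain computation. The detector here (block peak, hard limiting above a threshold) is made up for brevity, not how a real compressor works; the point is that one gain value is applied to every sample in a block, so smaller blocks mean more frequent gain updates:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

inline void processPerBlock(std::vector<float>& audio,
                            int blockSize, float threshold)
{
    for (size_t start = 0; start < audio.size(); start += (size_t) blockSize)
    {
        const size_t end = std::min(audio.size(), start + (size_t) blockSize);

        // One detector pass per block...
        float peak = 0.0f;
        for (size_t i = start; i < end; ++i)
            peak = std::max(peak, std::abs(audio[i]));

        // ...and one gain value applied to the whole block.
        const float gain = (peak > threshold) ? threshold / peak : 1.0f;
        for (size_t i = start; i < end; ++i)
            audio[i] *= gain;
    }
}
```

With a block size of 1 this degenerates into per-sample processing, which is why fixed block makes no difference for nodes that already update per sample.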
@LozPetts
Idk about the stock one, but it's something I've done using C++ in HISE.
Ah yeah yeah.
That's fine then; it's not an issue in my designs, I don't think. I end up creating systems for the sample-accurate parameters, which smooth and then update the internal variables of the DSP. And sometimes I create per-sample paths for when modulation is happening.
But that's very interesting, I never thought about it that way: you could process left and right in one iteration with frame processing. For sample-accurate changes that is indeed more efficient. I never thought of that.
Edit: oh that latency thing is interesting though I'll have to investigate. Although I've never used a routing node.
About frame processing, why do you recommend it here? I've yet to find many use cases for frame processing aside from cases where you need access to both channels simultaneously to process each sample... You said something to do with smoothing? I'm just curious
The default C++ node has frame processing set up, same as the SNEX node. Yet I always end up just using process() in 95% of my DSP, because I can vectorize each channel and do things much more efficiently when not interleaving them.
Can we have a better way to include XSIMD in a C++ node please?
I find myself using XSIMD more and more lately in my C++ nodes (for filters), and currently, for every developer I hand my code to, I have to get them to rebuild HISE with the following edit to hi_tools:
#include "hi_neural/RTNeural/modules/xsimd/xsimd.hpp"
It would be really helpful if this could be included via an easier route! Either included in hi_tools by default, or via a flag, or via an include in the c++ nodes themselves.
Would that be possible? It's a change that basically only I would ask for, but still I will ask
AP Mastering is not to be believed; in his tests he has bad luck and chooses many plugins that have been half-assed and are not proper analog circuit models, whereas there are many good analog model products out there. Either that, or he downplays the differences. From discussing with him, his stance is not that analog model products are a sham anyway; his stance is that he prefers digital tools which are predictable and don't cause distortion. He admitted to creating his videos in such a way as to clickbait for the algorithm / be polarizing. That's where the whole 'scam' thing came from, although he is right that many analog modelling plugins are not very good and don't match the real devices very well.
That out of the way: machine learning for analog simulation is mostly about creating digital models of individual circuit components, or small circuits. For example, you can train a neural network on the response of an EC66 triode tube, and it will create an accurate digital model which runs much more efficiently than brute-force mathematical simulation, and may be more stable (so you may be able to eliminate recursive solvers by using a neural model). This can be done for whole circuits too (but you run into aliasing issues). Then you can insert the AI model into your regular circuit simulation. The boon is more accuracy and less computational cost, all at once.
The other thing you can do is try to train an AI to predict an entire device. For example, you have a gigantic dataset of gain reduction from an analog compressor, for all its knob settings. The AI can then be trained to match this dataset and produce the correct curves, learning the nuances of the device.
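Once trained offline, the runtime side of such a model can be as cheap as a small function evaluation. Here's a toy one-hidden-layer sketch with made-up weights; a real model would load trained weights for the device and likely use more inputs (knob settings, history):

```cpp
#include <array>
#include <cmath>

// 1 input -> 4 hidden units (tanh) -> 1 output.
// The weights are placeholders, not a trained tube or compressor model.
struct TinyMLP
{
    std::array<float, 4> w1; // input -> hidden weights
    std::array<float, 4> b1; // hidden biases
    std::array<float, 4> w2; // hidden -> output weights
    float b2;                // output bias

    float evaluate(float x) const
    {
        float y = b2;
        for (int i = 0; i < 4; ++i)
            y += w2[i] * std::tanh(w1[i] * x + b1[i]);
        return y;
    }
};
```

The whole evaluation is a handful of multiply-adds and tanh calls per sample, which is the efficiency win over solving the circuit equations numerically.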
It's easier said than done. Many problems arise and getting the AI to be both correct and run efficiently are two huge challenges.
There are research papers on the topic; just search for the word 'differentiable' alongside analog modelling, or a type of device, e.g. compressor.
Might be of interest to you, but I've been working on a C++ WT engine for HISE. It works and includes all the mentioned features.
The engine is not available yet to the public though, it's still being developed as a commission.
As long as the wavetable synth does upsampling after setting the playback position, it should work, I think. It depends a little on how the WT synth handles its antialiasing.