Idk about the stock one, but it's something I've done using C++ in HISE.

-
RE: Possible to feed Granulator audio from outputs of my plugin?
-
RE: Remake stock Phaser Module in Scriptnode
Ah yeah yeah.
That's fine then, it's not an issue in my designs, I don't think. I end up creating systems for the sample-accurate parameters which smooth and then update the internal variables of the DSP. And sometimes I create per-sample paths for when modulation is happening. But that's very interesting, I never thought about it that way: with frame processing you could process left and right in one iteration. For sample-accurate changes that is indeed more efficient. I never thought of that.
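For context, the kind of smoothing system I mean looks roughly like this, a minimal one-pole smoother sketch in plain C++ (the names are hypothetical, not any HISE API):

```cpp
#include <cassert>
#include <cmath>

// Sample-accurate parameter changes set a target, and the DSP reads a
// smoothed value each sample instead of jumping to the raw value.
// Hypothetical helper, not part of HISE.
struct SmoothedParam
{
    float target = 0.0f;
    float current = 0.0f;
    float coeff = 0.999f; // closer to 1.0 = slower smoothing

    void setTarget(float v) { target = v; }

    // call once per sample
    float next()
    {
        current = target + coeff * (current - target);
        return current;
    }
};
```

The smoothed value then drives the internal DSP variables (filter coefficients, gains, etc.) so a sample-accurate parameter jump doesn't click.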
Edit: oh that latency thing is interesting though I'll have to investigate. Although I've never used a routing node.
-
RE: Remake stock Phaser Module in Scriptnode
About frame processing, why do you recommend it here? I've yet to find many use cases for frame processing aside from cases where you need access to both channels simultaneously to process each sample... You said something to do with smoothing? I'm just curious
The default C++ node has frame processing set up, same as the SNEX node. Yet I always end up just using process() for 95% of my DSP, because I can vectorize each channel and do things much more efficiently when not interleaving them.
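To illustrate the difference, here is a minimal sketch of the two styles in plain C++ (signatures are illustrative, not the actual scriptnode callbacks):

```cpp
#include <cassert>

// Block processing loops each channel in its own contiguous run,
// which compilers auto-vectorize easily. Frame processing hands you
// one sample from every channel at once, which you only need when
// the channels must interact per sample.

constexpr float gain = 0.5f;

// per-channel block processing: one tight, vectorizable loop per channel
void processBlock(float** channels, int numChannels, int numSamples)
{
    for (int ch = 0; ch < numChannels; ++ch)
        for (int i = 0; i < numSamples; ++i)
            channels[ch][i] *= gain;
}

// frame processing: called once per sample with all channels interleaved
void processFrame(float* frame, int numChannels)
{
    for (int ch = 0; ch < numChannels; ++ch)
        frame[ch] *= gain;
}
```

Both produce the same result; the block version just gives the compiler long contiguous runs to work with.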
-
Add XSIMD to hi_tools by default?
Can we have a better way to include XSIMD in a C++ node please?
I find myself using XSIMD more and more lately in my C++ nodes (for filters), and currently, for every developer I hand my code to, I have to get them to rebuild HISE with the following edit to hi_tools:
#include "hi_neural/RTNeural/modules/xsimd/xsimd.hpp"
It would be really helpful if this could be included via an easier route! Either included in hi_tools by default, or via a flag, or via an include in the c++ nodes themselves.
Would that be possible? It's a change that basically only I would ask for, but still
I will ask
-
RE: machine learning to capture analog tech
AP Mastering is not to be believed; in his tests he has bad luck and chooses many plugins that have been half-assed and are not proper analog circuit models, whereas there are many good analog model products out there. Either that, or he downplays the differences. From discussing with him, his stance is not that analog model products are a sham anyway; his stance is that he prefers digital tools which are predictable and don't cause distortion. He admitted to creating his videos in such a way as to clickbait for the algorithm / be polarizing. That's where the whole 'scam' thing came from, although he is right that many analog modelling plugins are not very good and don't match the real devices very well.
That out of the way, machine learning for analog simulation is mostly about creating digital models of individual circuit components, or small circuits. For example, you can train a neural network on the response of an EC66 triode tube and it will create an accurate digital model which runs much more efficiently than solving the maths by brute force, and it may be more stable (so you may be able to eliminate recursive solvers by using a neural model). This can be done for whole circuits too (but you run into aliasing issues). Then you can insert the AI model into your regular circuit simulation. The boon is more accuracy and less computational cost, all at once.
The other thing you can do is try and train an AI to predict an entire device. For example, you have a gigantic dataset of gain reduction from an analog compressor, for all its knob settings. The AI can then be trained to match this dataset and produce the correct curves, learning the nuances of the device.
It's easier said than done. Many problems arise; getting the AI to be correct and getting it to run efficiently are two huge challenges.
There are research papers on the topic; just search for the word 'differentiable' alongside analog modelling, or a type of device, e.g. compressor.
-
RE: Wavetable Synth Hard Sync?
Might be of interest to you, but I've been working on a C++ WT engine for HISE. It works and includes all the mentioned features.
The engine is not available yet to the public though, it's still being developed as a commission.
-
RE: Wavetable Synth Hard Sync?
As long as the wavetable synth does upsampling after setting the playback position, it should work, I think. It depends a little on how the WT synth handles its anti-aliasing.
-
RE: Multiple Global Data Cables - only the first one gets a runtime target
Yep that'll be it.
I actually wish it worked differently, but currently scriptnode C++ synth nodes only call process() per active voice. So if there are no voices, it won't call process() (or, subsequently, processFrame()).
Additionally, say there are 3 voices playing: that will mean your process script gets called 3 times as frequently. Each voice will run its own process().
You can check which voice process() is currently running for using polydata, which allows you to do different processing per voice.
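A generic sketch of that per-voice model in plain C++ (this is illustrative, not the real HISE polydata API):

```cpp
#include <cassert>
#include <vector>

// Each voice gets its own copy of the node state, and the render
// callback runs once per active voice. Names here are hypothetical.
struct VoiceState { float phase = 0.0f; };

struct PolySketch
{
    std::vector<VoiceState> voices;
    int processCalls = 0;

    PolySketch() : voices(8) {} // fixed voice pool

    // pretend render callback, invoked once per active voice
    void process(int voiceIndex)
    {
        voices[voiceIndex].phase += 0.01f; // touches per-voice state only
        ++processCalls;
    }

    void renderBlock(const std::vector<int>& activeVoices)
    {
        // with 3 voices active, process() runs 3 times per block;
        // with no voices active, it never runs at all
        for (int v : activeVoices)
            process(v);
    }
};
```

That is the behaviour described above: no active voices means no process() calls, and three voices means three calls per block, each on its own state.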
-
RE: Multiple Global Data Cables - only the first one gets a runtime target
It should work fine. I think it works. I might be wrong though, I haven't used them in a synth recently.
Process() only runs when voices are playing in a scriptnode synth. But that shouldn't affect the callbacks.
-
RE: Multiple Global Data Cables - only the first one gets a runtime target
Oh yeah, I made this mistake a while ago XD
I use a template for my c++ nodes and it has a helper baked in with temp IDs for the cables. I forgot you have to generate the IDs from the project!
-
RE: need a perfect Pitch Shifter
This is impossible if you are asking for real-time pitch shifting.
think about it:
"It needs to 'play faster' when the pitch is transposed up"
^ For that to work, it needs to read the future. This only works non-realtime, where the entire sample is stored, so the program can literally look into the future.
But in realtime, the future hasn't happened yet, so it can't possibly play the sound faster. You get the opposite problem when pitch is down: you have to store in memory everything that has happened, and slowly play it back. In this situation you will never catch up to the live input unless you switch to playing faster. If you keep the pitch down, it will have to store more and more until potentially all your memory or storage is used up.
The only way to cheat this fact is by introducing lots of latency, and then modulating pitch so that it goes forwards in time and then backwards by the same amount. Alternating between slow and fast means that it pushes and pulls, and so doesn't endlessly read into the future (which is impossible without time travel, or waiting for the future to happen before playing audio). What I just described is how pitch modulation on delay plugins and on tape wow plugins works (e.g. Baby Audio Drift, Valhalla Delay). Delay-line modulation.
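A toy sketch of that delay-line modulation idea in plain C++ (my own illustrative code, not from any plugin): ramping the delay time changes the read-head speed relative to the write head, which is exactly a pitch/speed change, and a constantly growing delay can't grow forever, which is why real plugins sweep it back and forth.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Minimal modulated delay line with linear interpolation.
struct ModDelay
{
    std::vector<float> buf;
    int writePos = 0;

    explicit ModDelay(int size) : buf(size, 0.0f) {}

    // write one input sample, read one output sample from a
    // (possibly fractional, possibly moving) delay behind it
    float process(float in, float delaySamples)
    {
        buf[writePos] = in;
        float readPos = (float)writePos - delaySamples;
        while (readPos < 0.0f) readPos += (float)buf.size();
        int i0 = (int)readPos;
        int i1 = (i0 + 1) % (int)buf.size();
        float frac = readPos - (float)i0;
        float out = buf[i0] + frac * (buf[i1] - buf[i0]);
        writePos = (writePos + 1) % (int)buf.size();
        return out;
    }
};
```

Feeding it a ramp signal while the delay time itself ramps at half the sample rate makes the read head crawl at half speed: the output plays the input back slowed down, i.e. pitched down, and the delay keeps growing the whole time, which is the "never catch up" problem from above.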
Realtime pitch-shifting methods that can 'stay' transposed up or down include: granular, FFT, autocorrelation, etc.
And these methods are generally known for the metallic artefacts you described. You need to think carefully about how your pitch modulation plugin works.
Because depending on what it needs to be able to do, there will be a 'best method' to go about it.
-
RE: Is it possible to create a midi note from ScriptNode/c++ node and direct it to a synth in the module tree?
Oh yeah, no, in my case I was sending data that was one and a half million floats long down a cable. Although it never actually caused a CPU spike, so maybe I didn't have to throttle it.
I guess I'm just being careful. Then again, that was async, which is probably why it works; it's already on a background thread, I'm guessing.
-
RE: How does a custom c++ interface with one of its display buffers?
Yeah, it's possible. Take a look at the source for external data.
You can write to display buffers. But be warned, these are ring buffers, and so personally I've had issues with them wrapping and having the wrong size, stuff like that. Christoph can probably set you straight. But the methods for writing into the data slot are in this header, I think!
-
RE: Global Cables Don't Work when compiled
They said it would be better to overhaul using bgfx directly xD
Yeah, not a small task probably. Ah well.
-
RE: Global Cables Don't Work when compiled
mmm, the people in the Audio Programmer Discord aren't recommending it XD
It does look like a big amount of pain.
-
RE: Global Cables Don't Work when compiled
oof, yeah that would be really great wouldn't it.
I haven't, but i'll look into it. I may have to pester him about it directly : )
I've been having some silly dependency issues, even getting it to work normally. I'm new to CMake and it's not super fun to learn xD I'll see if I can get that to happen.
-
RE: Global Cables Don't Work when compiled
Based.
I've been gradually working my way through the scriptnode / snex source.
And the amount of stuff in there is incredible.
It really makes me think that so much is possible. You've built out so many nice systems indeed. I started my own fork of JUCE to try and get around graphics limitations in HISE (using visage), but looking at your source, I could never flesh out something so expansive.
makes me want to just wait for an eventual future update haha
-
RE: Is it possible to create a midi note from ScriptNode/c++ node and direct it to a synth in the module tree?
Wait, we can do synchronous global cables?
How fast will this be? We aren't talking sample-accurate, audio-thread-type fast, are we? Someone told me recently that cables are not high priority speed-wise.
I decided they aren't for anything fast anyway, due to the copying involved. Except for MIDI it kind of makes sense; I can see it being powerful for C++ sequencers, so I'm curious.
Then again, for normal sequencers we can just send an array back and HISE can play from it. I guess this is only for sequencing that needs to be really realtime for some reason.
-
RE: Summing plugins, snake oil or useful?
Yes, I'm probably speaking about my own musical interests.
Much of my own music is either very digital intentionally, or hyper-analog where I need 50s equipment.
Transistor-based gear isn't used at all in my music, so maybe I'm disregarding it just because of that. Not to mention it's freaking hard to model multiple transistors! Lol