I'm pretty sure you can tap out of the global cable value on the Hise gui and use that value to set the value of your GUI slider.
That will work right?
We do have it in C++ nodes (a function called prepare() gets called with the sample rate whenever it changes).
If you need HISE to know about sample rate changes, you can make a C++ node that uses a global cable to trigger a HISE global cable callback when the sample rate changes...
But yeah that's obviously a workaround.
Maybe Christoph can add a Hise function for this so that there is an easier way.
But if it's something you absolutely need, I can show you how to set it up.
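To make the workaround concrete, here's a minimal sketch of the pattern. HISE's types are stubbed out so it compiles standalone; in a real third-party node, PrepareSpecs comes from the scriptnode API, and the callback below is a hypothetical stand-in for the actual global cable connection.

```cpp
#include <functional>

struct PrepareSpecs { double sampleRate = 0.0; }; // stub of HISE's struct

struct SampleRateNotifierNode
{
    double lastSampleRate = 0.0;

    // hypothetical hook standing in for a global cable connection
    std::function<void(double)> onSampleRateChanged;

    // prepare() is called by the host whenever processing specs change
    void prepare(const PrepareSpecs& specs)
    {
        if (specs.sampleRate != lastSampleRate)
        {
            lastSampleRate = specs.sampleRate;
            if (onSampleRateChanged)
                onSampleRateChanged(lastSampleRate); // -> global cable -> HISE script callback
        }
    }
};
```

The node only fires the callback when the rate actually changes, so the HISE-side script callback isn't spammed on every prepare() call.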
That's right! Sorry I didn't mention: the File Player is how you'd play back audio in scriptnode.
It does process the incoming audio.
He said he wants noise audio files to react to the volume of the incoming audio. That's the effect he's describing, I believe.
The wording of the post was misleading / vague
Effect chain:
Input audio -> measure activity / volume -> apply that to a noise file / trigger noise sample -> sum both to output
Unless I've misunderstood!
@David-Healey
Ah no, I think he's describing an effect that adds noise on top of existing audio.
It's an effect not an instrument.
Oh yeah, definitely possible.
And the way you should go about it depends on how complex you want the algorithm to be.
The simplest way to achieve what you want is to use scriptnode.
You can use an envelope follower to track the volume level of incoming audio, and then use that to modulate the volume of a looper that is continuously playing your noise loop.
I don't know the exact chain: I don't use scriptnode much. But I know that this is possible without too much work involved.
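The core of that idea can be sketched in plain C++ (not scriptnode): a one-pole envelope follower tracks the input level, and that level scales the noise sample by sample. The coefficient values here are illustrative, not tuned.

```cpp
#include <cmath>

struct EnvelopeFollower
{
    double attack = 0.99;   // closer to 1.0 = slower response
    double release = 0.999;
    double env = 0.0;

    // Smooth the rectified input with different attack/release speeds
    double process(double input)
    {
        const double level = std::fabs(input);
        const double coeff = (level > env) ? attack : release;
        env = coeff * env + (1.0 - coeff) * level;
        return env;
    }
};

// Effect output = input plus envelope-scaled noise
double processSample(EnvelopeFollower& follower, double input, double noise)
{
    return input + follower.process(input) * noise;
}
```

When the input is silent the envelope decays toward zero, so the noise fades out with it.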
If you want to make something more realistic, where you've got more control over the crackles and want to build a system that can trigger noises to play at certain times and under certain conditions, I'd personally do that kind of thing using a C++ third-party node. You'd code a node in C++ so that it can load and play back samples (using an external data sample map) and has all the input detection and crackle logic, and then in HISE you just have this single effect node that does the entire thing you're after.
You get ultimate control this way. But if you've not done c++ effects before there is some learning to do.
Your description is a little confusing, can you explain what exactly it is you're trying to build?
No clue, I'm afraid! I've yet to really get into the fray of neural network training; I've only done rough experiments.
There's a Discord group for RTNeural you can find online. Setup questions can be asked there; it may be useful.
RTNeural is just a library that can run small neural networks you've already trained (efficient real-time inference is the point of RTNeural; it stands for "real time neural"). I'm not sure it has any tools for doing training itself, unless it's changed a lot recently.
You train a neural network using a separate training pipeline like TensorFlow or PyTorch, which results in a set of weights in a form that other neural network software can load. You can then load that data into RTNeural, and it uses the weights you've given it to run the network in real time.
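To make the training/inference split concrete, here's a toy sketch in plain C++. The weights vector stands in for data exported from TensorFlow/PyTorch; the function applies it as a single dense neuron. RTNeural's real API is different; this is just the concept.

```cpp
#include <vector>
#include <cstddef>

// One dense neuron with linear activation: y = sum(w[i] * x[i]) + bias.
// The weights and bias come from training; inference just applies them.
double denseNeuron(const std::vector<double>& weights, double bias,
                   const std::vector<double>& input)
{
    double y = bias;
    for (std::size_t i = 0; i < weights.size() && i < input.size(); ++i)
        y += weights[i] * input[i];
    return y;
}
```

A real network stacks many layers of this with nonlinear activations, but the division of labour is the same: training produces numbers, the runtime applies them.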
@boim98208 I think it can be done with event programming: when you get a note-on, you ignore that note and instead create your own artificial note event, but you add some fine-tune pitch to the note message.
Right but the math works the same way.
All you need to do is find the right hise functions in the API for the math calls.
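The math itself is just the equal-temperament ratio. A sketch in plain C++ (the HISE API call names will differ):

```cpp
#include <cmath>

// Convert a detune amount in cents to a playback-rate (pitch) factor:
// 100 cents = 1 semitone, 1200 cents = 1 octave.
double centsToPitchFactor(double cents)
{
    return std::pow(2.0, cents / 1200.0);
}
```

So a +1200 cent offset doubles the playback rate, and +100 cents gives the familiar twelfth root of two (about 1.0595).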
Haha, we aren't there yet.
AI is a fool when it comes to Analog Modelling.
But maybe someday.
This is pseudo code.
It's something like this
// Equal-power wet/dry crossfade (pseudocode)
// knob range
value = 0 to 127
// normalize
t = value / 127.0
// equal-power law
// sin/cos keep total power constant
wet_lin = sin(t * PI * 0.5)
dry_lin = cos(t * PI * 0.5)
// clamp near zero first (avoids log(0) = -inf)
if (wet_lin < 0.00001) wet_lin = 0.00001
if (dry_lin < 0.00001) dry_lin = 0.00001
// linear → dB
// dB = 20 * log10(gain)
// log10(x) = ln(x) / ln(10)
wet_db = 20 * ln(wet_lin) / ln(10)
dry_db = 20 * ln(dry_lin) / ln(10)
// apply
WET.Gain = wet_db
DRY.Gain = dry_db
/*
Behaviour:
t=0
dry=1.0 (0 dB)
wet=0.0 (-inf)
t=0.5
dry≈0.707 (-3 dB)
wet≈0.707 (-3 dB)
Summed signal power stays constant across the crossfade,
so the centre position does not sound quieter.
*/
This is the kind of question that AI can help you understand.
Ask chat gpt about it until you get why it works!
Wet/dry balance is usually done using a sine/cosine curve.
You don't need to do any RMS calculations for an equal power curve.
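The pseudocode above translates to something like this in plain C++ (in HISE you'd send the resulting dB values to your wet and dry gain modules):

```cpp
#include <cmath>
#include <algorithm>

struct WetDryGains { double wetDb; double dryDb; };

// Equal-power wet/dry crossfade for a 0..127 knob
WetDryGains equalPowerCrossfade(double knobValue)
{
    const double pi = std::acos(-1.0);
    const double t = knobValue / 127.0;          // normalize to 0..1

    double wetLin = std::sin(t * pi * 0.5);      // equal-power law:
    double dryLin = std::cos(t * pi * 0.5);      // sin^2 + cos^2 = 1

    wetLin = std::max(wetLin, 1.0e-5);           // clamp before log
    dryLin = std::max(dryLin, 1.0e-5);           // (avoids -inf dB)

    return { 20.0 * std::log10(wetLin),          // linear -> dB
             20.0 * std::log10(dryLin) };
}
```

At the knob's centre both channels sit at about -3 dB, which is the whole point: the summed power stays constant across the sweep.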
@DanH
Not supported.
I'm assuming at this point it must be troublesome to add scroll wheel detection, because it's been requested multiple times over the years.
@Alatar said in Audio Engine Sample Rate?:
Is HISE using an internal sample rate, which is different from the host sample rate?
Hise plugins run at the DAW sample rate.
The Hise app itself runs at the sample rate you choose in the settings.
Resampling may occur somewhere else on your PC, but Hise is trying to match whatever it's told.
The sample rate conversion is immediate. It happens during runtime.
The rate at which the sample is played back (in other words the pitch) takes project vs file sample rate into consideration.
The sampler plays back the file at the right speed for whatever sample rate the DAW reports to the plugin.
After all, a sampler that plays back sounds at a different pitch is a resampler by definition. So all the work can be done at the lowest level in the sampler, outputting the samples at the correct sample rate (speed) to begin with.
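The arithmetic behind that is just a read-position increment per output sample: the file's sample rate over the host's, times any pitch transposition. This is an illustrative sketch, not HISE's actual implementation.

```cpp
#include <cmath>

// How far the sampler advances through the file per output sample.
// 1.0 = same rate; < 1.0 = file is read slower (upsampled on playback).
double playbackIncrement(double fileSampleRate, double hostSampleRate,
                         double transposeSemitones)
{
    const double rateRatio  = fileSampleRate / hostSampleRate;
    const double pitchRatio = std::pow(2.0, transposeSemitones / 12.0);
    return rateRatio * pitchRatio;
}
```

For example, a 44.1 kHz file on a 48 kHz host is stepped through at about 0.919 samples per output sample so that the pitch stays correct.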