What is the process for writing my own module (not scriptnode)
-
I think a third party node collection (like we did with LAF), would be very useful.
-
I've been wanting to.
But the interface for 3rd party nodes is still a bit shaky.
For instance I share things on the forum, but there is setup required for each of the nodes, and these are simple nodes. If we could get a HISE feature that allows the safe importing of nodes into a project, that would be neat. At the moment you can break nodes if you drag them into a project and try to compile them; you need to actually go through the process of using the create C++ code feature, and then compile (so that HISE creates all the correct files behind the scenes), and then replace the node code with the external node...
Also any nodes (such as my recent one here) that use global cables will only work in a project where the cables exist in exactly the same way, because the C++ code inside the node references the HISE cables directly. We don't have agnostic input/output ports for data in the nodes, except for external data.
Maybe if we could pass data through external data that would provide a more general interface... I don't know.
But as it stands it's kind of difficult to share nodes because of these things. Although a collection of examples would probably be useful nonetheless.
-
At the moment you can break nodes if you drag them into a project and try to compile them; you need to actually go through the process of using the create C++ code feature, and then compile (so that HISE creates all the correct files behind the scenes), and then replace the node code with the external node...
The only thing that is needed is the node_properties.json file in the Third party node folder - if you copy that file (or manually merge the JSON objects) you can just paste in the files.

Also any nodes (such as my recent one here) that use global cables will only work in a project where the cables exist in exactly the same way,
The global cables are identified through their hashed ID (that's what that boilerplate code generator is doing) so if you use the same cable names they will work across projects.
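Just to illustrate the principle (this isn't HISE's actual implementation, only the general idea): a stable string hash means the same cable name always resolves to the same ID, regardless of which project the node is compiled in. A minimal sketch using a hypothetical FNV-1a helper:

```cpp
#include <cstdint>
#include <cstdio>
#include <string_view>

// Hypothetical stand-in for HISE's cable ID hashing: a stable string
// hash (FNV-1a here) maps the same cable name to the same numeric ID
// in every project, which is why matching names work across projects.
constexpr uint32_t hashCableId(std::string_view name)
{
    uint32_t h = 2166136261u;
    for (char c : name)
    {
        h ^= static_cast<uint8_t>(c);
        h *= 16777619u;
    }
    return h;
}

int main()
{
    // Same name, same hash - regardless of the project it lives in.
    std::printf("%08x\n", hashCableId("lfo_to_filter"));
}
```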
-
Ah, so I was being presumptuous!
Thanks for clearing this up. I guess the only manual things would be to make sure the JSON is up to date, and with global cables, making sure that the 'cable_manager_t' that we subclass the node from is aligned with the user's project.
-
@griffinboy I am waiting for all your tutorials.
-
@griffinboy said in What is the process for writing my own module (not scriptnode):
no that's next. I'm either going to use the Vital or Serum method. Serum's is simpler so I might go with that. Precomputed mip maps using FIR filters. Easier to program, but it will use a lot of memory... Maybe I'll end up switching later down the line to a realtime FFT based mip map method like Vital!
Please pardon my dumbness, but I don't really understand mip maps of FIRs, especially in this context.
To me, it is useful when the waveform is "as is" and not user modifiable, so it is stored with a limited bandwidth upfront.
But if you draw the WFs in realtime, how would you benefit from mip maps of FIRs?
In this case, wouldn't dropping a single FIR (or any anti-aliasing filter) at 20kHz (or below the current Nyquist at SR anyway...) be simpler and still effective? Got a nice lecture on the topic?
-
To start off, putting a single filter at 20k (or Nyquist) on the drawn waveform will not solve the aliasing, because the signal has already been sampled, and therefore the aliasing has already happened. If we oversample the signal by a large amount and then do what you said, yes, this method will work. However, now you have a sampler + filter running at 8x oversampling, using a ton of CPU. And this can't be precomputed (filtered once only), because if the user plays a higher note, harmonics will pass Nyquist again and cause aliasing.
Therefore one solution is to precompute anti-aliased versions of the sample at different pitches, removing high-frequency content that would alias for specific note ranges. This is called mip mapping. We are essentially decimating the signal (removing high-frequency detail) for pitches played higher up.
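A minimal sketch of the selection step, assuming a hypothetical mip map where each level has been pre-filtered to half the bandwidth of the previous one (the filtering itself happens offline):

```cpp
#include <array>
#include <vector>

// Hypothetical wavetable mip map: level 0 is the full-bandwidth table,
// each subsequent level has had its harmonic content halved offline.
struct WavetableMips
{
    std::array<std::vector<float>, 8> levels;

    // Pick the first level whose harmonic content stays below Nyquist
    // for the note being played, so nothing can reflect downwards.
    const std::vector<float>& select(double noteHz, double sampleRate) const
    {
        const double nyquist = sampleRate * 0.5;
        const double safeHarmonics = nyquist / noteHz; // harmonics that fit below Nyquist
        double h = 1024.0;                             // assumed harmonic count of level 0
        int level = 0;
        while (level < 7 && h > safeHarmonics)
        {
            h *= 0.5; // each level halves the bandwidth
            ++level;
        }
        return levels[level];
    }
};
```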
I only mentioned FIR filters because you can also use them to reconstruct the continuous signal, using a method detailed in the paper titled 'Quest for the Perfect Resampler'.
Another popular method is to use an FFT and cancel out the high-frequency bins (Vital / Serum).
-
@griffinboy Thanks for the clarification!
So here we're talking about filtering the drawn waveform, but I was simply referring to dropping a filter after the audio buffer just to cut out those high freqs.
Of course, oversampling always helps, but as you said, at a CPU cost that is far from negligible/ideal... So each time you draw a WF, is your intent to both filter the drawing AND apply a FIR to the buffers?
As for the FFT bins, this is something I was thinking about recently.
I understand that the latency introduced by an FFT, cleaning the bins, and an inverse FFT isn't something we want for a sampler/realtime instrument.
What is your thought on this for an FX, since we can just report whatever latency we have to the host?
There are different ways to avoid/reduce aliasing, such as the usual OS, or more complicated anti-derivative calculations. But since I've never encountered (yet?) such a method implementing a "simple" FFT bin reduction, there's probably a reason I haven't thought of yet...
-
Right sure.
Yes, you could drop a filter on the audio, but you misunderstand how aliasing works. Aliasing reflects downwards, creating low harmonics; it doesn't just create high harmonics. So even lowpass filtering the signal won't remove aliasing. You NEED to oversample if you want to filter out aliasing. It's not a case of it 'helping': oversampling raises the Nyquist so that when you apply the filter, there are no alias harmonics already present, having reflected downwards.
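To make the reflection concrete, here's a small (hypothetical) helper that folds any frequency back into the audible band the way sampling does, e.g. at 44.1kHz a 30kHz harmonic lands at 14.1kHz, right in the middle of the spectrum where no lowpass can touch it:

```cpp
#include <cmath>
#include <cstdio>

// Fold a frequency into [0, Nyquist] the way sampling aliases it:
// anything beyond Nyquist reflects back down into the audible band.
double aliasedFrequency(double freq, double sampleRate)
{
    const double nyquist = sampleRate * 0.5;
    double f = std::fmod(freq, sampleRate); // reduce into one spectral period
    return (f > nyquist) ? sampleRate - f : f;
}

int main()
{
    // A 30kHz harmonic at fs = 44.1kHz reflects down to 14.1kHz.
    std::printf("%f\n", aliasedFrequency(30000.0, 44100.0)); // prints 14100
}
```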
Also, filtering the drawn waveform is the same as filtering the buffer. The waveform is the buffer... Samples along the x axis, y value is the value of each sample.
-
So Vital works like this:
Precompute the FFT, store the waveform in the frequency domain.
When the user presses a note, clear the harmonics that would alias, convert back to the time domain (inverse FFT) and play the waveform. You only need to calculate the inverse FFT once every time the pitch changes, or the user plays a new note. You could also use a threshold similar to mipmapping, say, ignore pitch bends until they go over a certain range, so you don't do the inverse FFT constantly when stuff like vibrato is happening.

For antiderivative techniques it becomes even more complicated: for pitch-varying signals you need to crossfade between mipmaps to avoid clicks.
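A rough sketch of the bin-clearing step, assuming the single cycle is already stored as complex FFT bins. (This is my reading of the approach, not Vital's actual code; the naive inverse DFT is only for illustration, a real implementation would use an FFT library.)

```cpp
#include <algorithm>
#include <complex>
#include <vector>

// Zero every bin whose harmonic would land above Nyquist at this pitch,
// then render the single cycle back to the time domain.
std::vector<float> renderBandlimited(const std::vector<std::complex<double>>& spectrum,
                                     double noteHz, double sampleRate)
{
    const size_t n = spectrum.size();
    const double pi = 3.14159265358979323846;

    // Highest harmonic of noteHz that still sits below Nyquist.
    const auto maxHarmonic = static_cast<size_t>((sampleRate * 0.5) / noteHz);

    std::vector<std::complex<double>> kept(spectrum);
    for (size_t k = 0; k < n; ++k)
    {
        const size_t harmonic = std::min(k, n - k); // conjugate pairs share a harmonic
        if (harmonic > maxHarmonic)
            kept[k] = 0.0;
    }

    // Naive inverse DFT purely for illustration; use an FFT in practice.
    std::vector<float> cycle(n, 0.0f);
    for (size_t i = 0; i < n; ++i)
    {
        std::complex<double> sum = 0.0;
        for (size_t k = 0; k < n; ++k)
            sum += kept[k] * std::polar(1.0, 2.0 * pi * double(k) * double(i) / double(n));
        cycle[i] = static_cast<float>(sum.real() / double(n));
    }
    return cycle;
}
```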
-
@griffinboy Thanks for all that!
I do understand how aliasing works, Nyquist, reflected harmonics... But I didn't understand how to prevent it in this particular context until you shone your knowledge on me!
I feel a few percent less of an idiot now
-
By the way, while we are on the topic of Wavetable synthesis, what is your process for scanning through a Wavetable?
Are you doing anything to mitigate large jumps / click discontinuities when scanning through different frames? Do you generate any in-between frames using interpolation, or is it simply a case of 'snapping' to the next frame and letting the realtime interpolator interpolate between the previous and current sample (belonging to the previous and current frame)?
I'm not in the loop when it comes to the 'common' approach. I'm going to do my own analysis but I thought I'd ask!
-
@griffinboy said in What is the process for writing my own module (not scriptnode):
Right sure.
Yes, you could drop a filter on the audio, but you misunderstand how aliasing works. Aliasing reflects downwards, creating low harmonics; it doesn't just create high harmonics. So even lowpass filtering the signal won't remove aliasing. You NEED to oversample if you want to filter out aliasing. It's not a case of it 'helping': oversampling raises the Nyquist so that when you apply the filter, there are no alias harmonics already present, having reflected downwards.
Also, filtering the drawn waveform is the same as filtering the buffer. The waveform is the buffer... Samples along the x axis, y value is the value of each sample.
@griffinboy out of interest... what about avoiding aliasing when downsampling?? I had assumed that a solid biquad lowpass would cover this, but maybe not??
-
At the moment I'm using custom FIR filters and that seems to work. Again, following the design presented in 'Quest for the Perfect Resampler'.
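Not the actual filter design from the paper, just a minimal sketch of the general approach: lowpass with a windowed-sinc FIR below the new Nyquist, then keep every Nth sample.

```cpp
#include <cmath>
#include <vector>

// Windowed-sinc lowpass FIR followed by decimation: the signal has to be
// filtered below the *new* Nyquist before samples are thrown away,
// otherwise the discarded band reflects down as aliasing.
std::vector<float> downsample(const std::vector<float>& in, int factor)
{
    const int taps = 64;                // filter length (quality vs. CPU tradeoff)
    const double cutoff = 0.5 / factor; // new Nyquist, normalized to the input rate
    const double pi = 3.14159265358979323846;

    // Build the windowed-sinc kernel (Hann window).
    std::vector<double> h(taps);
    for (int i = 0; i < taps; ++i)
    {
        const double x = i - (taps - 1) / 2.0;
        const double sinc = (x == 0.0) ? 2.0 * cutoff
                                       : std::sin(2.0 * pi * cutoff * x) / (pi * x);
        const double hann = 0.5 - 0.5 * std::cos(2.0 * pi * i / (taps - 1));
        h[i] = sinc * hann;
    }

    // Convolve and keep only every 'factor'-th output sample.
    std::vector<float> out;
    for (size_t n = 0; n < in.size(); n += static_cast<size_t>(factor))
    {
        double acc = 0.0;
        for (int i = 0; i < taps; ++i)
            if (n >= static_cast<size_t>(i))
                acc += h[i] * in[n - static_cast<size_t>(i)];
        out.push_back(static_cast<float>(acc));
    }
    return out;
}
```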
-
@griffinboy said in What is the process for writing my own module (not scriptnode):
Quest for the Perfect Resampler
Found it online: Quest for the Perfect Resampler. Thanks for the suggestion @griffinboy
Graham Wakefield's book Generating Sound and Organizing Time also does a good job of covering wavetable synthesis and MipMapping in Max MSP's Gen~ environment, for those who are interested in the topic and need a more "digestible" explanation.
-
@griffinboy My personal vision for this would be similar to Max MSP, which allows you to install external packages, sort of like expansion packs. This would give full credit to the authors of the external nodes (including the licensing structure) and allow building custom libraries for specific purposes. I envisage a system like this as HISE develops and grows to include more developers looking to do different things within the HISE framework.
-
@HISEnberg empty room problem, but as soon as there starts to be demand for it I'll think about a good solution.