    What is the process for writing my own module (not scriptnode)

    C++ Development · 52 Posts · 11 Posters · 1.7k Views
    • Christoph Hart @griffinboy

      At the moment you can break nodes if you drag them into a project and try to compile them. You need to actually go through the process of using the Create C++ node feature and then compile (so that HISE creates all the correct files behind the scenes), and then you replace the node code with the external node...

      The only thing that is needed is the node_properties.json file in the ThirdParty node folder - if you copy that file (or manually merge the JSON objects) you can just paste in the files.

      Also, any node (such as my recent one here) that uses global cables will only work in a project where the cables exist in exactly the same way.

      The global cables are identified through their hashed ID (that's what that boilerplate code generator is doing) so if you use the same cable names they will work across projects.
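
      A minimal sketch of that name-to-ID idea, purely as an illustration: the compile-time FNV-1a hash and the cable_id helper below are assumptions of mine, not HISE's actual hashing code, but they show why identical cable names resolve to identical IDs across projects.

      #include <cstdint>

      // Illustrative compile-time hash (FNV-1a): the same cable name always
      // yields the same ID, so a node compiled against one project can find
      // the matching cable in another project as long as the names match.
      constexpr uint64_t cable_id(const char* name)
      {
          uint64_t h = 14695981039346656037ull;
          while (*name != '\0')
          {
              h ^= static_cast<uint64_t>(*name++);
              h *= 1099511628211ull;
          }
          return h;
      }

      static_assert(cable_id("LFO_Out") == cable_id("LFO_Out"), "same name, same ID");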

      • griffinboy @Christoph Hart

        @Christoph-Hart

        Ah so I was being presumptive!
        Thanks for clearing this up.

        I guess the only manual things would be to make sure the JSON is up to date, and, with global cables, to make sure that the 'cable_manager_t' we subclass the node from is aligned with the user's project.

        • DabDab

          @griffinboy I am waiting for all your tutorials. 😊

          • ustk @griffinboy

            @griffinboy said in What is the process for writing my own module (not scriptnode):

            @Christoph-Hart

            no that's next. I'm either going to use the Vital or Serum method. Serum's is simpler so I might go with that. Precomputed mip maps using fir filters. Easier to program but it will use a lot of memory... Maybe I'll end up switching later down the line to a realtime FFT based mip map method like vital!

            Please pardon my dumbness, but I don't really understand mip maps of FIRs, especially in this context.
            To me, it is useful when the waveform is "as is" and not user modifiable, so it is stored with a limited bandwidth upfront.
            But if you draw the WFs in realtime, how would you benefit from mip maps of FIRs?
            In this case, wouldn't dropping a single FIR (or any anti-aliasing filter) at 20kHz (or below the current Nyquist at SR anyway...) be simpler and still effective?

            Got a nice lecture on the topic? ☺

            • griffinboy @ustk

              @ustk

              To start off, putting a single filter at 20k (or Nyquist) on the drawn waveform will not solve the aliasing, because the signal has already been sampled, and therefore the aliasing has already happened. If we oversample the signal by a large amount and then do what you said, yes, this method will work. However, now you have a sampler + filter running at 8x oversampling, using a ton of CPU. And this can't be precomputed (filtered once only), because if the user plays a higher note, harmonics will pass Nyquist again and cause aliasing.

              Therefore one solution is to precompute anti-aliased versions of the sample at different pitches, removing high-frequency content that would alias for specific note ranges. This is called mip mapping: we are essentially decimating the signal (removing high-frequency detail) for the higher note ranges.

              I only mentioned FIR filters because you can also use them to reconstruct the continuous signal, using a method detailed in the paper titled 'Quest for the Perfect Resampler'.

              Another popular method is to use FFT and cancel out the high frequency bins (vital / serum).
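
              To make the mipmapping idea concrete, here is a minimal sketch of the table-selection step. The WavetableMip struct and the one-level-per-note-range layout are assumptions for illustration, not griffinboy's actual code:

              #include <vector>

              // Hypothetical mip level: one pre-filtered (band-limited) copy of the cycle,
              // safe to play up to a certain fundamental before its harmonics cross Nyquist.
              struct WavetableMip
              {
                  double maxFundamentalHz = 0.0;   // highest fundamental this level is safe for
                  std::vector<float> samples;      // the pre-filtered single cycle
              };

              // Pick the first level (sorted from least to most filtered) whose band limit
              // keeps every remaining harmonic below Nyquist for the note being played.
              inline const WavetableMip& selectMip(const std::vector<WavetableMip>& mips,
                                                   double fundamentalHz)
              {
                  for (const auto& mip : mips)
                      if (fundamentalHz <= mip.maxFundamentalHz)
                          return mip;

                  return mips.back();              // highest notes: most heavily filtered copy
              }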

              • ustk @griffinboy

                @griffinboy Thanks for the clarification!

                So here we're talking about filtering the drawn waveform, but I was simply referring to dropping a filter after the audio buffer just to cut out those high freqs.
                Of course, oversampling always helps, but as you said at a CPU cost that is far from being negligible/ideal...

                So each time you draw a WF, is your intent to both filter the drawing AND apply a FIR to the buffers?

                As for the FFT bins, this is something I was thinking about recently.
                I understand that the latency introduced by an FFT, cleaning the bins, and an inverse FFT isn't something we want for a sampler/realtime instrument.
                What is your thought on this for an FX, since we can just report whatever latency we have to the host?
                There are different ways to avoid/reduce aliasing, such as the usual oversampling, or more complicated antiderivative calculations. But since I've never encountered (yet?) such a method implementing a "simple" FFT bin reduction, there is probably a reason I haven't thought of yet...

                • griffinboy @ustk

                  @ustk

                  Right sure.
                  Yes, you could drop a filter on the audio, but you misunderstand how aliasing works. Aliasing reflects downwards, creating low harmonics; it doesn't just create high harmonics. So even lowpass filtering the signal won't remove aliasing.

                  You NEED to oversample if you want to filter out aliasing. It's not a case of it 'helping'; it's a way of raising the Nyquist frequency so that when you apply the filter, there are no alias harmonics already present, having reflected downwards.

                  Also, filtering the drawn waveform is the same as filtering the buffer. The waveform is the buffer: samples along the x axis, and the y value is the value of each sample.
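
                  As a side note, the 'reflects downwards' part can be put in one small formula; the sketch below is my own illustration (not from this thread) of where an out-of-band harmonic lands after sampling:

                  #include <cmath>
                  #include <cstdio>

                  // The sampled spectrum is periodic in fs and mirrored around Nyquist,
                  // so any component above fs/2 folds back down into the audible band.
                  double aliasedFrequency(double f, double fs)
                  {
                      double wrapped = std::fmod(f, fs);   // periodic in the sample rate
                      if (wrapped > fs * 0.5)
                          wrapped = fs - wrapped;          // mirror around Nyquist
                      return wrapped;
                  }

                  int main()
                  {
                      // A 30 kHz harmonic sampled at 44.1 kHz shows up at 14.1 kHz...
                      std::printf("%f\n", aliasedFrequency(30000.0, 44100.0));
                      // ...and a 43 kHz harmonic folds down to 1.1 kHz, which is why a
                      // lowpass applied after sampling can't remove it.
                      std::printf("%f\n", aliasedFrequency(43000.0, 44100.0));
                  }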

                  • griffinboy @ustk

                    @ustk

                    So vital works like this:

                    Precompute the FFT, store the waveform in the frequency domain.
                    When the user presses a note, clear the harmonics that would alias, convert back to the time domain (inverse FFT), and play the waveform. You only need to recalculate the inverse FFT when the pitch changes or the user plays a new note. You could also use a threshold, similar to mipmapping: say, ignore pitch bends until they go over a certain range, so that you don't run the inverse FFT constantly when something like vibrato is happening.

                    For antiderivative techniques it becomes even more complicated. For pitch-varying signals you need to crossfade between mipmaps to avoid clicks.
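
                    Here is a minimal sketch of that "clear the bins, inverse FFT once per pitch change" flow using juce::dsp::FFT; it is my own illustration, not Vital's or HISE's code. It assumes the single cycle's spectrum was stored beforehand with performRealOnlyForwardTransform (interleaved real/imaginary), so the exact buffer layout may need adapting:

                    #include <juce_dsp/juce_dsp.h>
                    #include <cmath>
                    #include <vector>

                    struct BandlimitedCycle
                    {
                        static constexpr int order = 11;          // 2048-sample single cycle
                        static constexpr int tableSize = 1 << order;

                        juce::dsp::FFT fft { order };
                        std::vector<float> spectrum;              // 2 * tableSize floats, re/im interleaved
                        std::vector<float> table;                 // band-limited cycle the voice reads

                        // Called when the pitch changes enough to matter (new note, or a pitch
                        // bend crossing the chosen threshold) - not per sample.
                        void renderForPitch(double fundamentalHz, double sampleRate)
                        {
                            // Highest harmonic that still sits below Nyquist for this note.
                            const int maxHarmonic = (int) std::floor((sampleRate * 0.5) / fundamentalHz);

                            std::vector<float> work(spectrum);    // keep the stored spectrum intact
                            for (int bin = maxHarmonic + 1; bin < tableSize / 2; ++bin)
                            {
                                const int mirror = tableSize - bin;                // conjugate partner
                                work[2 * bin]    = work[2 * bin + 1]    = 0.0f;    // zero the bin...
                                work[2 * mirror] = work[2 * mirror + 1] = 0.0f;    // ...and its mirror
                            }

                            fft.performRealOnlyInverseTransform(work.data());
                            table.assign(work.begin(), work.begin() + tableSize);
                        }
                    };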

                    • ustk @griffinboy

                      @griffinboy Thanks for all that!
                      I do understand how aliasing works, Nyquist, reflected harmonics... But I didn't understand how to prevent it in this particular context until you shone your knowledge on me! ☀ 😎
                      I feel a few percent less of an idiot now ☺

                      • griffinboy @Christoph Hart

                        @Christoph-Hart

                        By the way, while we are on the topic of Wavetable synthesis, what is your process for scanning through a Wavetable?

                        Are you doing anything to mitigate large jumps / click discontinuities when scanning through different frames? Do you generate any in-between frames using interpolation, or is it simply a case of 'snapping' to the next frame and letting the realtime interpolator interpolate between the previous and current sample (belonging to the previous and current frame)?

                        I'm not in the loop when it comes to the 'common' approach. I'm going to do my own analysis but I thought I'd ask!
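
                        For what it's worth, here is a small sketch of the 'in-between frames' alternative the question describes: bilinear reading that interpolates along the cycle and crossfades between adjacent frames. It's only an illustration of the idea, not how HISE's wavetable synth actually scans:

                        #include <algorithm>
                        #include <vector>

                        // Hypothetical wavetable: 'frames' single cycles of 'frameSize' samples,
                        // stored back to back in 'data'.
                        struct Wavetable
                        {
                            int frames = 0;
                            int frameSize = 0;
                            std::vector<float> data;

                            // phase in [0,1) along the cycle, position in [0,1] across the frames.
                            float read(double phase, double position) const
                            {
                                const double framePos = position * (frames - 1);
                                const int f0 = (int) framePos;
                                const int f1 = std::min(f0 + 1, frames - 1);
                                const float frameFrac = (float) (framePos - f0);

                                const double samplePos = phase * frameSize;
                                const int s0 = (int) samplePos;
                                const int s1 = (s0 + 1) % frameSize;
                                const float sampleFrac = (float) (samplePos - s0);

                                auto lerp = [] (float a, float b, float t) { return a + t * (b - a); };

                                // Interpolate along the cycle in both neighbouring frames...
                                const float a = lerp(data[f0 * frameSize + s0], data[f0 * frameSize + s1], sampleFrac);
                                const float b = lerp(data[f1 * frameSize + s0], data[f1 * frameSize + s1], sampleFrac);

                                // ...then crossfade between the frames to avoid hard jumps.
                                return lerp(a, b, frameFrac);
                            }
                        };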

                        • Orvillain @griffinboy

                          @griffinboy said in What is the process for writing my own module (not scriptnode):

                          You NEED to oversample if you want to filter out aliasing. [...] Also, filtering the drawn waveform is the same as filtering the buffer.

                          @griffinboy out of interest... what about avoiding aliasing when downsampling?? I had assumed that a solid biquad lowpass would cover this, but maybe not??

                          • griffinboy @Orvillain

                            @Orvillain

                            At the moment I'm using custom FIR filters and that seems to work. Again, I'm following the design presented in 'Quest for the Perfect Resampler'.
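
                            For the downsampling question above, the usual recipe is to lowpass below the new Nyquist first and only then discard samples. A minimal sketch of that idea (a generic windowed-sinc FIR plus decimate-by-2, not the specific filters from the paper):

                            #include <cmath>
                            #include <vector>

                            // Windowed-sinc lowpass; 'cutoff' is a fraction of the *input* sample
                            // rate (e.g. 0.23, just under the new Nyquist for 2x decimation).
                            std::vector<float> makeLowpassFIR(int numTaps, double cutoff)
                            {
                                const double pi = 3.14159265358979323846;
                                std::vector<float> h(numTaps);
                                const double mid = (numTaps - 1) / 2.0;
                                double sum = 0.0;

                                for (int n = 0; n < numTaps; ++n)
                                {
                                    const double x = n - mid;
                                    const double sinc = (x == 0.0) ? 2.0 * cutoff
                                                                   : std::sin(2.0 * pi * cutoff * x) / (pi * x);
                                    const double hann = 0.5 - 0.5 * std::cos(2.0 * pi * n / (numTaps - 1));
                                    h[n] = (float) (sinc * hann);
                                    sum += h[n];
                                }

                                for (auto& tap : h)
                                    tap /= (float) sum;              // normalise to unity gain at DC
                                return h;
                            }

                            // Filter, then keep every second sample. Filtering first removes the
                            // content that would otherwise fold down when the rate is halved, which
                            // is why a single biquad is usually not steep enough on its own.
                            std::vector<float> decimateBy2(const std::vector<float>& in, const std::vector<float>& h)
                            {
                                std::vector<float> out;
                                for (size_t n = 0; n < in.size(); n += 2)
                                {
                                    double acc = 0.0;
                                    for (size_t k = 0; k < h.size() && k <= n; ++k)
                                        acc += h[k] * in[n - k];
                                    out.push_back((float) acc);
                                }
                                return out;
                            }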

                            • HISEnberg @griffinboy

                              @griffinboy said in What is the process for writing my own module (not scriptnode):

                              Quest for the Perfect Resampler

                              Found it online: Quest for the Perfect Resampler. Thanks for the suggestion @griffinboy

                              Graham Wakefield's book Generating Sound and Organizing Time also does a good job of covering wavetable synthesis and MipMapping in Max MSP's Gen~ environment, for those who are interested in the topic and need a more "digestible" explanation.

                              • HISEnberg @griffinboy

                                @griffinboy My personal vision for this would be similar to Max MSP which allows you to install external packages, sort of like expansion packs. This would give full credits to the author of the external nodes (including the licensing structure) and allow building custom libraries for specific purposes. I envisage a system like this as HISE develops and grows to include more developers looking to do different things within the HISE framework.

                                • Christoph Hart @HISEnberg

                                  @HISEnberg It's the empty room problem, but as soon as there starts to be demand for it I'll think about a good solution.

                                  • Orvillain @Christoph Hart

                                    @Christoph-Hart

                                    Hey Christoph, when trying to compile your script here, I get:

                                    > Create files
                                    > Sorting include dependencies
                                    > Copying third party files
                                    > Compiling dll plugin
                                    Re-saving file: C:\Users\avern\Desktop\blah\DspNetworks\Binaries\AutogeneratedProject.jucer
                                    Finished saving: Visual Studio 2022
                                    Finished saving: Xcode (macOS)
                                    Finished saving: Linux Makefile
                                    Compiling 64bit  blah ...
                                    MSBuild version 17.10.4+10fbfbf2e for .NET Framework
                                    
                                      Main.cpp
                                      include_hi_dsp_library_01.cpp
                                      include_hi_dsp_library_02.cpp
                                      include_hi_tools_01.cpp
                                      include_hi_tools_02.cpp
                                      include_hi_tools_03.cpp
                                      include_juce_audio_formats.cpp
                                      include_juce_core.cpp
                                      include_juce_data_structures.cpp
                                      include_juce_dsp.cpp
                                      include_juce_events.cpp
                                      include_juce_graphics.cpp
                                    !C:\Users\avern\Desktop\blah\DspNetworks\ThirdParty\data_node.h(75,4): error C2593: 'operator []' is ambiguous [C:\Users\avern\Desktop\blah\DspNetworks\Binaries\Builds\VisualStudio2022\blah_DynamicLibrary.vcxproj]
                                    !C:\Users\avern\Desktop\blah\DspNetworks\ThirdParty\data_node.h(76,4): error C2593: 'operator []' is ambiguous [C:\Users\avern\Desktop\blah\DspNetworks\Binaries\Builds\VisualStudio2022\blah_DynamicLibrary.vcxproj]
                                    !C:\Users\avern\Desktop\blah\DspNetworks\ThirdParty\data_node.h(77,4): error C2593: 'operator []' is ambiguous [C:\Users\avern\Desktop\blah\DspNetworks\Binaries\Builds\VisualStudio2022\blah_DynamicLibrary.vcxproj]
                                    !C:\Users\avern\Desktop\blah\DspNetworks\ThirdParty\data_node.h(78,4): error C2593: 'operator []' is ambiguous [C:\Users\avern\Desktop\blah\DspNetworks\Binaries\Builds\VisualStudio2022\blah_DynamicLibrary.vcxproj]
                                    !C:\Users\avern\Desktop\blah\DspNetworks\ThirdParty\data_node.h(81,4): error C2593: 'operator []' is ambiguous [C:\Users\avern\Desktop\blah\DspNetworks\Binaries\Builds\VisualStudio2022\blah_DynamicLibrary.vcxproj]
                                    	C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.40.33807\include\bit(11): warning STL4038: The contents of <bit> are available only with C++20 or later. [C:\Users\avern\Desktop\blah\DspNetworks\Binaries\Builds\VisualStudio2022\blah_DynamicLibrary.vcxproj]
                                    
                                    

                                    The lines causing this error are:

                                    hise::JSONObject obj;
                                    
                                    // write the values into the JSON object
                                    obj["magnitude"] = magnitude;
                                    obj["RMS"] = rms;
                                    obj["length"] = length;
                                    obj["numChannels"] = channels;
                                    
                                    // just attach the current knob position so you know it's working.
                                    obj["knobValue"] = value;
                                    

                                    I have just updated to the latest develop. This SHA:
                                    03c420c1d12f7a4457a8b497a6b78bc49d250e85

                                    Explicitly casting like this did result in the compile working:

                                    // write the values into the JSON object
                                    obj[String("magnitude")] = magnitude;
                                    obj[String("RMS")] = rms;
                                    obj[String("length")] = length;
                                    obj[String("numChannels")] = channels;

                                    // just attach the current knob position so you know it's working.
                                    obj[String("knobValue")] = value;
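
                                    For reference, error C2593 typically means a plain string literal (a const char*) converts equally well to more than one key type in the operator[] overload set, so the compiler can't pick one. hise::JSONObject's real overloads may differ, but this standalone toy reproduces the pattern and shows why the explicit String(...) cast settles it:

                                    #include <juce_core/juce_core.h>
                                    #include <string>

                                    // Not HISE's class - just a toy overload set that triggers C2593.
                                    struct ToyJSON
                                    {
                                        int& operator[](const juce::String&) { static int v; return v; }
                                        int& operator[](const std::string&)  { static int v; return v; }
                                    };

                                    void demo()
                                    {
                                        ToyJSON obj;
                                        // obj["magnitude"] = 1;             // ambiguous: the literal converts to both
                                        //                                   // key types via user-defined conversions
                                        obj[juce::String("magnitude")] = 1;  // explicit cast picks one overload
                                    }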
                                    
                                    • griffinboy @Orvillain

                                      @Orvillain

                                      Oh yeah yeah, I found faults with the provided script too. I had to do some casting. I think that's correct.
