HISE Forum

Simple ML neural network
General Questions · 134 Posts · 18 Posters · 11.0k Views
• ccbl

  Sorry to potentially hijack this thread. I wonder if it would be worth implementing https://github.com/sdatkinson/NeuralAmpModelerCore

  NAM has been gathering a lot of steam recently; the training process for new models is really easy to do, and IMO it produces the best-sounding models of gear to date, at least of gear without time constants. It would be a great way to produce amp sims, or even just to add a really accurate post-processing option to synths and virtual instruments. Imagine an e-piano or organ etc. with a great tube amp drive processor.

• Lindon @ccbl

  @ccbl So when I was looking at using ML to do amp sims, I looked at the implementation you point at.

  The important thing to remember when looking at this stuff is that nearly every implementation (including the one you point at) is a "snapshot" of an amp in a given state (controls set to a given position). Yes, they tend to do the non-linearities better than, say, an IR would, but what they don't do is allow you to meaningfully change settings on the amp sim (pre-amp, treble boost, mid, bass, etc.) and get back the non-linearities associated with that combination of control settings.

  What they do is post-process (or pre-process) the signal with EQ etc. Now this is fine, but... what you get are the non-linearities of a given snapshot with some post-processing on it. This can sound acceptable, but it's not actually correct. Those Neural DSP guys have some deep, gnarly and secret approaches/algos for doing the sim correctly... but if you uncover what those approaches are, feel free to post them here 😁

  HISE Development for hire.
  www.channelrobot.com

• Christoph Hart @Lindon

  @Lindon The audio signal does not have to be the only input into the network - you can train it with additional parameters as input, and if the training is successful, it will mimic the parameters in the one-dimensional space that you gave it.
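A toy sketch of that idea (not RTNeural or NAM code; the weights and names below are made up): the conditioning parameter is simply appended to the network's input vector, so the same audio sample produces a different nonlinear output at different knob positions.

```python
import math

# Toy illustration: a single "neuron" whose input vector is
# [audio_sample, drive_param]. The weights here are invented; in a real
# setup they would come from training the network with the knob position
# recorded alongside the audio.
W = [2.5, 1.8]   # hypothetical learned weights for [sample, drive]
B = -0.1         # hypothetical learned bias

def process_sample(x, drive):
    # The conditioning parameter is just another network input.
    z = W[0] * x + W[1] * drive + B
    return math.tanh(z)

# Same audio sample, two knob positions -> two different outputs.
low  = process_sample(0.3, 0.1)
high = process_sample(0.3, 0.9)
```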

• ccbl @Lindon

  @Lindon I'm well aware of how NAM works. It is possible to make a parameterised model, but the thing is, with the right DSP surrounding it you don't really need to, and you drastically cut down the number of input/output pairs you need to make. For instance, you can split the pre-amp and power amp and just do digital EQ in between, given that the EQ tends to come after the non-linear gain and the EQ itself behaves linearly. The EQ output then feeds into the power amp model.

  A lot of the time, for what you want to do, a single snapshot is actually fine; just varying the input gain alters the amount of saturation, say in a tube mic pre or something like that. So given that NAM is the best out there right now, I think it would make a really useful module in HISE.
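The chain described above can be sketched like this (illustrative Python only: tanh stages stand in for the neural snapshots and a one-pole lowpass stands in for the linear tone control; all coefficients are made up):

```python
import math

def preamp(x, gain=4.0):
    return math.tanh(gain * x)      # nonlinear snapshot #1

def poweramp(x, gain=1.5):
    return math.tanh(gain * x)      # nonlinear snapshot #2

class OnePoleLowpass:
    """Linear tone control placed between the two snapshots."""
    def __init__(self, a=0.4):
        self.a = a                  # smoothing coefficient (0..1)
        self.state = 0.0
    def process(self, x):
        self.state += self.a * (x - self.state)
        return self.state

def process_block(samples, tone):
    # snapshot -> linear EQ -> snapshot, as described above
    eq = OnePoleLowpass(tone)
    return [poweramp(eq.process(preamp(s))) for s in samples]

out = process_block([0.0, 0.5, -0.5, 0.25], tone=0.4)
```

Because the tone stage is linear, only the two snapshots need to be captured; the EQ between them can be varied freely without retraining.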

• Christoph Hart @ccbl

  @ccbl said in Simple ML neural network:

    A lot of the time for what you want to do, a single snapshot is actually fine, just varying the input gain alters the amount of saturation, say in a tube mic pre or something like that. So given NAM is the best out there right now it would make it a really useful module I think in HISE.

  What does that offer that cannot be achieved with the current neural network inference framework in HISE (RTNeural)?

• ccbl @Christoph Hart

  @Christoph-Hart A few things, I guess. The number of NAM captures currently dwarfs all the others supported by RTNeural (I have used GuitarML in the past, for instance), and it's only growing (see https://tonehunt.org/ for example). Not just thousands of amp snapshots; people are really getting into studio gear captures too. There's a huge community for support in generating good captures, plus technical training support. And of course it is probably the best-sounding in terms of accuracy right now.

  I'm interested enough in using a neural net that I'm willing to use RTNeural; it's still a great system. NAM is becoming a de facto standard in a lot of NN capture spaces currently, though, so for the future it seems like a good addition to the code base. And on a personal level, I've already created over 1000 NAM models.

  Maybe once I've learned more of the basics, if someone is willing to help me with it, I would appreciate it.

• Christoph Hart @ccbl

  @ccbl And can't you just convert the models to work in RTNeural? In the end it's just running maths and I'm not super interested in adding the same thing but in 5 variations.

• Lindon @Christoph Hart

  @Christoph-Hart but...

  That would require the developer(s) to convert each NAM model themselves, whereas if HISE loaded/played NAM models natively, then the end user could load any model they wanted... so thousands of models, instantly available.


• aaronventure @ccbl

  @ccbl This looks great; the Colab notebook for training is super simple as well (you reamp the test signal and upload it, then wait).

  Someone has already implemented it in a pedal, and it's selling for €500.

• Christoph Hart @Lindon

  @Lindon It would rather be a converter built into HISE that takes the model files and creates the parameters for an RTNeural network.
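As a hedged sketch of that converter idea: a .nam model file is a JSON document, so the first step is just parsing it and mapping its layer configuration and flat weight list onto equivalent layers. The field names below ("architecture", "config", "weights") follow the NAM model format as I understand it; treat them and the `inspect_nam` helper as assumptions, not NAM or RTNeural API.

```python
import json

# A fabricated stand-in for the contents of a .nam file; real files
# carry a full layer configuration and thousands of weights.
example_nam = json.dumps({
    "version": "0.5.2",
    "architecture": "WaveNet",
    "config": {"layers": [{"input_size": 1, "channels": 16}]},
    "weights": [0.01, -0.2, 0.5]   # flat list, order defined by NAM
})

def inspect_nam(text):
    # First step of any converter: read the architecture tag and the
    # weight count, then (not shown) map them onto the target layers.
    model = json.loads(text)
    return {
        "architecture": model["architecture"],
        "num_weights": len(model["weights"]),
    }

info = inspect_nam(example_nam)
```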

• aaronventure @Christoph Hart

  @Christoph-Hart Doesn't that warrant a whole user-facing workflow for implementation?

  Having a NAM loader directly lets the end user load .nam files as if they were IRs for the convolution module.

  I mean, the whole thing does feel like a ConvolutionPlus, because it can properly model nonlinearities but doesn't have parameter options.

• Christoph Hart @aaronventure

  @aaronventure The API will be the smallest issue here; the question is rather whether I should add that entire framework or modify RTNeural to load the NAM files. From a quick peek at the source code it's mostly there, and there are just a few layer types missing.

• aaronventure @Christoph Hart

  @Christoph-Hart Is the performance the same?

• Orvillain @aaronventure

  I'd love to see the Neural Amp Modeler implemented in HISE.

  What are the chances of that?

• Christoph Hart @aaronventure

  @aaronventure No idea, but I would guess that the performance is pretty much the same, as they do the same calculations.

• ccbl

  I think the only issue with conversion will be that most NAM models are WaveNet, which is not currently supported by RTNeural.

  NAM has been tuned specifically around this architecture, which is one of the reasons it's currently considered the most realistic.

  Whether or not this poses a barrier to conversion, and whether in conversion you will lose some of the realism that has been achieved, I'm not sure. I think, as stated, real-time performance will likely be the same. NAM currently operates with zero latency, which I think RTNeural also does from memory; it's really about CPU utilisation.
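For reference, WaveNet's core building block is a dilated causal convolution: each output sample depends only on the current and past input samples, which is why a WaveNet model adds no lookahead latency. A minimal illustrative version (made-up kernel values, not NAM's implementation):

```python
def dilated_causal_conv(x, kernel, dilation):
    # Causal: output at time t only reads x[t], x[t-d], x[t-2d], ...
    # so there is no lookahead and no added latency.
    out = []
    for t in range(len(x)):
        acc = 0.0
        for k, w in enumerate(kernel):
            idx = t - k * dilation   # strictly current-or-past samples
            if idx >= 0:
                acc += w * x[idx]
        out.append(acc)
    return out

# Impulse response of a 2-tap kernel with dilation 2: the second tap
# lands 2 samples later, as expected for a dilated causal filter.
signal = [1.0, 0.0, 0.0, 0.0, 0.0]
y = dilated_causal_conv(signal, kernel=[0.5, 0.25], dilation=2)
# y == [0.5, 0.0, 0.25, 0.0, 0.0]
```

Stacking such layers with growing dilations (1, 2, 4, 8, ...) is what gives WaveNet its long receptive field at modest per-sample cost.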

• Allen

  I think maybe we just need an effective NAM-to-RTNeural converter.

  Seems like RTNeural already has everything to handle WaveNet (Conv1D, ReLU, softmax). I remember Jatin talked on Discord about his plan to make examples for several different architectures, including WaveNet.

  Also check this thread:
  https://forum.mod.audio/t/new-neural-lv2-plugin-from-aida-dsp-based-extensively-on-existing-neuralpi-reduced-to-the-bone/8047/142?page=8
  (sorry for posting a link to another forum...)

• ccbl @Christoph Hart

  @Christoph-Hart said in Simple ML neural network:

    @ccbl And can't you just convert the models to work in RTNeural? In the end it's just running maths and I'm not super interested in adding the same thing but in 5 variations.

  I understand why you feel that way. There's no point chasing your tail every time a new NN comes on the scene. What I would say, though, is that since NAM was released, I've seen at least four different companies bring out their own NN capture tech, and not a single one of them has taken off; all combined, they have fewer captures publicly available than one comprehensive NAM pack on ToneHunt.

  Multiple commercial companies have incorporated NAM, including Melda, Amplifire, Two Notes, Audio Assault, and Solemn Tones, amongst others. AnalogueX now uses NAM to capture preamps and compressors. Past of Future also does this.

  Essentially, NAM has become the industry-standard format for analogue gear capture using neural networks at this point. So personally I think it would be worth implementing the NAM core tech, as there's a far higher chance people are going to want to use a NAM model than any of the other tech.

  Beggars can't be choosers, though. I don't mean this to come across as a demand so much as trying to present a compelling case. I don't have the skills to implement this myself, and I understand there's only so much time and willpower a person has to dedicate to a project like this.

  So I'm highly grateful for any way to incorporate NAM into signal processing in HISE, whether that involves conversion or native processing.

• Orvillain @ccbl

  @ccbl said in Simple ML neural network:

    Essentially NAM has become the industry standard format for Analogue gear capture using neural networks at this point. [...]

  I have to agree. I have spent a lot of time comparing and contrasting the various amp capture platforms over the years, and NAM is clearly the most accurate and most realistic-sounding.

  NAM uses the Eigen library for its core DSP playback module.

  @Christoph-Hart I would imagine you'd need to bring Eigen into HISE if you wanted to support NAM playback?

• ccbl

  @Christoph-Hart Regarding the current RTNeural implementation:

  GuitarML's Automated-GuitarAmpModelling LSTM trainer allows you to create parameterised models (https://github.com/GuitarML/Automated-GuitarAmpModelling).

  With the NeuralNetwork module, are you able to address these input parameters? I'm interested in a single input parameter and want to connect a knob to address this parameter during model inference.
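A hypothetical sketch of that wiring (names invented; this is not the HISE scripting API): smooth the knob value to audio rate so the conditioning input doesn't zipper, then append it to each input frame the network sees.

```python
class SmoothedKnob:
    """One-pole smoother taking a UI knob to audio rate."""
    def __init__(self, coeff=0.01):
        self.coeff = coeff     # smoothing coefficient per sample
        self.value = 0.0
        self.target = 0.0
    def set(self, v):
        self.target = v        # called from the UI thread / callback
    def next(self):
        # Move a small step toward the target each audio sample.
        self.value += self.coeff * (self.target - self.value)
        return self.value

def build_frames(samples, knob):
    # Each frame the network sees is [audio_sample, knob_value];
    # the second element is the parameterised model's extra input.
    return [[s, knob.next()] for s in samples]

knob = SmoothedKnob()
knob.set(1.0)                  # user turns the knob fully up
frames = build_frames([0.1, 0.2, 0.3], knob)
```

The knob value ramps toward its target across frames rather than jumping, which is the usual way to feed a control-rate parameter into per-sample inference.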
