HISE Forum

    Simple ML neural network

    General Questions · 134 Posts · 18 Posters · 11.1k Views
    • ccbl @resonant

      @resonant that's awesome. Would love to pick their brains and see if we can get it up and running for the rest of us.

      • Dan Korneff

        Here's where I'm at with the process:
        https://gitlab.korneff.co/publicgroup/hise-neuralnetworktrainingscripts

        I have used the scripts from https://github.com/AidaDSP/Automated-GuitarAmpModelling/tree/aidadsp_devel as a starting point.

        This will allow you to create a dataset from your input/output audio files, train the model on the dataset, and then convert the model to Keras format so you can use it in RTNeural.
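        The core of the dataset step is just slicing the paired input/output audio into aligned fixed-length segments. A minimal pure-Python sketch of that idea (the real scripts handle wav I/O, normalization, and train/validation splits; the function name here is hypothetical):

        ```python
        def make_dataset(input_samples, output_samples, segment_len):
            """Slice paired input/output sample lists into aligned segments.

            A trailing remainder shorter than segment_len is dropped so every
            training example has the same shape.
            """
            assert len(input_samples) == len(output_samples), "files must be aligned"
            n_segments = len(input_samples) // segment_len
            dataset = []
            for i in range(n_segments):
                start = i * segment_len
                end = start + segment_len
                dataset.append((input_samples[start:end], output_samples[start:end]))
            return dataset

        # Example: 10 samples, segments of 4 -> 2 segments, trailing 2 samples dropped
        pairs = make_dataset(list(range(10)), [x * 2 for x in range(10)], 4)
        ```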

        @Christoph-Hart The final model is making HISE crash. I thought I was doing something wrong because some of the values for "shape" are null, but I've downloaded other files created with the source script and they are null in the same places.

        Here's one for example:
        JMP Low Input.json

        It's possible that the script is not formatting the json properly, but I don't know what a correct model looks like to compare to.
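        One way to narrow this down is to walk the exported JSON and collect the path of every null value, so files from the training script and known-good reference files can be compared key by key (a generic sketch, not tied to any particular model schema):

        ```python
        import json

        def find_nulls(node, path="$"):
            """Recursively collect the JSON paths whose value is null (Python None)."""
            nulls = []
            if node is None:
                nulls.append(path)
            elif isinstance(node, dict):
                for key, value in node.items():
                    nulls.extend(find_nulls(value, f"{path}.{key}"))
            elif isinstance(node, list):
                for i, value in enumerate(node):
                    nulls.extend(find_nulls(value, f"{path}[{i}]"))
            return nulls

        # Hypothetical fragment mimicking a model file with a null "shape" entry
        model = json.loads('{"layers": [{"shape": null}, {"shape": [8, 1]}]}')
        print(find_nulls(model))  # -> ['$.layers[0].shape']
        ```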

        Dan Korneff - Producer / Mixer / Audio Nerd

        • Christoph Hart @Dan Korneff

          @Dan-Korneff That's the JSON from the sine generator example:

          {
            "layers": "SineModel(\n  (network): Sequential(\n    (0): Linear(in_features=1, out_features=8, bias=True)\n    (1): Tanh()\n    (2): Linear(in_features=8, out_features=4, bias=True)\n    (3): Tanh()\n    (4): Linear(in_features=4, out_features=1, bias=True)\n  )\n)",
            "weights": {
              "network.0.weight": [
                [
                  1.046385407447815
                ],
                [
                  1.417808413505554
                ],
                [
                  0.9530450105667114
                ],
                [
                  1.118412375450134
                ],
                [
                  -2.003693819046021
                ],
                [
                  1.485351920127869
                ],
                [
                  -1.323277235031128
                ],
                [
                  -1.482439756393433
                ]
              ],
              "network.0.bias": [
                -0.4485535621643066,
                -1.284180760383606,
                1.995141625404358,
                -1.036547422409058,
                0.2926304638385773,
                0.4770179986953735,
                0.3244697153568268,
                0.4108103811740875
              ],
              "network.2.weight": [
                [
                  -1.791297316551208,
                  -0.3762974143028259,
                  -0.3934035897254944,
                  0.1596113294363022,
                  0.5510663390159607,
                  -1.115586280822754,
                  0.678738534450531,
                  1.327430963516235
                ],
                [
                  0.3413433432579041,
                  1.86607301235199,
                  -0.217528447508812,
                  2.568317174911499,
                  0.3797312676906586,
                  -0.1846907883882523,
                  0.04422684758901596,
                  -0.0883311927318573
                ],
                [
                  0.3113365173339844,
                  0.8516308069229126,
                  -0.6042391061782837,
                  0.9669480919837952,
                  -1.354665994644165,
                  0.1234097927808762,
                  -1.171357274055481,
                  -0.9616029858589172
                ],
                [
                  -0.5073869824409485,
                  -0.7385743856430054,
                  0.3118444979190826,
                  -0.9642266035079956,
                  1.899434208869934,
                  -0.1497718989849091,
                  1.684132099151611,
                  0.895214855670929
                ]
              ],
              "network.2.bias": [
                -0.6971003413200378,
                0.3228396475315094,
                -0.6209602355957031,
                0.1816271394491196
              ],
              "network.4.weight": [
                [
                  -0.9233435988426208,
                  1.108147859573364,
                  -0.8966623544692993,
                  0.394584596157074
                ]
              ],
              "network.4.bias": [
                0.06727132201194763
              ]
            }
          }
          

          So apparently it doesn't resolve the Python code that defines the layer composition, but stores it as a single string that is parsed. That's the output of a custom Python script I wrote and ran on a model built with TorchStudio, but if your model is exported "the standard" way, I'll make sure that it loads correctly too, as these things look like syntactic sugar to me.
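          For reference, the "layers" string above is just PyTorch's printout of the module; the actual topology is a 1→8→4→1 MLP with Tanh between the linear layers. A forward pass over a weights dict in that layout is only a few lines of plain Python (the toy 1→2→2→1 weights here are hypothetical, not the values from the file):

          ```python
          import math

          def linear(weights, bias, x):
              """y = W @ x + b, with W stored as a list of rows as in the JSON."""
              return [sum(w * v for w, v in zip(row, x)) + b
                      for row, b in zip(weights, bias)]

          def forward(weights, x):
              """Linear -> Tanh -> Linear -> Tanh -> Linear, mirroring the key names."""
              h = [math.tanh(v) for v in linear(weights["network.0.weight"],
                                               weights["network.0.bias"], x)]
              h = [math.tanh(v) for v in linear(weights["network.2.weight"],
                                               weights["network.2.bias"], h)]
              return linear(weights["network.4.weight"], weights["network.4.bias"], h)

          # Hypothetical tiny network just to exercise the code path
          toy = {
              "network.0.weight": [[1.0], [-1.0]], "network.0.bias": [0.0, 0.0],
              "network.2.weight": [[0.5, 0.5], [1.0, -1.0]], "network.2.bias": [0.0, 0.0],
              "network.4.weight": [[1.0, 1.0]], "network.4.bias": [0.1],
          }
          y = forward(toy, [0.0])  # tanh(0) = 0 everywhere, so y == [0.1]
          ```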

          • Christoph Hart @Christoph Hart

            Here's the link to the tutorial again:

            hise_tutorial/NeuralNetworkExample/Scripts/python at master · christophhart/hise_tutorial
            The Tutorial project for HISE.
            GitHub (github.com)

            But I realized your example looks more or less like the Tensorflow model in this directory. Which method are you using for loading the model?

            • Lindon @Dan Korneff

              @Dan-Korneff said in Simple ML neural network:

              @resonant Invite them to the conversation

              @resonant
              yeah, maybe - they asked me to get re-involved with them on some ML stuff as they were a bit stuck... it didn't go anywhere, so they may well still be stuck, or they found someone else to do the coding for them... your call

              HISE Development for hire.
              www.channelrobot.com

              • Dan Korneff @Christoph Hart

                @Christoph-Hart The structure does look like TensorFlow, but the script is using torch libraries to create the model:

                torch
                torch.optim
                torch.cuda
                torch.optim.Adam
                torch.optim
                

                I was using this example code to load the model:

                const var pt = Engine.createNeuralNetwork("PytorchNetwork");
                
                // Load the model layout & weights that were exported as JSON
                const var modelJSON = pythonRoot.getChildFile("model_keras.json").loadAsObject();
                
                // Load the model & weights:
                pt.loadPytorchModel(modelJSON);
                

                When I load the model into the Neural node, HISE checks out.
                spongebob-meme-1.jpg

                I've run through the MNIST dataset tutorial a couple times to get a basic idea of how TorchStudio works, but I'm not sure how to adapt the scripts to work there.


                • ccbl

                  I asked Jatin to have a quick look through the thread to see if he could see any issues, he just had this to say.

                  "Hmmm, it seems to me that the model JSON file that is being loaded into the "Neural Node" is structured as a TensorFlow-style JSON file, but it's being loaded with the HISE's loadPytorchModel() method? I don't really know what the Neural Node does internally, so idk how much I can help beyond that."

                  • Dan Korneff

                    Using loadTensorFlowModel() was indeed the solution. I'll try to make some tutorials on training and loading models this weekend.
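                    For anyone following along, the fix changes only the final call in the loading snippet from earlier in the thread (HiseScript sketch, only runnable inside HISE, assuming the same model_keras.json path):

                    ```javascript
                    const var nn = Engine.createNeuralNetwork("PytorchNetwork");

                    // The exported JSON is TensorFlow-style, so load it with the
                    // TensorFlow loader rather than loadPytorchModel():
                    const var modelJSON = pythonRoot.getChildFile("model_keras.json").loadAsObject();
                    nn.loadTensorFlowModel(modelJSON);
                    ```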


                    • Lindon @Dan Korneff

                      @Dan-Korneff that would be very cool. Do these models include the ability to define parameters (like tone controls on an amp)? Or are they static snapshots?


                      • Dan Korneff @Lindon

                        @Lindon said in Simple ML neural network:

                        @Dan-Korneff that would be very cool. Do these models include the ability to define parameters (like tone controls on an amp)? Or are they static snapshots?

                        The training scripts can create parameterized models, but I've only tested a static model so far.
                        Once I get my sea legs I'll pull in @Christoph-Hart to figure out multiple parameters.


                        • aaronventure @Dan Korneff

                          @Dan-Korneff man this sounds great, thanks.

                          how are you finding the results so far comparing the capture and the model output?

                          • Dan Korneff @aaronventure

                            @aaronventure I haven't gotten that far yet. Still in the "does this even work" stage 😀


                            • griffinboy @ccbl

                              @ccbl

                              Chowdhury? I'd love to talk to him! His dsp is very inspiring

                              • scottmire

                                Apologies if this has already been discussed... but I would think this would be low-hanging fruit:

                                GitHub - Tr3m/nam-juce: A JUCE implementation of the Neural Amp Modeler Plugin
                                GitHub (github.com)

                                • aaronventure @scottmire

                                  @scottmire keep in mind that this is GPLv3 licensed

                                  • ccbl @Dan Korneff

                                    @Dan-Korneff What sample rate were you using? I'd love to train at 96kHz or even 192kHz to reduce aliasing. My ultimate plan is to stack several smaller models of individual components, sandwiched between regular DSP, so reducing aliasing should be important in this case.

                                    • ccbl @griffinboy

                                      @griffinboy Here's the link to their discord channel for RTNeural

                                      https://discord.gg/enmpURqR

                                      • Dan Korneff @ccbl

                                        @ccbl I'm only experimenting with 48K at the moment. Feel free to look at my repository scripts to figure out the sample rate stuff. I'm taking baby steps with this stuff while I finish up other projects.


                                        • scottmire @aaronventure

                                          @aaronventure NAM itself is MIT-licensed and this is simply a JUCE implementation of the NAM player... so I have no idea how they could enforce a GPLv3 license. But I'm definitely no expert.

                                          • d.healey @scottmire

                                            @scottmire MIT is a permissive license, so you can take MIT code and relicense it pretty much however you like. If you are releasing a GPL project, then all code in that project needs to be under a GPL-compatible license. So the developer of nam-juce has relicensed NAM as GPL within their project.

                                            Libre Wave - Freedom respecting instruments and effects
                                            My Patreon - HISE tutorials
                                            YouTube Channel - Public HISE tutorials
