
    ccbl (@ccbl)
    25 Reputation · 16 Profile views · 9 Topics · 90 Posts · 0 Followers · 0 Following


    Best posts made by ccbl

    • Things I've Learned about ML Audio Modelling

      While learning how to use HISE to create analogue-modelled FX plugins (starting with amp sims, and later preamps, compressors, etc.), I've done some preliminary work.

      TL;DR: LSTM training has a large amount of variance, so you should run the training repeatedly until you get a good model. With recent code developments by Jatin Chowdhury and Mike Oliphant, I think it is worth integrating solutions that allow NAM models, which have clear advantages in certain scenarios, while LSTM might be preferred in others.

      Given that HISE currently implements RTNeural, which can load Keras models, I started there. The easiest-to-use platform for generating models was the AIDA-X pipeline (https://github.com/AidaDSP/Automated-GuitarAmpModelling/tree/aidadsp_devel), which runs via Docker. From talking to them on Discord, I was advised that, unlike what is currently documented in their GitHub, you should use the aidadsp/pytorch:next image, not aidadsp/pytorch:latest.

      I found the process of training LSTM models a little frustrating compared to my previous experience with NAM. The training process is very erratic, with big jumps in error rate and a highly variable end-point ESR, even when using the exact same input/output pair and model settings. I should mention, though, that I was attempting to train a high-gain 5150-type amp here, one of the harder things to model.

      To try to figure out the best model parameters, I used ChatGPT to create a script that repeats the training process while randomising a number of key training parameters. I ended up doing ~600 runs. I then performed a Random Forest regression analysis on the results of that training, with best ESR as the outcome metric. I used the suggested parameters for a further 100 runs on the same input/output audio pair, keeping the parameters constant.
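
      Before I publish the actual scripts, here is a rough Python sketch of the idea, so you can see what I mean by "randomise, retrain, then analyse". This is not the real AIDA-X code: run_training is a hypothetical wrapper around the trainer, and the parameter names and value ranges are only illustrative.

      # Sketch only: run_training stands in for a call into the AIDA-X /
      # Automated-GuitarAmpModelling trainer that returns the best ESR of one run.
      # The parameter names and ranges below are illustrative, not the pipeline's
      # real option names.
      import csv
      import random

      from sklearn.ensemble import RandomForestRegressor

      def run_training(params):
          # Stand-in so the sketch runs end-to-end; replace with the real trainer call.
          return random.uniform(0.01, 0.2)

      PARAM_SPACE = {
          "hidden_size": [16, 24, 32, 40],
          "learn_rate":  [1e-4, 5e-4, 1e-3, 5e-3],
          "batch_size":  [16, 32, 64],
          "segment_len": [512, 1024, 2048],
      }

      # 1) Repeat training with randomised parameters and log the best ESR per run.
      results = []
      for run in range(600):
          params = {name: random.choice(values) for name, values in PARAM_SPACE.items()}
          results.append({**params, "esr": run_training(params)})

      with open("sweep_results.csv", "w", newline="") as f:
          writer = csv.DictWriter(f, fieldnames=list(PARAM_SPACE) + ["esr"])
          writer.writeheader()
          writer.writerows(results)

      # 2) Random Forest regression with best ESR as the outcome metric, then pick
      #    the parameter combination the forest predicts to give the lowest ESR.
      X = [[r[name] for name in PARAM_SPACE] for r in results]
      y = [r["esr"] for r in results]
      forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

      candidates = [[random.choice(PARAM_SPACE[name]) for name in PARAM_SPACE]
                    for _ in range(5000)]
      best = min(candidates, key=lambda c: forest.predict([c])[0])
      print("suggested parameters:", dict(zip(PARAM_SPACE, best)))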

      LSTM training results of randomised parameters
      lstm randam parameters.png

      LSTM training results with Random Forest recommended parameters
      lstm rf suggested.png

      Comparison of average and IQR ESR of Random Parameters vs RF Suggested Parameters
      mean_iqr_plot.png

      The Random Forest suggested parameters did result in a lower average ESR; however, they did not do much to reduce the variation, with a roughly equivalent IQR.
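
      For what it's worth, the mean/IQR comparison in that plot is nothing fancy; conceptually it's just the following (the file names here are made up, each file holding one best-ESR value per run):

      # Compute mean and interquartile range (75th minus 25th percentile) of the
      # best ESR per run, for the random sweep vs the RF-suggested parameters.
      import numpy as np

      random_esr = np.loadtxt("random_params_esr.txt")
      rf_esr = np.loadtxt("rf_suggested_esr.txt")

      for label, esr in [("random params", random_esr), ("RF suggested", rf_esr)]:
          q1, q3 = np.percentile(esr, [25, 75])
          print(f"{label}: mean ESR {esr.mean():.4f}, IQR {q3 - q1:.4f}")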

      This is the exact same input/output audio pair trained with Neural Amp Modeler (NAM). I only did 10 runs here, and I didn't bother computing the IQR, because the extreme reduction in variability was already evident. NAM also obtained a better final ESR with a comparable model size.
      wavenet std model.png

      Something else I discovered during my initial tests with the AIDA pipeline was that some of the clean signal was blended into my model. This is because, in the config files, a flag called skip_con is set to "1" by default. Setting it to "0" removes the clean signal from the model. From talking to the folks at AIDA, that flag is there to help more accurately model things like Tube Screamers and Klon pedals, which do have some clean blend in their designs, but obviously this isn't useful for high-gain amp designs, or for something like a Neve preamp, which has no such clean blend. skip_con will be set to 0 by default in upcoming updates to the AIDA pipeline.
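
      If you want to flip that flag from a script rather than by hand, something along these lines works. Note that the config file name and where exactly skip_con sits in the file are assumptions on my part; the only detail taken from the pipeline itself is the flag name.

      # Hypothetical helper: turn off the clean-signal skip connection in an
      # AIDA-X model config before training. Adjust the path/key to match the
      # actual config layout in your checkout.
      import json

      CONFIG_PATH = "model_config.json"

      with open(CONFIG_PATH) as f:
          config = json.load(f)

      config["skip_con"] = 0   # 0 = no clean blend, 1 = keep the skip connection

      with open(CONFIG_PATH, "w") as f:
          json.dump(config, f, indent=2)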

      There have been two recent developments. Jatin Chowdhury, the original author of RTNeural, has created a fork of RTNeural that can read NAM weights (https://github.com/jatinchowdhury18/RTNeural-NAM). However, this fork only has the NAM functionality and cannot read the Keras models.

      What might be of more interest is Mike Oliphant's "NeuralAudio" (https://github.com/mikeoliphant/NeuralAudio). Mike's code can read NAM files using the NAM core, as well as both NAM and Keras models using the RTNeural implementation. In my opinion this would be the optimal solution to incorporate into HISE.

      The reason is flexibility. There are situations where NAM is clearly the best choice for neural models, especially in high-saturation cases like guitar amps. However, I think LSTM has advantages where time-domain information matters more, or on less complex signals, say component/circuit modelling, where LSTM is a little lighter on CPU. Being able to use both approaches would open up a lot of opportunities for processing in HISE, not just for audio FX but for processing instruments too.

      I will publish all this info and my scripts on GitHub soon for anyone who might want to use them. The script for AIDA will be useful for people who wish to do multiple runs of training until they get a satisfactory ESR (which, in my opinion, is anything less than 0.02 for high-gain guitar amps).
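
      As a preview, the retry logic is essentially this (run_training is the same hypothetical trainer wrapper as in the sweep sketch above, and the parameter values are only examples):

      # Keep retraining the same input/output pair until the ESR is acceptable
      # or we give up. 0.02 is my working threshold for high-gain guitar amps.
      TARGET_ESR = 0.02
      MAX_ATTEMPTS = 25

      params = {"hidden_size": 40, "learn_rate": 5e-4}   # e.g. the RF-suggested values

      best_esr = float("inf")
      for attempt in range(1, MAX_ATTEMPTS + 1):
          esr = run_training(params)
          best_esr = min(best_esr, esr)
          print(f"attempt {attempt}: ESR {esr:.4f} (best so far {best_esr:.4f})")
          if best_esr <= TARGET_ESR:
              break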

      posted in Blog Entries
      ccbl
    • RE: 8 Times more CPU consumption on Aida-X Neural Models

      I can confirm the same findings anecdotally. I haven't put it through a performance tester, but I have to raise my buffer size in my HISE implementation, whereas in AIDA-X I could run multiple models at 32 samples.

      Here's a bunch of models for you to test; there are a couple of different sizes here.
      keras-models.zip

      posted in General Questions
      ccbl
    • RE: 3rd party Header Files in Hise?

      @griffinboy some guides on this kind of stuff would be amazing. I'd love to have a crack at some of this too. When you have the time it would be awesome.

      posted in ScriptNode
      ccbl
    • RE: How to make a guitar tuner

      @JulesV GVST GTUNE is still incredible. I'm assuming OP is asking though because they want to build a tuner into part of a larger plugin, not because they think there's a gap in the market for a standalone.

      TBH if we come up with a good solution, I'd love to use that code too.

      posted in General Questions
      ccbl
    • RE: Help me understand Waveshaping

      @clevername27 I appreciate your sentiment, but I'm just a guy with a normal day job with an interest in plugins. Not looking to make any money here, just releasing my work for free as and when inspiration strikes.

      posted in General Questions
      ccbl
    • RE: Help me understand Waveshaping

      @griffinboy ha ha yeah. To be honest I wasn't expecting to make some UAD-level simulation, just to learn the basics of HISE a bit more.

      I'm already very familiar with AI though through making a lot of NAM profiles and being one of the co-founders of ToneHunt.

      Really looking forward to implementing quite a few ideas once that process is ironed out.

      posted in General Questions
      ccbl
    • RE: Simple ML neural network

      @griffinboy Here's the link to their discord channel for RTNeural

      https://discord.gg/enmpURqR

      posted in General Questions
      ccbl
    • RE: Simple ML neural network

      @resonant that's awesome. Would love to pick their brains and see if we can get it up and running for the rest of us.

      posted in General Questions
      ccbl
    • RE: Simple ML neural network

      @ccbl for instance, how I plan to use HISE is to create plugins where I use a NN to model various non-linear components such as transformers, tubes, FET preamps etc., and then use regular DSP in between. I'm just a hobbyist who plans to release everything FOSS though, so I'll have to wait and see what you much more clever folks come up with.

      posted in General Questions
      ccbl
    • RE: Simple ML neural network

      I realise we already hashed this discussion out, and people might be sick of it. But IMO the NAM trainer has a really intuitive GUI which allows for different-sized networks at various sample rates. It also has a very defined output model format, which seems to be a sticking point with RTNeural.

      Given the existence of the core C++ library (https://github.com/sdatkinson/NeuralAmpModelerCore), might it be easier to implement this instead, given that most people want to use ML networks primarily for non-linear processing?

      posted in General Questions
      ccbl

    Latest posts made by ccbl

    • RE: What are you using to build UI for your plugin? What's your preferred way and why?

      Honestly, just Knobman for filmstrips and Photoshop. I know it's not the most efficient, but I'm comfortable with it and I'm happy enough with the results.

      posted in General Questions
      ccbl
    • RE: 8 Times more CPU consumption on Aida-X Neural Models

      I also just want to say, @Christoph-Hart, in the face of the really shitty attitude that person above showed: I love HISE so much. I never thought I'd be able to create a simple plugin that just changes the gain, let alone the stuff I can do with HISE as someone with almost zero coding experience. It's such an awesome thing you've created.

      posted in General Questions
      ccbl
    • RE: 8 Times more CPU consumption on Aida-X Neural Models

      @Dan-Korneff said in 8 Times more CPU consumption on Aida-X Neural Models:

      @Christoph-Hart said in 8 Times more CPU consumption on Aida-X Neural Models:

      @Dan-Korneff alright then it stays, the people have spoken :)

      Just out of curiosity: who is using this in a real project and does it work there? Like except for the NAM loader?

      To be honest: I‘m a bit „disincentivized“ in fixing the NAM stuff because I don‘t see a use case except for loading in existing models from that one website with potential infringement of other peoples work, but maybe my understanding of the possible use cases here is wrong.

      Adding NAM support is super helpful for some people as the training scripts are solid and actively maintained. I'll be using RT-Neural on this end for the parameterized support. Still making a few tweaks to the training script I started with, but I’m planning to build around it soon.

      I would love to see an example of a parameterised model up and running. The way I plan on profiling won't require parameters for the most part, but it would be useful every now and then.

      posted in General Questions
      ccbl
    • RE: 8 Times more CPU consumption on Aida-X Neural Models

      @Christoph-Hart said in 8 Times more CPU consumption on Aida-X Neural Models:

      @Dan-Korneff alright then it stays, the people have spoken :)

      Just out of curiosity: who is using this in a real project and does it work there? Like except for the NAM loader?

      To be honest: I‘m a bit „disincentivized“ in fixing the NAM stuff because I don‘t see a use case except for loading in existing models from that one website with potential infringement of other peoples work, but maybe my understanding of the possible use cases here is wrong.

      @Christoph-Hart I'm only interested in using models of things I built and trained myself. I plan on using this as part of a hybrid approach where I model certain physical circuits with NAM and then use DSP for other aspects. Not just grabbing stuff off ToneZone and putting a plugin wrapper around a single profile.

      As @Dan-Korneff said, NAM has a much more solid and predictable training pipeline, not to mention an easy-to-use, free, fast, cloud-based platform for the training. That's what makes it desirable. There really are a lot of options to make really cool stuff.

      posted in General Questions
      ccbl
    • RE: Simple ML neural network

      Hey folks! I've been out of the loop for a while. Wondering if there has been any progress with the NAM integration?

      I actually got a functional plugin working with LSTM, but the training procedure is very chaotic so I haven't moved ahead much with it. I will probably provide some updates on that in a different thread for more discussion around the particulars, because I think there are other efficiencies that might need to be ironed out.

      posted in General Questions
      ccbl
    • RE: Things I've Learned about ML Audio Modelling

      @scottmire Hi there. I think I cheated, basically: I used a safe bypass module matrix that switches the network based on which buttons are pressed at the time. I'm not sure if this has performance implications, but it certainly seems like the more networks there are in the code base, the worse the plugin runs. When I get more time I think a new thread is in order where we can all discuss optimisations and solutions.

      posted in Blog Entries
      ccbl
    • RE: Let’s Build the “League of Newbies”

      Happy to be added to the chat.

      posted in General Questions
      ccbl
    • RE: How to make a guitar tuner

      @JulesV GVST GTUNE is still incredible. I'm assuming OP is asking though because they want to build a tuner into part of a larger plugin, not because they think there's a gap in the market for a standalone.

      TBH if we come up with a good solution, I'd love to use that code too.

      posted in General Questions
      ccbl
    • RE: 8 Times more CPU consumption on Aida-X Neural Models

      @Christoph-Hart One thing, in the Neural Example in the docs it says this "It requires HISE to be built with the RTNeural framework to enable real time inferencing..."

      I don't remember doing this explicitly when I built HISE; I just built it the standard way and it all works. I'm assuming this is just no longer a requirement? Otherwise, could it explain the performance penalty?

      posted in General Questions
      ccbl
    • RE: Plugin doesn't respect UI Zoom Factor onInit

      @Lindon said in Plugin doesn't respect UI Zoom Factor onInit:

      @ccbl said in Plugin doesn't respect UI Zoom Factor onInit:

      @oskarsh how would that look in context?

      inline function onFiftyControl(component, value)
      {
      	Settings.setZoomLevel(0.5);
      };
      Content.getComponent("Fifty").setControlCallback(onFiftyControl)
      

      Do you define the value as 1.0?

      I really don't know coding very well.

      so like this:

      inline function onFiftyControl(component, value)
      {
          // only apply the zoom when the button is switched on (value == 1)
          if(value)
              Settings.setZoomLevel(0.5);
      };
      Content.getComponent("Fifty").setControlCallback(onFiftyControl);
      
      

      After flushing my %appdata% config this worked! Thanks!

      posted in Scripting
      ccbl