HISE Forum

    Posts by ccbl
    • Things I've Learned about ML Audio Modelling

      While learning how to use HISE to create analogue-modelled FX plugins (starting with amp sims, and later preamps, compressors, etc.), I've done some preliminary work.

      TL;DR: LSTM training has a large amount of variance, so you should run the training repeatedly until you get a good model. With recent code developments by Jatin Chowdhury and Mike Oliphant, I think it is worth integrating solutions that can load NAM models, which have clear advantages in certain scenarios, while LSTM might be preferred in others.

      Given that HISE currently implements RT-Neural, which can load Keras models, I started there. The easiest platform for generating models was the AIDA-X pipeline [https://github.com/AidaDSP/Automated-GuitarAmpModelling/tree/aidadsp_devel] (it runs in Docker; from talking to the developers on Discord, I was advised that, unlike what is currently documented in their GitHub, you should use the aidadsp/pytorch:next tag rather than aidadsp/pytorch:latest).

      I found the process of training LSTM models a little frustrating compared to my previous experience with NAM. Training is very erratic, with big jumps in error rate and an extremely variable final ESR, even when using the exact same input/output pair and model settings. I should mention, though, that I was attempting to train a high-gain 5150-type amp here, one of the harder things to model.

      To try to figure out the best model parameters, I used ChatGPT to create a script that repeats the training process while randomising a number of key training parameters. I ended up doing ~600 runs. I then performed a random forest regression analysis on those results, with best ESR as the outcome metric, and used the suggested parameters for a further 100 runs on the same input/output audio pair, keeping the parameters constant.
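      For anyone curious, the analysis step looked roughly like this. This is a simplified sketch: the CSV file and its column names are placeholders for whatever your training runner logs; only the random forest idea itself is from my actual workflow.

      ```python
      # Sketch: rank training hyperparameters by their influence on final ESR.
      # Assumes a CSV with one row per training run (columns are placeholders).
      import pandas as pd
      from sklearn.ensemble import RandomForestRegressor

      runs = pd.read_csv("lstm_runs.csv")   # hypothetical log of the ~600 runs
      params = ["hidden_size", "learning_rate", "batch_size", "segment_length"]
      X, y = runs[params], runs["best_esr"]

      rf = RandomForestRegressor(n_estimators=500, random_state=0)
      rf.fit(X, y)

      # Which parameters drive the final ESR the most?
      for name, imp in sorted(zip(params, rf.feature_importances_), key=lambda p: -p[1]):
          print(f"{name:>16}: {imp:.3f}")

      # The parameter set the forest predicts gives the lowest ESR.
      print(runs.iloc[rf.predict(X).argmin()][params])
      ```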

      LSTM training results of randomised parameters
      lstm randam parameters.png

      LSTM training results with Random Forest recommended parameters
      lstm rf suggested.png

      Comparison of average and IQR ESR of Random Parameters vs RF Suggested Parameters
      mean_iqr_plot.png

      The random forest suggested parameters did result in a lower average ESR, but they did little to reduce the variation: the IQR was roughly equivalent.

      Here is the exact same input/output audio pair trained with Neural Amp Modeler (NAM). I only did 10 runs here, and I didn't bother computing the IQR, because the extreme reduction in variability was already evident. NAM also achieved a better final ESR with a comparable model size.
      wavenet std model.png

      Something else I discovered during my initial tests with the AIDA pipeline was that some of the clean signal was being blended into my model. This is because a flag called skip_con is set to "1" by default in the config files; setting it to "0" removes the clean signal from the model. From talking to the folks at AIDA, that flag is there to help accurately model things like Tube Screamer and Klon pedals, which do have a clean blend in their designs, but obviously this isn't useful for high-gain amps, or for something like a Neve preamp, which has no such clean blend. skip_con will be set to 0 by default in future updates to the AIDA pipeline.
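      If you want to flip that flag yourself before training, something like this works. This is just a sketch: the config filename and the position of the key are assumptions about your particular setup; only the skip_con flag itself comes from the AIDA pipeline.

      ```python
      # Sketch: disable the skip connection (clean blend) in an AIDA training config.
      # "model_config.json" and the flat key layout are illustrative only.
      import json

      with open("model_config.json") as f:
          config = json.load(f)

      config["skip_con"] = 0   # 1 = blend the clean signal into the model, 0 = no clean blend

      with open("model_config.json", "w") as f:
          json.dump(config, f, indent=2)
      ```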

      There have been two recent developments. Jatin Chowdhury, the original author of RT-Neural, has created a fork of RT-Neural that can read NAM weights (https://github.com/jatinchowdhury18/RTNeural-NAM). However, this fork only has the NAM functionality and cannot read Keras models.

      What might be of more interest is Mike Oliphant's "NeuralAudio" (https://github.com/mikeoliphant/NeuralAudio). Mike's code can read NAM files using NAM core, as well as both NAM and Keras models using the RT-Neural implementation. In my opinion this would be the optimal solution to incorporate into HISE.

      The reason is flexibility. There are situations where NAM is clearly the best choice for neural models, especially in high-saturation cases like guitar amps. However, I think LSTM has advantages where time-domain information is more important, or on less complex signals, say component/circuit modelling, where LSTM is a little lighter on CPU. Being able to use both approaches would open up a lot of opportunities for processing in HISE, not just for audio FX but for processing instruments too.

      I will publish all this info and my scripts on GitHub soon for anyone who wants to use them. The script for AIDA will be useful for people who wish to run training repeatedly until they get a satisfactory ESR (which, in my opinion, is anything less than 0.02 for a high-gain guitar amp); the basic idea is sketched below.
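      The loop is nothing fancy: rerun the trainer, keep the best model so far, and stop once the ESR target is hit. This is a rough sketch rather than the actual script: the train.py command line and the results/stats.json file it reads are placeholders for whatever your pipeline actually produces.

      ```python
      # Sketch: rerun training until the best ESR drops below a target.
      # The training command and the results file are placeholders; adapt them
      # to the AIDA / Automated-GuitarAmpModelling setup you are running.
      import json
      import shutil
      import subprocess

      TARGET_ESR = 0.02   # a good high-gain amp model, in my opinion
      MAX_RUNS = 50

      def train_once() -> float:
          """Run one training pass and return the ESR it reports."""
          subprocess.run(["python", "train.py", "--config", "model_config.json"], check=True)
          with open("results/stats.json") as f:   # hypothetical results file
              return float(json.load(f)["best_esr"])

      best = float("inf")
      for i in range(MAX_RUNS):
          esr = train_once()
          print(f"run {i}: ESR = {esr:.4f}")
          if esr < best:
              best = esr
              shutil.copy("results/model_best.json", "model_keep.json")  # keep the best so far
          if best < TARGET_ESR:
              break

      print(f"best ESR after {i + 1} runs: {best:.4f}")
      ```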

      posted in Blog Entries
    • RE: 8 Times more CPU consumption on Aida-X Neural Models

      I can confirm the same findings anecdotally. I haven't put it through a performance tester, but I have to raise my buffer size in a HISE implementation, whereas I could run multiple models at 32 samples in AIDA-X.

      Here's a bunch of models for you to test; there are a couple of different sizes here.
      keras-models.zip

      posted in General Questions
    • RE: 3rd party Header Files in Hise?

      @griffinboy some guides on this kind of stuff would be amazing. I'd love to have a crack at some of this too. When you have the time it would be awesome.

      posted in ScriptNode
    • RE: How to make a guitar tuner

      @JulesV GVST GTune is still incredible. I'm assuming the OP is asking, though, because they want to build a tuner into a larger plugin, not because they think there's a gap in the market for a standalone.

      TBH if we come up with a good solution, I'd love to use that code too.

      posted in General Questions
    • RE: Help me understand Waveshaping

      @clevername27 I appreciate the sentiment, but I'm just a guy with a normal day job and an interest in plugins. I'm not looking to make any money here, just releasing my work for free as and when inspiration strikes.

      posted in General Questions
    • RE: Help me understand Waveshaping

      @griffinboy Ha ha, yeah. To be honest I wasn't expecting to make some UAD-level simulation, just to learn the basics of HISE a bit more.

      I'm already very familiar with AI though through making a lot of NAM profiles and being one of the co-founders of ToneHunt.

      Really looking forward to implementing quite a few ideas once that process is ironed out.

      posted in General Questions
    • RE: Simple ML neural network

      @griffinboy Here's the link to their Discord channel for RTNeural:

      https://discord.gg/enmpURqR

      posted in General Questions
    • RE: Simple ML neural network

      @resonant that's awesome. Would love to pick their brains and see if we can get it up and running for the rest of us.

      posted in General Questions
    • RE: Simple ML neural network

      @ccbl For instance, how I plan to use HISE is to create plugins where I use a NN to model various non-linear components such as transformers, tubes, FET preamps etc., and then use regular DSP in between. I'm just a hobbyist who plans to release everything FOSS though, so I'll have to wait and see what you much more clever folks come up with.

      posted in General Questions
    • RE: Simple ML neural network

      I realise we already hashed this discussion out, and people might be sick of it. But IMO the NAM trainer has a really intuitive GUI which allows for different-sized networks at various sample rates. It also has a very well-defined output model format, which seems to be a sticking point with RTNeural.

      Given the existence of the core C++ library https://github.com/sdatkinson/NeuralAmpModelerCore

      Might it be easier to implement this instead, given that many people mainly want to use ML networks for non-linear processing?

      posted in General Questions
    • RE: Simple ML neural network

      @Christoph-Hart So for now I decided to try to make a very simple audio processor using scriptnode. I followed the tutorial project, adding the weights for the math.neural module to pick up. It does see the object, but when I select it HISE crashes, a complete CTD.

      It's an LSTM network generated by an RTNeural-based project (the GuitarML Automated-GuitarAmpModelling pipeline).

      Not sure what the issue is.

      posted in General Questions
    • RE: Simple ML neural network

      @Christoph-Hart said in Simple ML neural network:

      @ccbl And can't you just convert the models to work in RTNeural? In the end it's just running maths and I'm not super interested in adding the same thing but in 5 variations.

      I understand why you feel that way. There's no point chasing your tail every time a new NN comes on the scene. What I would say, though, is that since NAM was released, I've seen at least 4 different companies bring out their own NN capture tech, and not a single one has taken off; all combined, they have fewer captures publicly available than one comprehensive NAM pack on ToneHunt.

      Multiple commercial companies have incorporated NAM, including Melda, Amplifire, Two Notes, Audio Assault, and Solemn Tones, amongst others. AnalogueX now uses NAM to capture preamps and compressors. Past of Future also does this.

      Essentially, NAM has become the industry standard format for analogue gear capture using neural networks at this point. So personally I think it would be worth implementing the NAM core tech, as there's a far higher chance people will want to use a NAM model than any of the other tech.

      Beggars can't be choosers, though. I don't mean this to come across as a demand so much as trying to present a compelling case. I don't have the skills to implement this myself, and I understand there's only so much time and willpower a person has to dedicate to a project like this.

      So I'm highly grateful for any way to incorporate NAM into the signal processing using HISE. Whether that involves conversion or native processing.

      posted in General Questions
    • RE: Simple ML neural network

      @Christoph-Hart A few things, I guess. The number of NAM captures currently dwarfs all the other formats supported by RTNeural (I have used GuitarML in the past, for instance) and it's only growing (see https://tonehunt.org/ for example). It's not just thousands of amp snapshots; people are really getting into studio gear captures too. There's a huge community for help with generating good captures and with technical training support. And of course it is probably the best sounding in terms of accuracy right now.

      I'm interested enough in using a neural net that I'm willing to use RTNeural; it's still a great system. But NAM is becoming a de facto standard in a lot of NN capture spaces, so for the future it seems like a good addition to the code base. And on a personal level, I've already created over 1000 NAM models.

      Maybe once I've learned more of the basics, if someone is willing to help me with it I would appreciate it.

      posted in General Questions
    • RE: Simple ML neural network

      @Lindon I'm well aware of how NAM works. It is possible to make a parameterized model, but with the right DSP surrounding it you don't really need to, and you drastically cut down the number of input/output pairs you need to capture. For instance, you can split the preamp and power amp and just do digital EQ in between, given that the tone stack tends to sit after the non-linear gain and the EQ itself behaves linearly. The EQ output then feeds into the power amp model; conceptually, the chain looks like the sketch below.
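      To be clear, this is only a structural sketch: the SnapshotModel class is a stand-in for whatever NAM/RTNeural loader you actually use, and the single peaking filter is just a placeholder for a proper tone stack.

      ```python
      # Sketch: snapshot preamp model -> linear EQ -> snapshot power amp model.
      # Only the chain structure is the point; the models here are stand-ins.
      import numpy as np
      from scipy import signal

      class SnapshotModel:
          """Placeholder for a neural snapshot of one non-linear stage."""
          def __init__(self, drive: float = 1.0):
              self.drive = drive

          def process(self, x: np.ndarray) -> np.ndarray:
              # A real model would run the network here; tanh is just a stand-in.
              return np.tanh(self.drive * x)

      def tone_control(x: np.ndarray, fs: float, f0: float, q: float) -> np.ndarray:
          # The linear EQ between the two non-linear stages (one peaking filter).
          b, a = signal.iirpeak(f0, q, fs=fs)
          return signal.lfilter(b, a, x)

      fs = 48000.0
      x = np.random.randn(int(fs))   # 1 second of test signal
      preamp, poweramp = SnapshotModel(drive=5.0), SnapshotModel(drive=2.0)

      y = poweramp.process(tone_control(preamp.process(x), fs, f0=800.0, q=0.7))
      ```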

      A lot of the time, for what you want to do, a single snapshot is actually fine: just varying the input gain alters the amount of saturation, say in a tube mic pre or something like that. So, given that NAM is the best out there right now, I think it would make a really useful module in HISE.

      posted in General Questions
    • RE: Simple ML neural network

      Sorry to hijack this thread potentially. I wonder if it would be worth implementing https://github.com/sdatkinson/NeuralAmpModelerCore

      NAM has been gathering a lot of steam recently; the training process for new models is really easy, and IMO it produces the best-sounding models of gear to date, at least for gear without time constants. It would be a great way to produce amp sims, or even just to add a really accurate post-processing option to synths and virtual instruments. Imagine an e-piano or organ with a great tube amp drive processor.

      posted in General Questions
    • RE: 8 Times more CPU consumption on Aida-X Neural Models

      @Christoph-Hart said in 8 Times more CPU consumption on Aida-X Neural Models:

      @Dan-Korneff alright then it stays, the people have spoken :)

      Just out of curiosity: who is using this in a real project and does it work there? Like except for the NAM loader?

      To be honest: I‘m a bit „disincentivized“ in fixing the NAM stuff because I don‘t see a use case except for loading in existing models from that one website with potential infringement of other peoples work, but maybe my understanding of the possible use cases here is wrong.

      @Christoph-Hart I'm only interested in using models of things I built and trained myself. I plan on using this as part of a hybrid approach where I model certain physical circuits with NAM and then use DSP for other aspects, not just grabbing stuff off ToneZone and putting a plugin wrapper around a single profile.

      As @Dan-Korneff said, NAM has a much more solid and predictable training pipeline, not to mention an easy-to-use, free, fast, cloud-based platform for the training. That's what makes it desirable. There really are a lot of options to make really cool stuff.

      posted in General Questions
    • RE: Hardcoded Neural Network does not work as expected

      @ustk So basically every NN model has a certain number of weights, i.e. parameters that the training process tunes. The more parameters, the more CPU is required for processing (see the quick count below).

      With simpler circuits, or even individual components, you could probably use very small models. I'm actually planning to use the same approach as you, using NNs just for the non-linear stuff. I'm hoping to get it all working at higher sample rates, though, to reduce aliasing, which would stack up quickly if you chain NNs together.
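      To put rough numbers on the "more parameters, more CPU" point, here is one way to count LSTM weights. A small sketch using PyTorch; the layer sizes are just examples, not a recommendation.

      ```python
      # Sketch: count the trainable parameters of a single-layer LSTM, which is
      # what the per-sample cost of these models scales with.
      import torch.nn as nn

      def lstm_param_count(input_size: int, hidden_size: int) -> int:
          # 4 gates, each with input weights, recurrent weights, and two bias vectors.
          return 4 * (hidden_size * input_size + hidden_size * hidden_size + 2 * hidden_size)

      for hidden in (8, 16, 32, 64):
          model = nn.LSTM(input_size=1, hidden_size=hidden)
          counted = sum(p.numel() for p in model.parameters())
          assert counted == lstm_param_count(1, hidden)
          print(f"hidden={hidden:>3}: {counted} parameters")
      ```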

      posted in Bug Reports
    • RE: Things I've Learned about ML Audio Modelling

      @hisefilo I actually compiled one just yesterday that can switch between three models. It works in Reaper.

      posted in Blog Entries