Simple ML neural network
-
@iamlamprey said in Simple ML neural network:
Ok so first: thank you - this is very helpful, but of course it just leads to more questions - and I'm slightly afraid this might turn into some sort of ML primer at the expense of your time, so I will try to keep it to a minimum...
@Lindon said in Simple ML neural network:
So I've spent some time with the research papers, the YouTube videos and the blogs. It would seem I might need a stateful LSTM model - but I could easily be wrong.
Yeh, any recurrent network is fine for a guitar amp. Jatin prefers GRUs, but all of the GuitarML stuff is LSTM if I remember correctly; I think GRUs are a bit easier on the CPU
Ok so fine, GRU or plain old LSTM - that's the "kind of" model I need, yes?
My understanding of how I (we) might approach this is very, very poor, but I think there are any number of ways to get to the JSON code that would be needed to "make it work" - but wow, do I have a bunch of questions... here are some of them:
- is this even possible in HISE? (I mean the playback, not the modelling or learning)
if the entire RTNeural framework is implemented, recurrent models should be supported
here's one for @Christoph-Hart then: is GRU or LSTM implemented in the HISE implementation currently?
- Can I "just" use (say) this stuff: https://www.youtube.com/watch?v=xkrqF0D8pfQn and take the .json output and load it into RTNeural and expect it to "sort of" work?
if he's just using regular old PyTorch and building simple models, you should be able to load the state_dict and run the export script, then bring the JSON into HISE
Well of course I have no idea what he's doing - but interestingly what do you mean by "simple model"?
- If not what's the best way to get from a bunch of guitar recordings to a set of .json files that I can then load into math.neural?
this is where training a model comes in; you'd need to learn some basic Python and familiarize yourself with packages like PyTorch or TensorFlow, as well as some audio-handling ones like Librosa
OK well, unlike Christoph, I once upon a time did a fair bit of Python work - I really like it as a language...
the short version is:
- load and preprocess/augment the audio data
So here I think (correct me if I'm wrong) this means: playing about 5 mins of guitar thru the amp - and capturing the output as well as capturing the DI-ed guitar signal - then making sure these are edited nicely to phase align with each other. Is that about it?
- convert it into a dataset that can be read by a model
making it a pickle dataset? Or perhaps NumPy?
- create the model & train it on the data
ha, here we get to the bit I'm most fuzzy on... create the model? So is this just setting up a big set of neural net nodes, which I can build with something like TensorFlow and Keras, yes? no? maybe?
As an aside, there seem to be a lot of params I can set up for this - I guess more research...
- export the model's state_dict using the RTNeural export script
I get a .json file yes?
- load it into a Math.Neural node
Which I load into here...
- I assume all these learned models are in fact static snapshots of dynamic processes, so every time the user changes the distortion control on the plugin I will need to load another model - if so, how practical is that?
recurrent models aren't really "static" - they can only estimate a single function at a given time, but that function can change depending on the "state" of the network. It basically has a short memory (literally in the name of LSTMs), so the function it's estimating is dependent on that memory
that being said, for a guitar amp the typical process is to train several models on various settings of the amp, then crossfade between them
Ok so I need to load up these .json "models" dynamically as the user fiddles with the UI controls...
-
@Lindon said in Simple ML neural network:
playing about 5 mins of guitar thru the amp - and capturing the output as well as capturing the DI-ed guitar signal
Is the NN result noticeably better than using an IR derived from the same recordings?
-
@Lindon said in Simple ML neural network:
Ok so fine, GRU or plain old LSTM - that's the "kind of" model I need, yes?
yep - for something like analog modelling, a recurrent network gives you a bit more nuance than a plain fully connected network.
Well of course I have no idea what he's doing - but interestingly what do you mean by "simple model"?
by "simple model" I mean one that only has to estimate a function. Other types, like generative models (VAEs, GANs) or ones that make use of embedding layers, might be outside the scope of the current implementation
So here I think (correct me if I'm wrong) this means: playing about 5 mins of guitar thru the amp - and capturing the output as well as capturing the DI-ed guitar signal - then making sure these are edited nicely to phase align with each other. Is that about it?
pretty much, you'd then cut those 5 minutes up into small pieces so you don't have 5 x 60 x 44100 samples being fed into the model at once
making it a pickle dataset? Or perhaps NumPy?
Torch & TensorFlow both have built-in tools for converting raw data into a format the model can read, including batching tools which speed up training on a GPU
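to make that concrete, here's a minimal PyTorch sketch - assuming `dry` and `wet` are phase-aligned 1-D float32 arrays holding the DI and amp'd recordings (placeholder names; load them however you like, e.g. with Librosa):

```python
import torch
from torch.utils.data import Dataset, DataLoader

# Minimal sketch: chop the aligned recordings into short segments
# so the model never sees all 5 minutes of audio at once.
class AmpDataset(Dataset):
    def __init__(self, dry, wet, segment_len=8192):
        self.dry, self.wet = dry, wet
        self.segment_len = segment_len

    def __len__(self):
        return len(self.dry) // self.segment_len

    def __getitem__(self, i):
        s = i * self.segment_len
        # shape (segment_len, 1): one channel of audio per time step
        x = torch.tensor(self.dry[s:s + self.segment_len]).unsqueeze(-1)
        y = torch.tensor(self.wet[s:s + self.segment_len]).unsqueeze(-1)
        return x, y

# batches of shape (32, segment_len, 1), reshuffled every epoch
loader = DataLoader(AmpDataset(dry, wet), batch_size=32, shuffle=True)
```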
ha, here we get to the bit I'm most fuzzy on... create the model? So is this just setting up a big set of neural net nodes, which I can build with something like TensorFlow and Keras, yes? no? maybe?
yep, you basically tell Keras/TensorFlow/Torch which layers, activations (non-linear functions) and input/output shapes you want, to ensure it can spit out the data you're expecting it to
sidenote: tensor shape errors are one of the worst parts of machine learning, prepare to spend a decent amount of time looking at these
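as a rough sketch of the amp-capture case in PyTorch (the hidden size of 16 is just a typical small value, not a recommendation), plus a bare-bones training loop using the loader from the dataset sketch above:

```python
import torch

# Minimal sketch: mono audio in -> small LSTM -> dense layer -> mono audio out
class AmpModel(torch.nn.Module):
    def __init__(self, hidden_size=16):
        super().__init__()
        self.lstm = torch.nn.LSTM(input_size=1, hidden_size=hidden_size,
                                  batch_first=True)
        self.dense = torch.nn.Linear(hidden_size, 1)

    def forward(self, x):
        # x: (batch, samples, 1)
        out, _ = self.lstm(x)
        return self.dense(out)  # (batch, samples, 1)

model = AmpModel()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
for x, y in loader:  # one epoch over the segments
    optim.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    optim.step()
```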
I get a .json file yes?
yep, the export script in the repo spits out a json file to load into HISE
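the real export script lives in the RTNeural repo and writes the exact layout RTNeural/HISE expects, but conceptually it's just serializing the state_dict - something along these lines (illustrative only):

```python
import json
import torch

# Illustrative only: dump every weight tensor in the state_dict
# as nested lists so it can live in a JSON file.
def export_to_json(model: torch.nn.Module, path: str):
    state = {name: t.tolist() for name, t in model.state_dict().items()}
    with open(path, "w") as f:
        json.dump(state, f)

export_to_json(model, "amp_model.json")
```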
Which I load into here...
sorry, this was my mistake. you don't directly load the JSON into the Neural Node, you load it with
const nn = Engine.createNeuralNetwork("networkname");
nn.loadPytorchModel(myJSONobject);
then when you add a Math.neural node, "nn" should appear in the dropdown
Ok so I need to load up these .json "models" dynamically as the user fiddles with the UI controls...
i don't think hot-swapping the actual networks on the fly would be good, rather use multiple lightweight networks and fade between them with a Gain node or something
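the blend itself is trivial - in Python terms it's just a linear crossfade (in HISE you'd wire this up with two Math.neural nodes and gain faders instead):

```python
# knob in 0..1: linear blend between two snapshot models
def blended(model_low, model_high, x, knob):
    return (1.0 - knob) * model_low(x) + knob * model_high(x)
```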
hope this helps, it's a big subject
-
@d-healey said in Simple ML neural network:
@Lindon said in Simple ML neural network:
playing about 5 mins of guitar thru the amp - and capturing the output as well as capturing the DI-ed guitar signal
Is the NN result noticeably better than using an IR derived from the same recordings?
very good question..... anyone got an opinion? I'm guessing the NN acts "more dynamically" compared to the IR....
-
@Lindon I'd say they can be about as accurate as a decent whitebox model, 8.5-9.0/10
-
@iamlamprey In that case I'd go with the IR which takes a few seconds to make - I have a script that can process 1000s of files to create loads of IRs in one go.
@iamlamprey said in Simple ML neural network:
rather use multiple lightweight networks and fade between them with a Gain node or something
Can do the same with multiple IRs
-
@iamlamprey said in Simple ML neural network:
@Lindon I'd say they can be about as accurate as a decent whitebox model, 8.5-9.0/10
so to get this clear in my head - the ML version is no better at the dynamic processing of guitar input than a set of IRs is?
..and by "dynamic" I mean different volume levels of input as the instrument is played...quietly (for those AOR ballads) or loudly...(thrashing it)
-
@Lindon it really depends on the implementation of the model and the data; a robust model trained on a lot of varying data (different amp settings, different players etc.) will sound great, and will probably beat IRs and blackbox models
that being said, a dedicated whitebox model recreating each component of the signal path is still probably the best method for physical accuracy. I don't think a test comparing a proper neural net vs a proper physical model has ever been done, for obvious reasons ($$$)
there's still a lot of progress to be made with neural audio, so who knows how things will look in a year or two
-
@iamlamprey said in Simple ML neural network:
@Lindon it really depends on the implementation of the model and the data; a robust model trained on a lot of varying data (different amp settings, different players etc.) will sound great, and will probably beat IRs and blackbox models
that being said, a dedicated whitebox model recreating each component of the signal path is still probably the best method for physical accuracy. I don't think a test comparing a proper neural net vs a proper physical model has ever been done, for obvious reasons ($$$)
there's still a lot of progress to be made with neural audio, so who knows how things will look in a year or two
Yes, you are probably right. I was, however, thinking the amount of effort to build a neural net based amp sim would possibly be greater than that to build a "Nebula-like", "dynamic" IR model...
-
@iamlamprey - so does a "well formed/well trained" neural net model reproduce the non-linearities of a real amp or not? I guess I'm back at "is ML even worth it right now?" as a question.
-
@Lindon here's an example of an effective implementation of NN:
https://mcomunita.github.io/gcn-tfilm_page/
Christian's repos are also public (or were last time I checked); they're more complex models compared to a regular LSTM, often utilizing multi-resolution STFT loss & other advanced things like dilated convolutions
I believe RTNeural has convolution dilation already implemented, so something like this is possible, albeit probably difficult
-
@iamlamprey said in Simple ML neural network:
@Lindon here's an example of an effective implementation of NN:
https://mcomunita.github.io/gcn-tfilm_page/
Christian's repos are also public (or were last time I checked); they're more complex models compared to a regular LSTM, often utilizing multi-resolution STFT loss & other advanced things like dilated convolutions
I believe RTNeural has convolution dilation already implemented, so something like this is possible, albeit probably difficult
Thanks - listening to the examples was interesting. It's clear the LSTM approach has some weaknesses at low-gain inputs... but the approach outlined in the paper seems to deal with these much better... but as you say, it's probably considerably more difficult to achieve - especially as I have no idea what STFT is, or dilated convolutions, or how to build a model using them.
Hmm, this is getting more complex the more I look at it.
-
@Lindon said in Simple ML neural network:
especially as I have no idea what STFT is
Short-Time Fourier Transform: basically a series of FFTs across the length of the signal to see how the frequency response changes over time. "Multi-resolution" STFT means creating multiple STFTs of the same signal using different settings, which helps with the frequency-time resolution trade-off
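it's short enough to sketch in PyTorch - the FFT sizes here are just common choices, not taken from the paper, and `pred`/`target` are assumed to be (batch, samples) tensors:

```python
import torch

def multi_res_stft_loss(pred, target, fft_sizes=(512, 1024, 2048)):
    # compare magnitude spectra at several resolutions and average
    loss = 0.0
    for n_fft in fft_sizes:
        win = torch.hann_window(n_fft, device=pred.device)
        p = torch.stft(pred, n_fft, hop_length=n_fft // 4,
                       window=win, return_complex=True).abs()
        t = torch.stft(target, n_fft, hop_length=n_fft // 4,
                       window=win, return_complex=True).abs()
        loss = loss + torch.mean(torch.abs(p - t))
    return loss / len(fft_sizes)
```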
or dilated convolutions
if I'm remembering correctly, dilated convolutions basically refers to a layer in the network "skipping" a certain number of values and relying on other layers to process them. This means the resolution for that single layer might be lower and therefore less computationally intensive, which means you can have a larger receptive field (how far back in time the model can look)
(Figure: dilation illustration - green: no dilation, red: skips 1 sample, blue: skips 2 samples)
or how to build a model using them
both Torch & TensorFlow have built-in dilation parameters for their conv layers:
tf.keras.layers.Conv1D(filters, kernel_size, dilation_rate=1)
torch.nn.Conv1d(in_channels, out_channels, kernel_size, dilation=1)
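and stacking them with doubling dilations is how you get a big receptive field cheaply - a rough Torch sketch:

```python
import torch

# Doubling the dilation each layer grows the receptive field exponentially:
# kernel_size=3 with dilations 1,2,4,8 sees 1 + 2*(1+2+4+8) = 31 samples back.
layers = []
for d in (1, 2, 4, 8):
    layers += [torch.nn.Conv1d(16, 16, kernel_size=3, dilation=d, padding=d),
               torch.nn.Tanh()]
stack = torch.nn.Sequential(*layers)  # input/output shape: (batch, 16, samples)
```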
Hmm, this is getting more complex the more I look at it.
Yep - the cool part of ML is how automated it is; the uncool part is the massive amount of prerequisite work needed to be able to create robust models that do all the work for you.
if you're wondering whether I think you should do ML or old-school blackbox modelling for your specific project, I'd lean towards the latter, especially if you already have experience in the field. Even hiring someone to consult on an ML project would be tricky, because the field is very new and there are still a lot of problems to be solved
-
@iamlamprey yep, I am beginning to see the magnitude of the problem space.
This is an interesting interview;
https://www.youtube.com/watch?v=WLTzbEKTxhk (starts at min 12)
For those who don't know, Neural DSP are the new kids on the block and I guess are the reason for all the fuss about neural-net-based guitar amp sims.
I looked up how they were funded: they got €5.9 million in 3 rounds of funding... and a lot of them arrived at work on day 1 with PhDs in this field, and it still took them from 2018 to 2022 to get anything out the door... so to be in their league = a lot of money + a lot of research + a lot of time.
-
@Lindon Let me know if you want my IR script :D
-
@d-healey said in Simple ML neural network:
@Lindon Let me know if you want my IR script :D
My that made me laugh out loud.....
-
Sorry to hijack this thread potentially. I wonder if it would be worth implementing https://github.com/sdatkinson/NeuralAmpModelerCore
NAM has been gathering a lot of steam recently; the training process for new models is really easy to do, and IMO it produces the best-sounding models of gear to date, at least for gear without time constants. It would be a great way to produce amp sims, or even just add a really accurate post-processing option in synths and virtual instruments. Imagine an e-piano or organ etc. with a great tube amp drive processor.
-
@ccbl So when I was looking at using ML to do Amp Sims I looked at the implementation you point at.
The important thing to remember when looking at this stuff is to realise that nearly every implementation (including the one you point at) is a "snapshot" of an amp in a given state (controls set to a given position). Yes, they tend to do the non-linearities better than, say, an IR would, but what they don't do is allow you to meaningfully change settings on the amp sim (pre-amp, treble boost, mid, bass, etc.) and get back the non-linearities associated with that combination of control settings.
What they do is post-process (or pre-process) the signal with EQ etc. Now this is fine, but... what you get are the non-linearities of a given snapshot with some post-processing on it. This can sound acceptable, but it's not actually correct. Those Neural DSP guys have some deep, gnarly and secret approaches/algos for doing the sim correctly... But if you uncover what those approaches are, then feel free to post them here :beaming_face_with_smiling_eyes:
-
@Lindon The audio signal does not have to be the only input into the network - you can train it with additional parameters as input, and if the training is successful it will mimic the parameters in the 1-dimensional space that you gave it.
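as a rough PyTorch sketch of that idea (hypothetical names - the knob value just becomes a second input channel alongside the audio):

```python
import torch

class ConditionedAmpModel(torch.nn.Module):
    def __init__(self, hidden_size=16):
        super().__init__()
        # input_size=2: one audio channel + one normalized knob value
        self.lstm = torch.nn.LSTM(input_size=2, hidden_size=hidden_size,
                                  batch_first=True)
        self.dense = torch.nn.Linear(hidden_size, 1)

    def forward(self, audio, knob):
        # audio: (batch, samples, 1); knob: (batch, 1), held constant over time
        knob = knob.unsqueeze(1).expand(-1, audio.shape[1], -1)
        out, _ = self.lstm(torch.cat([audio, knob], dim=-1))
        return self.dense(out)
```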
-
@Lindon I'm well aware of how NAM works. It is possible to make a parameterized model, but the thing is, with the right DSP surrounding it you don't really need to, and you drastically cut down the number of input/output pairs you need to make. For instance, you can split the pre-amp and power amp and just do digital EQ in between, given that the EQ tends to sit after the non-linear gain and the EQ itself behaves linearly. But those EQ inputs feed into the power amp model.
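in pseudo-Python the routing is just this (function names are hypothetical placeholders):

```python
# Hypothetical routing: two snapshot captures around a linear EQ stage
def amp_sim(x, preamp_model, tone_stack_eq, poweramp_model):
    x = preamp_model(x)       # non-linear preamp capture (e.g. a NAM model)
    x = tone_stack_eq(x)      # linear digital EQ between the stages
    return poweramp_model(x)  # non-linear power amp capture
```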
A lot of the time, for what you want to do, a single snapshot is actually fine - just varying the input gain alters the amount of saturation, say in a tube mic pre or something like that. So given NAM is the best out there right now, I think it would make a really useful module in HISE.