Dynamic EQ Yet?
-
@iamlamprey said in Dynamic EQ Yet?:
think ML might have some potential for innovation -
Absolutely. I think this kind of thing is the best marketing you can invest your time in
-
@iamlamprey said in Dynamic EQ Yet?:
tree-smack
I always dreamed of an ML algo that could learn articulations from real players, and learn from styles too... like being able to play a trumpet in a Miles style
-
@hisefilo I'm sure it's possible, just need someone infinitely smarter than me to develop it
my ultimate plugin would be a 5MB VST that the end user opens, types "aggressive guitar" or "soft piano" into, and gets an ML-derived physical model with unlimited expressivity...
-
@iamlamprey I think you are smart enough. I know nothing about ML, but I guess you can train a model to classify audio based on spectral data? I mean, a ppp C4 note from a trumpet will always look similar, no matter the trumpet
-
@hisefilo I'm only smart enough to use someone else's model, writing my own would result in my head exploding at least 3 times
But I guess you can train a model to classify audio based on spectral data? I mean, a ppp C4 note from a trumpet will always look similar, no matter the trumpet
I believe a guided model could be taught what a ppp trumpet C4 sounds like (and every other note). With a big enough model of the actual instrument, you could then teach a second model the differences between players... in theory:
"now that we know what the instrument sounds like, here's it being played by different musicians"
-
@d-healey Yes, optimize your website and work hard on the SEO.
That is the cheapest and most effective marketing you can get.
And yet us little guys seem to ignore it a lot of the time. This is what I am doing with my synth website these days; it's a lot of boring work, but worth doing.
-
@iamlamprey said in Dynamic EQ Yet?:
@hisefilo said in Dynamic EQ Yet?:
innovative product ideas.
I think ML might have some potential for innovation - I managed to turn a tree-smack into a taiko from a few minutes of training using Magenta
unfortunately it's mono and only 16kHz, so there's some work left to be done :P
before:
after:
Did you use NSynth? Machine learning is a really interesting subject. I’ve been diving into it as well in recent weeks…
-
@UrsBollhalder I used DDSP from Magenta:
GitHub - magenta/ddsp: DDSP: Differentiable Digital Signal Processing
Starting with training a model using the autoencoder:
Then bringing that into the timbre transfer demo
The resynthesis tools in DDSP were exactly what I was looking for in terms of realistic synthesis of sounds - it uses a combination of additive synthesis, subtractive (filtered) noise and convolution, and sounds pretty good
The main issues I came across were the sample rate limitation (16kHz) and the fact that it only trains on mono signals. I'm not well-versed in Python, so I couldn't figure out how to fix those two things...
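The additive-plus-filtered-noise idea DDSP builds on can be sketched with plain NumPy. This is a hypothetical simplification, not DDSP's actual code: in the real model a neural network predicts the per-frame harmonic amplitudes and noise filter, whereas here they are hard-coded.

```python
# Simplified DDSP-style synthesis: a bank of harmonic sinusoids plus
# shaped noise. All parameter values below are made up for illustration.
import numpy as np

SR = 16000                      # DDSP's 16kHz limitation, as mentioned above
N = int(SR * 1.0)               # one second of audio
t = np.arange(N) / SR

# Harmonic (additive) part: fundamental plus decaying overtones
f0 = 220.0
harmonic_amps = [0.5 / (k + 1) for k in range(8)]
harmonic = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
               for k, a in enumerate(harmonic_amps))

# Noise (subtractive) part: white noise through a crude low-pass
# (a moving average stands in for DDSP's learned FIR filter)
rng = np.random.default_rng(0)
noise = rng.standard_normal(N)
kernel = np.ones(32) / 32
shaped_noise = np.convolve(noise, kernel, mode="same")

signal = harmonic + 0.05 * shaped_noise
signal /= np.max(np.abs(signal))    # normalise to [-1, 1] for output
```

Doubling the sample rate or going mono-to-stereo doubles the number of samples to model per second, which is presumably where the extra training cost comes from.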
-
@iamlamprey Yes. I've been going through those repos as well!
How much time did it take you to train those models? I am not sure why they use the reduced sample rate etc. Probably to make it more accessible for web audio use! Upping the quality will probably just increase the time it takes to compute things! Amazing stuff!!!
-
@UrsBollhalder It was pretty fast. I used about 10 minutes of samples and left it running overnight, so about 12-14 hours total for a relatively accurate resynthesis.
I think the 16kHz sample rate is for the actual training; once you go up to 44.1kHz or something higher like 96kHz, the training would take a lot longer...
Probably the same thing for mono vs stereo (half the channels = half the time)