@d-healey Ohh ok. If you have a youtube video tutorial about it I'll track that down then.
Posts made by ccbl
-
RE: Consistent Size and High Resolution on Any Kind of Screen?
-
RE: Consistent Size and High Resolution on Any Kind of Screen?
@d-healey how does one do that? Do you provide a series of buttons perhaps, but there's code that scales the UI?
-
Soft Bypass CPU Usage
Pretty simple question, I guess: when using SoftBypass, do the blocks contained within stop using CPU power?
For instance, if I used one of the bypass blocks that can switch between multiple signal chains, are the unused chains using CPU power?
-
RE: Hardcoded Neural Network does not work as expected
@Dan-Korneff I followed all the steps in your Gitlab to get the Aida-X trainer up and running, but when I get to the actual training part, it reads all the configs and starts the training process but then fails with
"RuntimeError: cuDNN error: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you passed in a non-contiguous input."
I tried with both 24bit and 32bit float input files.
-
RE: Hardcoded Neural Network does not work as expected
@Dan-Korneff are there any flags I need to enable when compiling HISE to get the RT_Neural stuff compiling into a plugin properly?
-
RE: Hardcoded Neural Network does not work as expected
@ustk so basically every NN model uses a number of weights, which determines how many parameters the training process can tune. The more parameters, the more CPU is required for processing.
With simpler circuits, or even individual components, you could probably use very small models. I'm actually planning to use the same approach as you, using NNs just for the non-linear stuff. I'm hoping to get it all working at higher sample rates though, to reduce aliasing, which would stack up quickly if you chain NNs together.
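To make the weights-to-CPU point concrete, here's a rough sketch (plain Python, illustrative only — the layer sizes are made up, and the LSTM layout is just the standard 4-gate formula, not anything HISE- or RTNeural-specific):

```python
# Parameter count of a small single-LSTM amp model, the kind of network
# commonly used for this. More parameters means more multiply-adds per
# audio sample, which means more CPU.

def lstm_params(n_in, n_hidden):
    # 4 gates, each with input weights, recurrent weights, and a bias
    return 4 * (n_in * n_hidden + n_hidden * n_hidden + n_hidden)

def model_params(n_in, n_hidden):
    # LSTM layer plus a dense output layer (weights + bias)
    return lstm_params(n_in, n_hidden) + (n_hidden + 1)

print(model_params(1, 8))   # tiny model: 329 parameters
print(model_params(1, 40))  # bigger model: 6761 parameters, ~20x the work
```

The cost grows roughly with the square of the hidden size, which is why a "slightly bigger" model can be dramatically more expensive per sample.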
-
RE: Apple has finally included 16GB of RAM in the base model! :)
@bendurso If all you want is the base model then yeah, I think this is a great step up and an awesome deal. As usual with Apple products though, if you want anything above the base model, yikes: hope you're willing to pay almost 5x market prices for the RAM and NAND upgrades.
-
RE: Hardcoded Neural Network does not work as expected
@Dan-Korneff I'm not sure if the tolerance refers to an absolute loss/ESR value, or whether it actually refers to a threshold of progress between Epochs.
Also, depending on the size of the model, there's basically a hard limit to how close the model can be to the original. The Rat is quite harmonically rich, so it will be on the harder end to model.
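For reference, the loss these amp-modeling trainers report is commonly ESR (error-to-signal ratio). A minimal sketch, assuming the tolerance is compared against a value like this rather than against per-epoch progress (which is exactly the ambiguity in the post):

```python
# ESR (error-to-signal ratio): squared error between target and
# prediction, normalised by the energy of the target. 0.0 is a
# perfect match; lower is better.

def esr(target, prediction):
    err = sum((t - p) ** 2 for t, p in zip(target, prediction))
    energy = sum(t ** 2 for t in target)
    return err / energy

target = [0.0, 0.5, 1.0, 0.5]
print(esr(target, target))                # perfect match -> 0.0
print(esr(target, [0.0, 0.4, 0.9, 0.4]))  # small mismatch -> ~0.02
```

Because the error is normalised by the target's energy, a harmonically rich, dynamic signal like a Rat can sit at a higher ESR than a cleaner one even when the model is doing equally well perceptually.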
-
RE: Help me understand Waveshaping
@clevername27 I appreciate your sentiment, but I'm just a guy with a normal day job with an interest in plugins. Not looking to make any money here, just releasing my work for free as and when inspiration strikes.
-
RE: Help me understand Waveshaping
@griffinboy ha ha, yeah. To be honest I wasn't expecting to make some UAD-level simulation, just to learn the basics of HISE a bit more.
I'm already very familiar with AI though through making a lot of NAM profiles and being one of the co-founders of ToneHunt.
Really looking forward to implementing quite a few ideas once that process is ironed out.
-
RE: Help me understand Waveshaping
@griffinboy Second question, I guess. Is this all affected by the sample format? -1/1 at 24-bit has a much tighter range than -1/1 at 32-bit float, right?
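A quick numeric check of what the two formats actually differ in (plain Python, not HISE-specific): both span -1..1, so the range is the same; what changes is the spacing between representable values.

```python
# Bit depth sets resolution, not range. 24-bit fixed point has one
# uniform step size across -1..1; 32-bit float (23-bit mantissa) has
# a step that shrinks as the signal gets quieter.

step_fixed = 2.0 / 2**24      # 24-bit fixed: same step everywhere
step_float_at_1 = 2.0**-23    # float32 spacing just below 1.0
step_float_small = 2.0**-33   # float32 spacing near ~0.001: far finer

print(step_fixed == step_float_at_1)   # True: at full scale they match
print(step_float_small < step_fixed)   # True: float wins at low levels
```

So for a full-scale signal the two are comparable; the float format's advantage is headroom and precision on quiet material, not a wider -1/1 range.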
-
RE: Help me understand Waveshaping
@griffinboy thanks for the explanation! I will probably look into the RT-Neural implementation once there's a known pipeline.
For now I just want to learn some stuff. Maybe I'll try out different ways of modulating the waveshape based on some kind of analysis. RMS maybe?
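The RMS idea in the post could look something like this — a hypothetical sketch (all names made up, not a HISE API): follow the RMS level of a block and let it set the drive of a simple tanh waveshaper.

```python
import math

# Envelope-driven waveshaping: measure the block's RMS level, then use
# it to modulate the drive of a memoryless tanh shaper.

def rms(block):
    return math.sqrt(sum(x * x for x in block) / len(block))

def shape(x, drive):
    return math.tanh(drive * x)

block = [0.1, -0.2, 0.3, -0.1]
drive = 1.0 + 4.0 * rms(block)        # louder block -> harder shaping
out = [shape(x, drive) for x in block]
```

In a real plugin the RMS would be smoothed over time (an envelope follower) rather than computed per block, so the drive doesn't jump between buffers.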
-
Help me understand Waveshaping
Hey folks. I thought I would take a crack at making an amp sim in HISE using the available DSP blocks. Would love some knowledgeable people to help fill in a bit of knowledge that's lacking in my area.
When it comes to the DSP aspect, is waveshaping relative or absolute? What I mean is: does it analyse each wave and then apply the shaping to it as a sort of percentage? Or is it absolute, so that, for instance, if the peak of the wave doesn't reach the threshold, the wave remains linear? I hope I'm explaining myself properly.
When it comes to the Shape FX, is there supposed to be a graphical representation of what's happening, or should I get an oscilloscope module to check what's happening to the wave?
Is clipping actually the better approach here? I guess the issue is that only hard clipping seems to be available as a module, so I couldn't set up, for instance, asymmetrical soft clipping?
-
RE: Simple ML neural network
@griffinboy Here's the link to their discord channel for RTNeural
-
RE: Simple ML neural network
@Dan-Korneff What sample rate were you using? I'd love to train at 96kHz or even 192kHz for aliasing-reduction reasons. My ultimate plan is to stack several smaller models of individual components, sandwiched between regular DSP, so I think reducing aliasing will be important in this case.
-
RE: Simple ML neural network
I asked Jatin to have a quick look through the thread to see if he could see any issues, he just had this to say.
"Hmmm, it seems to me that the model JSON file that is being loaded into the "Neural Node" is structured as a TensorFlow-style JSON file, but it's being loaded with HISE's loadPytorchModel() method? I don't really know what the Neural Node does internally, so idk how much I can help beyond that."
-
RE: Simple ML neural network
@resonant that's awesome. Would love to pick their brains and see if we can get it up and running for the rest of us.