    Simple ML neural network

    • hisefilo

      Hi mates. I converted the simple neural network example from this page (https://medium.com/technology-invention-and-more/how-to-build-a-simple-neural-network-in-9-lines-of-python-code-cc8f23647ca1) to HISE Script and it's working. It's useless of course, just playing around.

      HiseSnippet 1187.3ocsV01aaaCDVJIpnwaCXEXerefnexNwIQJscaXoYqKuUXrkDi4r1AXXDvHQayTYRAJpj3Uz+y6ev1cjRVxujr.iMihFw6M9bO7HuqsRFxRSkJG20uXbByw8K85LVnGd3PJW3z5HG2m6kLVOTJtLMTo4IWlPSoQR5kC4oLmCFCKSYQNttq9NzC20Wyw76u9oCnwTQHqTjiy6k7P1uxGw0kRa+1egGGeBMhcAeTEqe0aaEJEGJikY.5V0y2IgF9Q5.1YTzrU7bbexwQbsT0QS0rTG20NPFMtyP4sBq8ummxuJlgKBb5.AxJ9DYbDhXTpygC4wQsKXgTGHJsK4jUsbx23cJOhOQdI270FEjROpxGtqLM7VcJ3ETEd9Uf2BfjaEHslEROyqSnhmnK0f34K7ZIzLUeZ3zGMVacV4Geh2N6P5vGLRxiH8yDgZtTTq3CRpUS86ZT6S0HvOESmoDj.xNj5AjMImR0C2lcWR8stqQi8p84Z0f3cjTSRTxnrPMQ1mnuURnJEcbZYfijHRQKpCpBZhFrawlbCUAaTZVrlrOweOiLy+0WpH0QsbiB3OuAcLX6XlXfd3djM2j2vXoEqlPr49Fa5x6Q1vrMvWUhYdFYMtHCZI3ZNMl+mLxsL9fg5JPmOQ2Grppqj2l1jDJiSqlA4NBHsau+kb.CPEv+oIoPdLPrWFlIgZpvcsMbWCgCQBFtqK4hoCV2qw3sKvGlyOEUDIGUuAYKRfcK97b7StyEDz6hkWQiIwraXwX1xoPkKIhEFSUTjlRqgfRqfaFbwfNLcKQRlkM552j.+KnWSRW3jOn7yBo94RgDdlnbdltHL4FAdkaV5XAMQyC+vDhe9ipWBd.0oXJbQdTM9xgaIVbCt8ZeenpqV4IUgx7SrIKeSE+LmeEqJpCVzg8bbx7UukE.FeQq.+mywIExEVJMrCXZw81JWyLQo4rjTW+d381pAgoT.p2eArNVFtU9l7HKEmY6pjpyTbNKtttG.M7tqA2XA6F4HaihzbC7InB7znnv0VedAKUWaB4cgDWiEMEEY6Q.irDa2vgTw.FQCMvHZIj2igWuXQby08d0l50n6iXs6v8Qu0NDJPjwrsSTbgt9Kfk84QLnY3OPdA7JpM7M1ifnxR1.bX4xKe5QJNSpYmKpi0WqCoJYVU86uPcvFpUx3XlZgpwFspGxw5hrQWwTMgBj3L1DCgtOS2Ry69aoUsiKzGWyD5JFJE3StmmvD2WeXmCsNAe86sNhpoXevbYfcILXXDDBtGwtAFpv1UbcuiXoeTKSfwJlqkIzrFNAio5o6fiisjq.3foZahMSEob83pi0Laa80dbs0e3oNdrv8Yds45vgKFuqr.7Br1+23MeHoux6398Yg5Rvtl2I+wCNQz+MP42fWDf2sfVaJ9cPM4YYi5.yLFx.jHDrXrN0cErhxt1GWiLSGlHxr3uge4JCv0t4JCJT5LhFpjWFZucfij8TiD.SByznq6cJtlD3Xtw.944usuyHX5vKCCQpXK.6K1mcWBed4R3yqVBed8R3y2tD97cKgOe+C5CNX9OmokirWS.AsO17Lhq6wBb5ESEoy+.rOosNK
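
      (For anyone who doesn't want to load the snippet: the network in that article is a single sigmoid neuron with three inputs, trained by gradient descent. Below is a rough C++ sketch of the same idea, not the snippet itself; the starting weights are made up and the update runs per example instead of per batch.)

      // Rough sketch of the maths behind the Medium example: one sigmoid neuron, three inputs.
      #include <array>
      #include <cmath>
      #include <cstdio>

      int main()
      {
          // training set from the article: the output simply copies the first input column
          const std::array<std::array<double, 3>, 4> inputs { { { 0, 0, 1 }, { 1, 1, 1 }, { 1, 0, 1 }, { 0, 1, 1 } } };
          const std::array<double, 4> targets { 0, 1, 1, 0 };

          std::array<double, 3> weights { 0.1, -0.3, 0.5 }; // arbitrary small starting values

          auto sigmoid = [] (double x) { return 1.0 / (1.0 + std::exp(-x)); };

          for (int iteration = 0; iteration < 10000; ++iteration)
          {
              for (size_t i = 0; i < inputs.size(); ++i)
              {
                  double sum = 0.0;
                  for (size_t j = 0; j < 3; ++j)
                      sum += inputs[i][j] * weights[j];

                  const double output = sigmoid(sum);
                  const double error = targets[i] - output;

                  // adjust each weight by error * input * sigmoid'(output)
                  for (size_t j = 0; j < 3; ++j)
                      weights[j] += error * inputs[i][j] * output * (1.0 - output);
              }
          }

          // the "new situation" from the article: [1, 0, 0] should come out close to 1
          const std::array<double, 3> newSituation { 1, 0, 0 };
          double newSum = 0.0;
          for (size_t j = 0; j < 3; ++j)
              newSum += newSituation[j] * weights[j];

          std::printf("prediction for [1, 0, 0]: %f\n", sigmoid(newSum));
      }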
      

      Any plans to add TensorFlow.js, Magenta, or any other ML library to HISE, @Christoph-Hart?

    • d.healey @hisefilo

        @hisefilo said in Simple ML neural network:

        Any plans to add Tensorflow.js Magenta or any other ML library to HISE

        For what purpose? Are you thinking of something like code generation?

        • hisefilo @d.healey

          @d-healey Nope. I was thinking of replicating some of these demos within HISE:

          Demos (magenta.tensorflow.org): a list of apps powered by Magenta models, including fun toys, creative applications, research notebooks, and professional-grade tools.

          • A Former User @hisefilo

            @hisefilo You can already import RTNeural using a 3rd party node; the only limitation is that you can't import weights from the exported JSON (you have to write them into a span manually).

            • hisefilo @A Former User

              @iamlamprey can you tell me more?? First steps? 3rd party node?? Totally new stuff for me 😧

              • d.healey @hisefilo

                @hisefilo https://docs.hise.audio/scriptnode/manual/third_party.html

                • A Former User @hisefilo

                  @hisefilo https://github.com/jatinchowdhury18/RTNeural

                   Here's the old node I was using. I'm not sure if it even works since I haven't touched it in a year, but it has the basic pipeline implemented and I've added some comments to help explain what's going on. I recommend starting with a simple LSTM like the guitar amp from this:

                   (towardsdatascience.com)

                   The node is called "tensorflow_node" or something, but it's not actually using TensorFlow at all:

                  // =========================| Third Party Node Template |=========================
                  
                  #pragma once
                  #include <JuceHeader.h>
                  #include <RTNeural.h>
                  #include "src/model_weights.h"
                  
                  //RTNEURAL_DEFAULT_ALIGNMENT = 16
                  //RTNEURAL_USE_EIGEN = 1
                  
                  namespace project
                  {
                  	using namespace juce;
                  	using namespace hise;
                  	using namespace scriptnode;
                  
                  	// =================| The node class with all required callbacks |=================
                  
                  	template <int NV> struct tensorflow_node : public data::base
                  	{
                  		// Metadata Definitions ------------------------------------------------------
                  
                  		SNEX_NODE(tensorflow_node);
                  
                  		struct MetadataClass
                  		{
                  			SN_NODE_ID("tensorflow_node");
                  		};
                  
                  		// set to true if you want this node to have a modulation dragger
                  		static constexpr bool isModNode() { return false; };
                  		static constexpr bool isPolyphonic() { return NV > 1; };
                  		// set to true if your node produces a tail
                  		static constexpr bool hasTail() { return false; };
                  		// Undefine this method if you want a dynamic channel count
                  		static constexpr int getFixChannelAmount() { return 2; };
                  
                  		// Define the amount and types of external data slots you want to use
                  		static constexpr int NumTables = 0;
                  		static constexpr int NumSliderPacks = 0;
                  		static constexpr int NumAudioFiles = 0;
                  		static constexpr int NumFilters = 0;
                  		static constexpr int NumDisplayBuffers = 0;
                  
                  		//RTNeural Model
                  
                  		float rtNeuralInput[1] = { 0.0 }; // Need a single-sample array to pass to the model as a "Tensor"
                  		int modelType = 0; // can't use Strings
                  		float predictedNextSample = 0.0; // initialize our "predicted" sample
                  
                  		// let's create a few Models
                  		// input layer goes first
                  		// then "hidden" layers (LSTMLayerT, GRULayerT, DenseT, Conv1DT)
                  		// syntax is LayerType<float, inSize, outSize>
                  		// make sure the sizes match between connected layers
                  		// finally output layer (usually a Dense layer with a single output)
                  
                  		//LSTM Example
                  		// LSTMs build a learned state from previous samples to predict the next one
                  		RTNeural::ModelT<float, 1, 1,
                  			RTNeural::LSTMLayerT<float, 1, 20>,
                  			RTNeural::DenseT<float, 20, 1>
                  		> modelLSTM;
                  
                  		//GRU Example
                   		// GRUs do a similar thing to LSTMs; Jatin prefers them
                  		RTNeural::ModelT<double, 1, 1,
                  			RTNeural::GRULayerT<double, 1, 20>,
                  			RTNeural::DenseT<double, 20, 1>
                  		> modelGRU;
                  
                  		//TCN Example
                   		// TCNs are great all-rounders; with dilated convolutions they can build up a large receptive field
                  		// the syntax for Conv layers is <double, inSize, outSize, kernel, dilation>
                  		RTNeural::ModelT<double, 1, 1,
                  			RTNeural::Conv1DT<double, 1, 1, 16, 1>,
                  			RTNeural::Conv1DT<double, 1, 1, 16, 2>,
                  			RTNeural::Conv1DT<double, 1, 1, 16, 4>,
                  			RTNeural::Conv1DT<double, 1, 1, 16, 8>,
                  			RTNeural::DenseT<double, 1, 1>
                  		> modelTCN;
                  
                  		//Weights		
                  		
                  		void set_weights()
                  		{	
                  			// the weights are imported from the src/model_weights.h file in the #includes				
                  			// we use Model.get<index>() to get the layer 
                  
                  			//TCN Example
                  			modelTCN.get<0>().setWeights(conv1Weights);
                  			modelTCN.get<1>().setWeights(conv2Weights);
                  			modelTCN.get<2>().setWeights(conv3Weights);
                  			modelTCN.get<3>().setWeights(conv4Weights);								
                  			modelTCN.get<0>().setBias(conv1Bias);
                  			modelTCN.get<1>().setBias(conv2Bias);
                  			modelTCN.get<2>().setBias(conv3Bias);
                  			modelTCN.get<3>().setBias(conv4Bias);			
                  			modelTCN.get<4>().setWeights(denseWeights);
                  
                  			//LSTM Example
                  
                  			modelLSTM.get<0>().setWVals(LSTMWVals);
                  			modelLSTM.get<0>().setUVals(LSTMUVals);
                  			modelLSTM.get<0>().setBVals(LSTMBVals);
                  			modelLSTM.get<1>().setWeights(LSTMDenseWeight);
                  			modelLSTM.get<1>().setBias(LSTMDenseBias);
                  		}
                  
                  		// Scriptnode Callbacks ------------------------------------------------------
                  
                  		void prepare(PrepareSpecs specs)
                  		{
                   			// reset all the models here
                   			modelTCN.reset();
                  			modelLSTM.reset();
                  			modelGRU.reset();
                  			set_weights();	// instantiate weights
                  		}
                  
                  		void handleHiseEvent(HiseEvent& e) {}
                  
                  		void reset() 
                  		{
                  			// reset everything again
                  			modelLSTM.reset();
                  			predictedNextSample = 0.0;
                  		}
                  
                  		template <typename T> void process(T& data)
                  		{
                  			static constexpr int NumChannels = getFixChannelAmount();
                  			// Cast the dynamic channel data to a fixed channel amount
                  			auto& fixData = data.template as<ProcessData<NumChannels>>();
                  			int numSamples = data.getNumSamples();	
                  
                  			for (auto ch : data) // for each channel
                  			{
                  				dyn<float> channel = data.toChannelData(ch);
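                   				// note: both channels run through the same modelLSTM instance here, so they share its recurrent state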
                  
                  				for (auto& sample : channel) //For each sample
                  				{
                  					if (std::isnan(sample)) //Check if NaN
                  					{
                   						// this checks whether we already have a learned state (we won't right after hitting Play)
                  						sample = predictedNextSample; //Use previous prediction as input if NaN
                  					}
                  					//Now call new prediction...
                  					//the model has an "internal state" or memory
                  					//which it learns based on previous inputs
                  
                   					sample = modelLSTM.forward(&sample);
                   					predictedNextSample = sample; // remember the latest prediction for the NaN fallback above
                  				}
                  			}
                  
                  			// Create a FrameProcessor object
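                   			// (processFrame() below is empty, so this leftover template loop is effectively a no-op)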
                  			auto fd = fixData.toFrameData();			
                  
                  			while (fd.next())
                  			{
                  				// Forward to frame processing
                  				processFrame(fd.toSpan());
                  			}
                  		}
                  
                  		template <typename T> void processFrame(T& data){}
                  
                  		int handleModulation(double& value)
                  		{
                  			return 0;
                  		}
                  
                  		void setExternalData(const ExternalData& data, int index) {}
                  
                  		// Parameter Functions -------------------------------------------------------
                  
                  		template <int P> void setParameter(double v){}
                  
                  		void createParameters(ParameterDataList& data){}
                  	};
                  }
                  

                   The weights look something like this (prepare your eyes):

                  std::vector<std::vector<std::vector<double>>> conv1Weights = {
                  	{
                  					{
                  						{
                  							-0.014952817000448704,
                  							0.05129363760352135,
                  							-0.10707060992717743,
                  							-0.011642195284366608,
                  							0.05948321148753166,
                  							0.11834733933210373,
                  							-0.03627893701195717,
                  							0.08669514209032059,
                  							0.022034088149666786,
                  							-0.03608795627951622,
                  							0.04046545550227165,
                  							0.03425503522157669,
                  							-0.0263582244515419,
                  							-0.06742122769355774,
                  							-0.13450580835342407,
                  							0.12268577516078949
                  						}
                  					},
                  					{
                  						{
                  							-0.0675336942076683,
                  							0.03803471103310585,
                  							0.06178430840373039,
                  							0.029312260448932648,
                  							-0.04388665407896042,
                  							0.05684272199869156,
                  							-0.057864125818014145,
                  							-0.11442451924085617,
                  							-0.005465891677886248,
                  							0.045379918068647385,
                  							-0.10136598348617554,
                  							-0.013837937265634537,
                  							-0.01725628226995468,
                  							-0.02865828201174736,
                  							-0.009762179106473923,
                  							-0.06300396472215652
                  						}
                  					}, // continues for many pages
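
                   (Side note: if your third-party node is allowed to read files from disk, RTNeural can itself parse the JSON exported by its Python utilities, so the giant weights header isn't strictly required. A minimal sketch based on the RTNeural README, untested inside a HISE node:)

                   // Sketch only: fill the LSTM model defined above from model.json at load time
                   // instead of hard-coding every weight. Assumes the JSON was exported with
                   // RTNeural's Python helpers and that the node may open files.
                   #include <fstream>
                   #include <RTNeural.h>

                   using LstmModel = RTNeural::ModelT<float, 1, 1,
                       RTNeural::LSTMLayerT<float, 1, 20>,
                       RTNeural::DenseT<float, 20, 1>>;

                   inline void loadWeightsFromJson(LstmModel& model, const char* path)
                   {
                       std::ifstream jsonStream(path, std::ifstream::binary);
                       model.parseJson(jsonStream); // sets the W/U/bias values of every layer
                   }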
                  
                  • hisefilo @A Former User

                    @iamlamprey @d-healey thanks mates!!!! I guess now I have plans for the weekend!!!!!

                    • Christoph Hart @hisefilo

                      @hisefilo Just FYI, I've started working on adding a neural network API to HISE. It will take a while so don't expect immediate results, but my plans are the following:

                      1. add RTNeural as a "network player" that can run trained models. This will be available in the compiled plugin as well as in HISE. The use cases for this are TBD (and I'm open to suggestions for possible applications), but I can imagine there will be both a scripting API for processing input data and a scriptnode node that will run the network on either the audio signal or cable level (a bit like cable_expr and core.expr). The advantage of this library is that it's fairly lightweight, plus it has an emphasis on realtime performance (although there will be use cases like preset creation, etc.).
                      2. Add a binding to one of the big ML libraries (the current favorites are either PyTorch or mlpack) to HISE in order to create and train neural networks. I know that the "industry standard" answer to this task is "use Python", however the integration of these libraries into HISE will yield a few advantages, plus I need to learn the API anyways so this will be a drive-by effect of the integration.

                      With the availability of training models within HISE we'll get these advantages over having to resort to Python:

                      • we need to convert the model data to be loaded into RTNeural anyways
                      • we can create training data from within HISE using a script that feeds into the training process
                      • no need to learn Python and its weird whitespace syntax lol. If the tools are available without having to set up the entire Python ML toolchain, it might be used for simpler tasks too.

                      I think in order to make this as non-bullshitty as possible (I have no interest in saying "HISE can now do AI too!"), we should talk about which use cases for neural networks occur in the development of HISE projects and then talk about the requirements.

                      Off the top of my head, there are a few applications:

                      • sound classification. Drop in a sample, the network categorizes it in whatever way you want and processes this information => train a network with samples, then run it with user-supplied samples to get the information out of it (a rough sketch of the inference side follows at the end of this post).
                      • preset creation. Drop in a sample, the network will analyse it and create preset data (a bit like this homie is doing it) => train a network with spectrograms of randomly created presets, then run the inverted process when the user drops a sample. A very powerful boost for this functionality could be the Loris library; I can imagine that an array of highly precise time-varying gain values associated with the harmonic index is much better input data than a few pixels from a spectrogram.
                      • use a neural network to perform audio calculations (from amp sims and other stuff to changing the instrument like the DDSP plugins do). I'm not very picky about distortion and amp simulation (a simple tanh does the job for me lol), but apparently that's one of the prime applications of RTNeural.

                      The scope of the planned neural network support is currently limited to anything that boils down to "float numbers in, float numbers out", and I'm not deep enough in the ML rabbit hole yet to decide how big of a step it would be to add language support (so you can finally make SynthGPT with HISE); plus, my current intuition is that there's a rather limited and hype-focused use case for NLP with regard to audio plugins.
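
                      (A rough sketch of what the "player" side of the sound classification idea could look like with RTNeural; the feature size, hidden size and class labels are invented for illustration, and a real network would be trained offline with its weights loaded as shown earlier in the thread:)

                      // Illustration only: a tiny RTNeural "player" that classifies a dropped sample.
                      #include <algorithm>
                      #include <array>
                      #include <iterator>
                      #include <RTNeural.h>

                      // 8 input features -> 16 hidden units -> 3 class scores (sizes made up)
                      static RTNeural::ModelT<float, 8, 3,
                          RTNeural::DenseT<float, 8, 16>,
                          RTNeural::TanhActivationT<float, 16>,
                          RTNeural::DenseT<float, 16, 3>> classifier;

                      // featureVector: e.g. a handful of spectral descriptors computed from the sample
                      inline int classifySample(const std::array<float, 8>& featureVector)
                      {
                          classifier.forward(featureVector.data());
                          const float* scores = classifier.getOutputs(); // pointer to the 3 output values

                          // index of the best score, e.g. 0 = "kick", 1 = "snare", 2 = "cymbal" (made-up labels)
                          return (int) std::distance(scores, std::max_element(scores, scores + 3));
                      }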

                      • Dan Korneff @Christoph Hart

                        @Christoph-Hart This is pretty awesome news. I agree that RTNeural looks like a good choice to add as a player. I wouldn't be completely bummed if we had to train models outside of HISE, but the integration of PyTorch would be much welcomed to streamline the workflow.
                        I would most likely be training models of basic things (like guitar amps) but also component level models.

                        • A Former User

                          I'll just leave this one here, the most impressive use case I've seen so far 🙂

                          • A Former User @Christoph Hart

                            @Christoph-Hart said in Simple ML neural network:

                             no need to learn Python and its weird whitespace syntax lol

                             I like Python 😂

                            • hisefilo @Christoph Hart

                               @Christoph-Hart WOW!!! Nice surprise.
                               I can see ML in HISE playing the same role HISE itself played for those of us who aren't JUCE/C++ experts in the DSP development arena. DDSP or RAVE would also be nice to have on board.
                               We're all waiting for your news!!

                              • Christoph Hart @hisefilo

                                 Alright, maybe I'll hold off on the HISE network training part and dust off my Python skillset; the pipeline is just too advanced not to use it for model building.

                                 DDSP and stuff is nice, but I think I need to add "conventional" neural networks first, then we can expand on that (basically fast-forwarding the last 30 years of development in this area lol).

                                 • A Former User @Christoph Hart

                                   @Christoph-Hart Jatin (the guy who made RTNeural) is super friendly and knowledgeable; I'm sure he'd be happy to answer any questions you have about recurrent models and such, if you haven't touched base with him already.

                                   I don't think training inside HISE would be particularly useful since it would be limited to the CPU anyway, right?

                                   • Christoph Hart @A Former User

                                     @iamlamprey No, libtorch has GPU support. But I just discovered TorchStudio, and that's precisely the kind of GUI wrapper I needed to avoid the frustrating Python first-steps stage...
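
                                     (For context, a minimal libtorch sketch of training a tiny regression net in C++, moving it to the GPU when one is available; the network and data here are made up for illustration, and this is not HISE code:)

                                     // Sketch only: libtorch (the PyTorch C++ API) training loop with CPU/GPU dispatch.
                                     #include <torch/torch.h>

                                     int main()
                                     {
                                         torch::Device device(torch::cuda::is_available() ? torch::kCUDA : torch::kCPU);

                                         // 1 input -> 20 hidden -> 1 output, roughly matching the RTNeural examples above
                                         auto net = torch::nn::Sequential(
                                             torch::nn::Linear(1, 20),
                                             torch::nn::Tanh(),
                                             torch::nn::Linear(20, 1));
                                         net->to(device);

                                         auto x = torch::rand({ 256, 1 }, device);
                                         auto y = torch::tanh(x * 2.0); // dummy target: some nonlinearity to learn

                                         torch::optim::Adam opt(net->parameters(), torch::optim::AdamOptions(1e-3));

                                         for (int epoch = 0; epoch < 500; ++epoch)
                                         {
                                             opt.zero_grad();
                                             auto loss = torch::mse_loss(net->forward(x), y);
                                             loss.backward();
                                             opt.step();
                                         }
                                     }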

                                     • Christoph Hart

                                      @Christoph-Hart Alright, the first experimental integration is pushed. You can now load neural networks using RTNeural and either run inference through the scripting API or run them as a realtime effect using the math.neural node. I've created an example project with some hello-world stuff and a roundtrip tutorial for getting started with TorchStudio:

                                      hise_tutorial/NeuralNetworkExample at master · christophhart/hise_tutorial (github.com)

                                      This is far from production-ready, but it should be good enough to play around with; let me know what features might be interesting to add.

                                       • Dan Korneff @Christoph Hart

                                        @Christoph-Hart oh shit! There go my weekend plans. R.I.P. my marriage 😄😄

                                        • Matt_SF @Christoph Hart

                                          @Christoph-Hart Awesome. My knowledge of Neural Networks is close to 0 but this is a good opportunity to learn something new. Thank you genius!

                                           • A Former User

                                             Checking this out now, very exciting stuff. I also appreciate you skipping over MNIST in the roundtrip example 😉
