HISE Forum
    Hardcoded Neural Network does not work as expected

    Bug Reports · 24 Posts · 7 Posters · 1.4k Views
    ccbl @Dan Korneff

      @Dan-Korneff I'm not sure whether the tolerance refers to an absolute loss/ESR value, or to a threshold of progress between epochs.

      Also, depending on the size of the model, there's effectively a hard limit on how close it can get to the original. The Rat is quite harmonically rich, so it will be on the harder end to model.
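      For reference, the ESR (error-to-signal ratio) metric mentioned above is typically the residual energy relative to the target energy. The exact tolerance semantics depend on the trainer, so this is just an illustrative NumPy sketch, not the trainer's actual code:

      ```python
      import numpy as np

      def esr_loss(target: np.ndarray, prediction: np.ndarray) -> float:
          """Error-to-signal ratio: energy of the residual divided by
          the energy of the target signal. 0.0 means a perfect match."""
          residual = target - prediction
          return float(np.sum(residual ** 2) / np.sum(target ** 2))

      # A prediction that is a slightly scaled copy of the target
      # gives a relative energy error of (1 - 0.9)^2 = 0.01:
      t = np.sin(np.linspace(0, 2 * np.pi, 1000))
      p = 0.9 * t
      print(esr_loss(t, p))
      ```

      An absolute tolerance would stop training once this value drops below a threshold, while a progress-based one would stop when it no longer improves between epochs.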

      ustk @Dan Korneff

        @Dan-Korneff Would it be CPU-greedy to run several small models at the same time?
        I'm thinking about modelling key components of a circuit. Not to reconstruct the full circuit, just to have those models here and there in a DSP chain...

        In fact my question is: since a single component/sub-circuit has simpler behaviour than a full circuit, are the resulting models lighter to run, at least if you only run one of them?

        Can't help pressing F5 in the forum...

        Dan Korneff @ustk

          @ustk I think it should be doable. My plan is a grey-box approach: use ML for some stages and analog modelling for others.
          Test out the snippet above and see how much CPU it uses.
          I'm still in the "trying to make this work" phase so I haven't gotten into measuring or optimizing yet.

          Dan Korneff - Producer / Mixer / Audio Nerd

          ccbl @ustk

            @ustk So basically, every NN model has a certain number of weights: these are the parameters that the training process tunes. The more parameters, the more CPU is required for processing.

            With simpler circuits, or even individual components, you could probably use very small models. I'm actually planning the same approach as you, using NNs just for the non-linear stuff. I'm hoping to get it all working at higher sample rates though, to reduce the aliasing, which would stack up quickly if you chain NNs together.
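            To give a feel for why smaller models are cheaper: for a single-layer LSTM (the architecture these capture models commonly use), the parameter count grows roughly with the square of the hidden size. A quick sketch using the standard LSTM parameter formula (the hidden sizes below are just illustrative, not taken from any specific trainer):

            ```python
            def lstm_param_count(input_size: int, hidden_size: int) -> int:
                """Weights for the 4 LSTM gates: input weights,
                recurrent weights, and one bias vector per gate."""
                per_gate = (input_size * hidden_size
                            + hidden_size * hidden_size
                            + hidden_size)
                return 4 * per_gate

            # Mono audio input, a few hypothetical hidden sizes:
            for hidden in (8, 16, 40):
                print(hidden, lstm_param_count(1, hidden))
            ```

            Halving the hidden size cuts the weight count (and the per-sample multiply-accumulate work) by roughly a factor of four, which is why several tiny component models can still be cheaper than one large full-circuit model.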

            ccbl @Dan Korneff

              @Dan-Korneff are there any flags I need to enable when compiling HISE to get the RT_Neural stuff compiling into a plugin properly?

              Dan Korneff @ccbl

                @ccbl I didn't add any flags and it exported correctly.


                JulesV @Dan Korneff

                  @Dan-Korneff Thanks for sharing this.

                  What is the method to train your models?

                  Dan Korneff @JulesV

                    @JulesV https://forum.hise.audio/post/86374


                    ccbl @Dan Korneff

                      @Dan-Korneff I followed all the steps in your GitLab to get the Aida-X trainer up and running, but when I get to the actual training part, it reads all the configs and starts the training process, then fails with:

                      RuntimeError: cuDNN error: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you passed in a non-contiguous input.
                      

                      I tried with both 24-bit and 32-bit float input files.
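                      For what it's worth, the "non-contiguous input" part of that message is about tensor memory layout; in PyTorch the commonly suggested workaround is to call `.contiguous()` on the input before it reaches the LSTM (whether that applies to this trainer is just a guess). The underlying idea can be shown with NumPy, keeping the sketch dependency-free:

                      ```python
                      import numpy as np

                      # A transposed view shares memory with the original array,
                      # but is no longer laid out contiguously in row-major order.
                      a = np.zeros((4, 3))
                      b = a.T
                      print(b.flags['C_CONTIGUOUS'])   # False

                      # An explicit contiguous copy restores the layout --
                      # the analogue of tensor.contiguous() in PyTorch.
                      c = np.ascontiguousarray(b)
                      print(c.flags['C_CONTIGUOUS'])   # True
                      ```

                      cuDNN's fused RNN kernels require contiguous inputs, which is why the error message calls this case out explicitly.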

                        Dan Korneff @ccbl

                        @ccbl I'm still working on the scripts, so they aren't 100% yet. Feel free to dig around and see if you can trace the issue.


                            tomekslesicki @Dan Korneff

                            @Dan-Korneff I'm also getting the

                             CUDNN_STATUS_NOT_SUPPORTED
                            

                            error when training with CUDA. With CUDA disabled, the training goes as expected but is very slow (as expected ;-))

                            Are you using CUDA to train the model? If so, how are you setting up the environment?

                            Like:

                            conda env config vars set CUBLAS_WORKSPACE_CONFIG=:16:8
                            conda activate base
                            

                            ...or some other way?

                            Thank you!

                              Dan Korneff @tomekslesicki

                              @tomekslesicki Same is happening here. It looks like there's an incompatibility with the latest CUDA driver. I'll have to tweak that.


                                tomekslesicki @Dan Korneff

                                @Dan-Korneff The solution is to install CUDA Toolkit 11.8 and pytorch 2.3.0 instead of the current version. Here's the install prompt:

                                conda install pytorch==2.3.0 torchvision==0.18.0 torchaudio==2.3.0 pytorch-cuda=11.8 -c pytorch -c nvidia
                                