HISE Forum
    8 Times more CPU consumption on Aida-X Neural Models

General Questions · 48 Posts · 14 Posters · 2.4k Views
• orange (last edited by orange)

  @Christoph-Hart

  I can load Aida-X neural models in HISE, but I've observed a performance problem.

  When I load the same model into the Aida-X plugin and compare, the HISE-exported plugin consumes 8 times more CPU: the same model is loaded and there is nothing extra in the chain. This is obviously quite a lot.

  The test video:

  develop Branch / XCode 13.1
  macOS Monterey / M1 Max

• Christoph Hart @orange (last edited by Christoph Hart)

  @orange the neural node automatically processes stereo with two independent networks, so that's twice the CPU. Apart from that I'm not doing anything too special here, and the bulk of the CPU load comes from the RTNeural library, which should be as fast as it can be.
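To make the stereo point concrete: a mono-trained model has one input and one output channel, so a stereo effect needs one independent network instance per channel. A toy sketch of the idea (`MonoModel` is a made-up stand-in for an RTNeural inference object, not HISE code):

```cpp
#include <array>
#include <cmath>
#include <vector>

// Toy stand-in for a mono neural model (e.g. an LSTM amp capture):
// one float in, one float out, with internal state, so instances
// cannot be shared between channels.
struct MonoModel
{
    float state = 0.0f;

    float forward(float x)
    {
        state = 0.5f * state + x;   // fake recurrent state
        return std::tanh(state);
    }
};

// Stereo processing therefore runs two independent networks,
// i.e. twice the inference cost of the mono case.
struct StereoNeuralNode
{
    std::array<MonoModel, 2> models;   // one network per channel

    void process(std::vector<float>& left, std::vector<float>& right)
    {
        for (auto& s : left)  s = models[0].forward(s);
        for (auto& s : right) s = models[1].forward(s);
    }
};
```

For mono material you could instantiate a single model and copy its output to both channels, halving the inference cost.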

But I also noticed quite a big CPU hit when testing the new NAM model with RTNeural (about 10% CPU in Release mode, and completely unusable in Debug mode at 300%+ CPU).

Are you oversampling? Also, can you check whether AIDA-X uses multithreading to shovel the real CPU usage under the carpet (like we're doing with the background-thread convolution rendering)?
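For anyone unfamiliar with the "under the carpet" idea: a plugin can run the expensive DSP on a worker thread and have the audio callback merely copy out finished blocks, so a per-callback CPU meter reads low while total CPU use is unchanged. A minimal self-contained toy of that pattern (one block of latency, busy-wait sync instead of a real lock-free FIFO; this is neither HISE nor AIDA-X code):

```cpp
#include <atomic>
#include <cmath>
#include <cstddef>
#include <thread>
#include <vector>

// A background thread renders the expensive DSP into a ready buffer;
// the "audio callback" only swaps buffers, so its own measured cost is
// tiny even though the total work is identical.
struct BackgroundRenderer
{
    std::vector<float> ready;            // last block rendered by the worker
    std::atomic<bool> blockReady{false};
    std::atomic<bool> running{true};
    std::vector<float> pending;          // input waiting to be processed
    std::thread worker;                  // last member: starts after the rest

    explicit BackgroundRenderer(std::size_t blockSize)
        : ready(blockSize, 0.0f), pending(blockSize, 0.0f),
          worker([this] { run(); }) {}

    ~BackgroundRenderer() { running = false; worker.join(); }

    // Placeholder for the expensive part (stands in for neural inference).
    static float heavyDSP(float x) { return std::tanh(3.0f * x); }

    void run()
    {
        while (running)
        {
            if (!blockReady)
            {
                for (std::size_t i = 0; i < pending.size(); ++i)
                    ready[i] = heavyDSP(pending[i]);
                blockReady = true;
            }
        }
    }

    // "Audio callback": hand over the next input block, copy out the
    // previously rendered one. Cheap on this thread, one block of latency.
    void process(const std::vector<float>& in, std::vector<float>& out)
    {
        while (!blockReady) {}           // toy sync; real code uses a FIFO
        out = ready;
        pending = in;
        blockReady = false;
    }
};
```

The point is only that a low per-callback reading doesn't prove a cheaper algorithm, so the comparison should look at whole-process CPU.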

• orange @Christoph Hart (last edited by orange)

  @Christoph-Hart

  I definitely did not oversample.

  From what I've seen, Aida-X doesn't use multithreading for this.

  I think this performance difference is related to the backend method used in the Aida-X plugin, because Aida-X is already known to run faster than NAM. It might be a good idea to examine Aida-X's open-source implementation and apply its optimizations here.

  To reduce the overhead in HISE, I compiled the custom node and used it in a HardcodedMasterFX, but it didn't make much of a difference.

  This performance difference is really big. Especially when you consider a Pedal (neural) > Amplifier (neural) > Post FX chain, the total consumption will be very high.

  Screen Shot 2024-12-09 at 11.27.28.png

• Christoph Hart @orange

  Behind the scenes AIDA-X apparently also uses RTNeural, which does the heavy lifting for us, so I'm not sure where that difference is coming from.

  You can use different backends for RTNeural (Eigen or XSIMD); HISE uses XSIMD, but the difference should be in the 0%-10% range, not 800%.
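For reference, RTNeural's backend is chosen at build time via CMake options. The flags below follow the RTNeural README naming; verify them against your checkout before relying on them:

```shell
# Build with the xsimd backend (reportedly what HISE uses):
cmake -B build -DRTNEURAL_XSIMD=ON
cmake --build build

# Or with the Eigen backend, to A/B the CPU usage of the two:
cmake -B build -DRTNEURAL_EIGEN=ON
cmake --build build
```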

I think going forward with the neural stuff we need a collection of different models I can test against, as well as a list of "competitor" implementations (like Aida-X etc.). Currently I just threw in my self-baked sine generator model I made with PyTorch, invited some guests over to celebrate that it doesn't crash, and then moved on to the next shiny thing...

• orange @Christoph Hart (last edited by orange)

  @Christoph-Hart

  Here is a list of models that I picked. You can DOWNLOAD FROM HERE

  Here is a great place to find tons of guitar amp/pedal models for free: https://tonehunt.org

  Aida-X and NAM are the most used for guitar amplifiers. It would be perfect if we could open these models directly in HISE and use them with lower CPU.

  In my opinion, Aida-X is the simplest to use and train, but NAM's library is larger and its sound more accurate, at the cost of higher CPU usage.

• hisefilo @Christoph Hart

  Here's a model self-trained in AIDA_X_Model_Trainer.ipynb. Ugly as hell, but it loads and compiles.

  aidax_LSTM-16-0.aidax.zip

• ccbl @orange

  I can confirm the same findings anecdotally. I haven't put it through a performance tester, but I have to raise my buffer size in a HISE implementation, whereas I could run multiple models at 32 samples in AIDA-X.

  Here's a bunch of models for you to test; there are a couple of different sizes here.
  keras-models.zip

• ccbl @Christoph Hart

  @Christoph-Hart NAM models, at least the standard ones, are just heavier than the average LSTM model used. A lot of the LSTM implementations have been optimised to run on a MOD Dwarf or Raspberry Pi, so I think about 10% CPU for NAM models sounds right. There are actually smaller NAM models around that might compare more favourably: the "Lite", "Feather", and "Nano" ones. If you want any of those for testing, let me know and I'll train some up for you.
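A back-of-envelope way to compare model weights is to count multiply-accumulates per audio sample. For a single LSTM layer, each of the four gates multiplies by an input matrix (inputs × hidden) and a recurrent matrix (hidden × hidden), giving roughly 4·h·(i+h) MACs per sample (biases and the final dense layer ignored; NAM's WaveNet follows a different formula but has the same grows-fast-with-width flavour). A quick sketch:

```cpp
#include <cstddef>

// Rough multiply-accumulate count per audio sample for one LSTM layer:
// four gates, each with an input (inputs x hidden) and a recurrent
// (hidden x hidden) weight matrix. Biases and the output layer are
// left out of the estimate.
constexpr std::size_t lstmMacsPerSample(std::size_t inputs, std::size_t hidden)
{
    return 4 * hidden * (inputs + hidden);
}
```

With one input channel, an LSTM-16 (like the model attached earlier in this thread) comes to 4·16·17 = 1088 MACs per sample, while an LSTM-64 is already 16640, so hidden size dominates the cost, which is why the small NAM variants run so much cheaper.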

• ccbl @ccbl

  @ccbl Is it possible the overhead comes from defining the model in the script instead of loading it from a file?

• DabDab @orange

  @orange said in 8 Times more CPU consumption on Aida-X Neural Models:

  I can load the Aida-X neural model in HISE

  Can you please make a guide video or post on how to use neural networks or models in HISE? I am very excited to learn this new technology but couldn't find proper resources. I created a thread on this forum, but no luck.

  Bollywood Music Producer and Trance Producer.

• griffinboy @DabDab

  @DabDab

  I've done it with TensorFlow using Google Colab. I can help you get started if you just need to get your foot in the door.

• DabDab @griffinboy (last edited by DabDab)

  @griffinboy said in 8 Times more CPU consumption on Aida-X Neural Models:

  Google Colab

  Oh, nice. I am at the door. Ting Tong. 😀 Let me know the required steps, because I have never tried Google Colab or any of what you've described.

• orange @DabDab (last edited by orange)

  @DabDab I haven't trained any models yet, but loading models works like this:

  • Open the Neural Sine synth example in the Snippet Browser. Delete the synth.
  • Replace the sine example with one of the models above (the models are in JSON format, just copy/paste).
  • Open the neural node in an FX Scriptnode and select the model; then you're good to go.

  When you replace the neural network, you might need to restart HISE after saving; sometimes it doesn't update.

• ccbl @orange

  Something else, coming back to the performance difference: I vaguely remember seeing that there were pre-compiled optimisations for inferencing certain architecture sizes. Given that Aida, for example, has a pretty specific pipeline that people use on Colab, maybe they optimised for that specific number of layers and hidden size?
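That guess is plausible: RTNeural ships two APIs, a runtime `RTNeural::Model` that takes layer sizes from the loaded JSON, and a compile-time `RTNeural::ModelT` with sizes baked in as template parameters, and the latter is documented as significantly faster because fixed loop bounds let the compiler unroll and vectorise. If AIDA-X compiles `ModelT` instances for its standard sizes while a host goes through the dynamic API, a large gap could follow. A self-contained toy of the fixed-vs-dynamic-size idea (not RTNeural code):

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Runtime-sized layer: the loop bound comes from data loaded at runtime,
// so the compiler must keep a generic loop (analogous to RTNeural::Model).
float denseDynamic(const std::vector<float>& w, const std::vector<float>& x)
{
    float acc = 0.0f;
    for (std::size_t i = 0; i < w.size(); ++i)
        acc += w[i] * x[i];
    return acc;
}

// Compile-time-sized layer: N is a template parameter, so the compiler can
// fully unroll and vectorise the loop (analogous to RTNeural::ModelT).
template <std::size_t N>
float denseStatic(const std::array<float, N>& w, const std::array<float, N>& x)
{
    float acc = 0.0f;
    for (std::size_t i = 0; i < N; ++i)
        acc += w[i] * x[i];
    return acc;
}
```

Both produce identical results; only the code the optimiser can generate differs, which is exactly the kind of thing a profiler run would confirm or rule out.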

• Christoph Hart @ccbl

  @ccbl Yes, but none of that should cause an 8x performance difference (more like 20% or so). I just need to profile it and find out where it's spending its time.

• ccbl @Christoph Hart

  @Christoph-Hart Cool. Well, like I said, if you want any kind of model for testing, let me know. I can do either NAM WaveNet or LSTM, at any size.

• ccbl @Christoph Hart

  @Christoph-Hart One thing: the Neural Example in the docs says "It requires HISE to be built with the RTNeural framework to enable real time inferencing..."

  I don't remember doing this explicitly when I built HISE; I just built it the standard way and it all works. I'm assuming this is no longer a requirement? Otherwise, could it explain the performance penalty?

• DabDab @orange

  @orange said in 8 Times more CPU consumption on Aida-X Neural Models:

  @DabDab I haven't trained any models yet, but loading models works like this:

  • Open the Neural Sine synth example in the Snippet Browser. Delete the synth.
  • Replace the sine example with one of the models above (the models are in JSON format, just copy/paste).
  • Open the neural node in an FX Scriptnode and select the model; then you're good to go.

  When you replace the neural network, you might need to restart HISE after saving; sometimes it doesn't update.

  Wow, I will give it a try. Later I will need @griffinboy's help.

• scottmire

  @orange I am probably doing something wrong, but when I try to make this work, HISE crashes. Here is my process:

  • I copy the NN code from the Sine synth example into the interface onInit script in my project.
  • I replace everything in the obj declaration with the .json from one of your AIDA-X captures.
  • I add a scriptnode math.neural node in FX.
  • I select the NN obj in the dropdown.
  • HISE crashes.

  Any thoughts?

• JulesV (last edited by JulesV)

  @Christoph-Hart Has there been a fix for this?
