    Checking the HISE Source with AI?

      dannytaurus @Christoph Hart

      @Christoph-Hart Agreed, backwards compatibility in HISE is something I'm not very familiar with, still being fairly new to the codebase, and it's something AI has literally no idea about.

        dannytaurus @Christoph Hart

        @Christoph-Hart said in Checking the HISE Source with AI?:

        If you want to vibecode your way to a plugin, just use JUCE with a webview, these are two mature codebases with lots of existing code examples so that the LLM output is much more robust.

        Well said, whether you meant it sincerely or sarcastically! 😜

        People are vibe coding fully-functional iOS apps in hours now, instead of months.

          clevername27 @Christoph Hart

          @Christoph-Hart I appreciate that, and thank you for your detailed response. But I already built the model, and it has been working perfectly. It finished the plugins I was unable to finish going back a year (because the functionality I needed was undocumented), and it has since written two other highly complex plugins. You have to create a model (and I use the term loosely) for HISE first; out of the box, the existing coding models are of little value for HISE development.

            clevername27 @dannytaurus

            @dannytaurus I can say with a great deal of certainty that I already created it, and I haven't written a single line of code since. I don't mean to be snarky; it's just the certitude I'm responding to. I wish you the very best in your efforts, and am happy to help you in any way I can.

              Oli Ullmann @clevername27

              @clevername27
              You created an AI model that generates the complete code for a complex plug-in, and since then you haven't had to write a single line yourself? Does that mean you didn't have to change anything in the code generated by the model? Am I understanding that correctly?

                dannytaurus @clevername27

                @clevername27 Just to be clear - I was talking about using AI to knock out bugs in the HISE codebase en masse. Building a plugin is an entirely different endeavour.

                Glad to hear you've got such a great model working for that.

                I agree about the undocumented stuff, by the way. Giving the AI the full codebase then querying it on features and techniques can be extremely effective.

                  Christoph Hart @dannytaurus

                  You have to create a model (and I use the term loosely) for HISE, first; out of the box, the existing coding models are of little value for HISE development.

                  Interesting, how did you do that? Did you use a pretrained model or just add some context data? I did some experiments where I fed it a JSON-formatted output with all the code examples from the doc codebase, but I used a very simple local LLM to test it, so the results were subpar.
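
                  For what it's worth, that data-preparation step can be as simple as scraping the fenced code examples out of the documentation markdown and writing them to a JSON file. The sketch below only illustrates the idea; the folder name and the fence convention are assumptions, not the script that was actually used.

                  ```python
                  # Hedged sketch only: gather fenced code examples from a local
                  # checkout of the HISE docs and dump them as JSON context data.
                  # The folder name and fence pattern are assumptions.
                  import json
                  import re
                  from pathlib import Path

                  DOC_ROOT = Path("hise_documentation")  # hypothetical doc checkout
                  FENCE = re.compile(r"`{3}(?:\w+)?\n(.*?)`{3}", re.DOTALL)

                  examples = []
                  for md_file in DOC_ROOT.rglob("*.md"):
                      text = md_file.read_text(encoding="utf-8", errors="ignore")
                      for snippet in FENCE.findall(text):
                          examples.append({
                              "source": str(md_file.relative_to(DOC_ROOT)),
                              "code": snippet.strip(),
                          })

                  out = Path("hise_doc_examples.json")
                  out.write_text(json.dumps(examples, indent=2), encoding="utf-8")
                  print(f"Extracted {len(examples)} code examples")
                  ```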

                    clevername27 @dannytaurus

                    @dannytaurus

                    @Oli-Ullmann said in Checking the HISE Source with AI?:

                    You created an AI model that generates the complete code for a complex plug-in, and since then you haven't had to write a single line yourself? Does that mean you didn't have to change anything in the code generated by the model? Am I understanding that correctly?

                    I'm using the word "model" in a very general sense. It's not an LLM model, per se. It's a set of data files that enables the AI to understand HISE, and contextualise it within the larger body of available knowledge of JUCE, audio development, etc.

                    I don't want to programme plugins. I want to develop them. The model helps me design them better, and writes all the source code. It learned some of its patterns from @David-Healey, and produces beautiful, clear, and well-architected output.

                    But it's not magic, and many things in plugin development (as we all know) are only revealed through the process of development. You still need a fundamental knowledge of HISE, C++, audio development, and software engineering. Otherwise, you have nothing to talk about with it. For example, I asked it to refactor one of my plugins to use Broadcasters—it was helpful for me to tell it how, where and why I wanted this done. At another point, it looked at David's use of namespaces and suggested I use that architecture.

                    Obviously, the model needs persistence between Agent sessions, and it needs to do this in a way that minimises token usage. Otherwise, by the time it loads everything up, you don't have enough tokens left to actually do your work. And even then, the model is far larger than can fit in the available token space (and there are other limitations besides). So, my model includes strategies to inform decision-making on whether the AI can act on what it already knows, or whether it needs to access the model for additional information, and how to do that without forgetting what it needed the information for in the first place.

                    Needless to say, you need to max out everything in Cursor that increases the token capacity. And you want to use ChatGPT because it excels at this type of learning.
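
                    As a rough illustration of that "act on what it knows vs. go back to the model" strategy, here is a hypothetical sketch (not the actual setup): a small manifest summarises each context file, and the agent only loads the files whose topics match the current task and still fit the remaining token budget. The file names, topics, token counts, and budget are made up.

                    ```python
                    # Hypothetical sketch of a token-budget-aware context loader.
                    # Manifest entries and the budget are illustrative assumptions.
                    MANIFEST = {
                        "broadcasters.md": {"topics": ["broadcaster", "callback"], "tokens": 3200},
                        "scriptnode.json": {"topics": ["scriptnode", "dsp"], "tokens": 5400},
                        "ui_patterns.md": {"topics": ["interface", "laf"], "tokens": 2100},
                    }

                    def select_context(task: str, budget: int = 8000) -> list[str]:
                        """Pick the context files worth loading for this task."""
                        task_lower = task.lower()
                        chosen, used = [], 0
                        for name, meta in MANIFEST.items():
                            relevant = any(t in task_lower for t in meta["topics"])
                            if relevant and used + meta["tokens"] <= budget:
                                chosen.append(name)
                                used += meta["tokens"]
                        return chosen

                    # Example: a broadcaster refactor only pulls in the broadcaster notes.
                    print(select_context("Refactor the preset browser to use a broadcaster"))
                    ```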

                      clevername27 @Christoph Hart

                      @Christoph-Hart It's too much to explain in a forum posting, but I'm happy to share all the information with you in some other format.

                      Yes, some of the files are JSON-formatted data; others are Markdown. I like the Markdown files because they're human-readable, but the JSON is more efficient for the LLM, so it's a mix. The Markdown files are usually the ones I work on together with the LLM.

                      I really am astonished at how well it works.

                        clevername27 @dannytaurus

                        @dannytaurus One thing I'd suggest is being clear with the AI that HiseScript is not JavaScript—in order for the LLM to be useful, it needs to understand every aspect of HISE. Otherwise, it's just vibe coding, which (personally) I think is useless. Either it works or it doesn't.

                          David Healey @clevername27

                          @clevername27 I just realised a problem with this. You've trained it on some of my Rhapsody code, which means it is going to produce code that follows similar patterns. But the next version of Rhapsody, which I'm currently working on, uses new patterns, so your code will be following old ways of doing things.

                          I also wonder about its use of broadcasters if you're working with the develop branch. I'm doing some stuff with broadcasters that relies on customisations I made in my fork, so if your AI isn't aware of those changes it might give confusing results.

                          I guess what it comes down to is AI doesn't innovate, it replicates.
