
    Agentic coding workflows

    AI discussion · 14 Posts · 6 Posters · 177 Views
    • Christoph Hart @HISEnberg

      @HISEnberg I need to watch out that I don't start responding to input like my LLM does: "Excellent feedback!" :)

      All LLMs are not great with HISEscript still.

      The knowledge gap between HiseScript and any other programming language is a temporary one. Sure, only the big models have somehow been fed HiseScript and can one-shot scripts that are (mostly) correct without any guidance, but that is precisely the pre-agent state of the art that I assumed would guard us from the robots taking over.

      I think I need to catch you guys up a bit on what I did last week so that you can fully understand the fundamental change this will bring: the key to solving the hallucination problem (which, again, I thought was deeply embedded in the very nature of LLMs) is to either ensure that the context window of the LLM contains the full set of API methods, language paradigms and properties (wasteful), or to instruct the LLM to always query an existing set of resources before writing a single line of code.

      There are multiple ways to achieve that efficiently, and I ended up writing a custom MCP server that the agent can query to fetch the correct API method names & signatures as well as component properties and LAF object properties, including descriptions. Then a reference guideline just needs to say: "WHENEVER YOU WRITE LAF CODE, LOOK UP THE FUNCTION NAME AND PROPERTIES!!!!!", point it at the source, and the LLM will lazily pull whatever it needs into the context window.

      https://github.com/christoph-hart/hise_mcp_server
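      To illustrate the kind of lookup the server enables, here is a minimal sketch. The entry shape (`name`, `signature`, `description`) and the sample catalog are assumptions for illustration, not the actual format used by the repo:

```javascript
// Hypothetical sketch: look up an API entry in a JSON catalog like the one
// the MCP server extracts from the HISE source. Entry shape is assumed.
function lookupApi(catalog, name) {
  return catalog.find(function (entry) { return entry.name === name; });
}

// Tiny stand-in catalog; the real list is auto-extracted from the source code.
const catalog = [
  { name: "drawRotarySlider",
    signature: "drawRotarySlider(g, obj)",
    description: "LAF callback for rotary sliders" }
];
```

      The point is that the agent pulls only the entries it needs into the context window instead of carrying the whole API reference around.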

      This provides the LLM with an always up-to-date property list (automatically extracted from the source code) in JSON format, e.g. here for the LAF properties:

      https://github.com/christoph-hart/hise_mcp_server/blob/master/data/laf_style_guide.json

      Then whenever the user prompts something blablabla with LAF, it will pull in this style guide:

      https://github.com/christoph-hart/hise_mcp_server/blob/2c37d5046414667d8f5229a5e2d3ece529225308/src/style-guides.ts#L114

      read it and understand what it can call.

      Running Claude Code and editing your scripts in an external IDE while recompiling in HISE is convoluted.

      The MCP server can talk to HISE via a REST API and send recompile messages, then get the compiler output back as an HTTP response and change the code until it compiles. This is the core development loop that powers the entire thing:

      https://awesomeclaude.ai/ralph-wiggum
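      Stripped of the HTTP plumbing, that loop might look like this sketch. `compile` stands in for the REST call that sends code to HISE and returns the compiler result, and `fix` stands in for the agent revising the code from the error output; both names and the result shape are hypothetical:

```javascript
// Sketch of the recompile loop: keep sending the script to the compiler
// and let the agent revise it from the errors until it compiles (or we
// give up after maxRounds attempts).
function compileUntilOk(code, compile, fix, maxRounds) {
  for (let round = 0; round < maxRounds; round++) {
    const result = compile(code);          // e.g. POST to HISE, get { ok, errors }
    if (result.ok)
      return { code: code, rounds: round + 1 };
    code = fix(code, result.errors);       // agent rewrites based on the errors
  }
  return { code: code, rounds: maxRounds, failed: true };
}
```

      The cap on rounds matters: without it, an agent that misreads the error output can loop forever.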

    • Christoph Hart @Christoph Hart

        The XML/node structure of HISE is ripe for AI collaboration I believe.

        Actually, the XML structure is not the best target for this, as it's a stale file that doesn't reflect the actual state of the HISE instance. The MCP server can query the entire component tree and fetch the properties, so you can select (!) some buttons in HISE and just tell it:

        "Move the button row to the top"

        The agent will then call hise_runtime_get_selected_components, which returns a list of JSON objects for each selected component (i.e. the actual selection (!) you made in the interface designer), apply the modification and send it back using hise_runtime_set_component_properties. Then it calls hise_runtime_get_component_properties to fetch the current state again and verifies it against the expected position. This is a trivial example and the verification loop is unnecessary here, but you can imagine how that scales up to be a real workflow boost.
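        The set-then-verify step can be sketched like this, with the MCP tool calls abstracted into a hypothetical `runtime` object and illustrative x/y properties (the real tools return full property lists, not just positions):

```javascript
// Sketch of the set-then-verify step: write the target position, read the
// component state back, and confirm the change actually landed in HISE.
function moveAndVerify(runtime, id, target) {
  runtime.setProperties(id, { x: target.x, y: target.y });  // set & send back
  const actual = runtime.getProperties(id);                 // fetch state again
  return actual.x === target.x && actual.y === target.y;    // verify
}
```

        Reading the state back instead of trusting the write is what makes the loop robust: if the verification fails, the agent retries instead of silently drifting.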

        • dannytaurus @Christoph Hart

          @Christoph-Hart I find ChatGPT to be surprisingly good at generating HISE scripts, but that might be the accumulation of dozens of prompts and my resulting corrections.

          Also, I use Claude in a Cursor project of the full HISE repo. I ask it things like "In HISE script, how would I do XYZ?" and it usually gives a very good answer.

          I believe that asking a model that has access to BOTH the full HISE source AND your current project would produce high-quality results.

          Meat Beats: https://meatbeats.com
          Klippr Video: https://klippr.video

        • dannytaurus @Christoph Hart

            @Christoph-Hart One of the reasons that AI coding works so well with Ruby on Rails web apps is because Rails has a huge amount of convention. It's heavily opinionated, uses standard libraries for lots of stuff and has a clearly defined directory structure.

            This means AI spends way less time (and tokens) deciding HOW and WHERE to implement a feature than in something like a vanilla Node project, etc.

            HISE has a similarly opinionated and well-defined structure, so I think that can be used to our advantage.


        • dannytaurus

              I'll also say that although I've seen many horror stories about how much people are spending on AI coding subscriptions, I'm working very happily within a $20/month Cursor plan.

              I use Cursor for 2-4 hours a day, pretty much every day, and I haven't hit the plan max since I started paying 18 months ago (Aug 2024).

              I do think the fact that I'm mostly using it to work on a Rails app (see post above) helps moderate token usage, and also I keep conversations fairly short, moving to a new chat when the context changes.

              Thorough rules files help too, although I'm still fighting to get it to follow certain rules. Seems a bit picky!


        • Christoph Hart @dannytaurus

                @dannytaurus lol I burned through the 100 bucks Claude Max subscription in 3 days.

        • dannytaurus @Christoph Hart

                  @Christoph-Hart Haha, you're in the honeymoon period! 😂

                  And I suspect you're diving way deeper with it than most of us are.


        • HISEnberg @Christoph Hart

                    @Christoph-Hart Woah I am gonna have to try this out 🤤

                    Sonic Architect && Software Mercenary

        • Christoph Hart @HISEnberg

          @HISEnberg Sure, I'm holding back about 50-60 commits of that stuff, but as soon as the MVP is solid I'll let you all play with it. The current feature I'm toying around with is the ability to create a list of semantic events that will produce an automated mouse interaction:

          [
            { type: "click", id: "Button1", delay: 0 },
            { type: "drag", id: "Knob1", delta: [50, 60], delay: 500 },
            { type: "screenshot", crop: "Knob1" }
          ]
                      

          The AI agent sends this list to HISE; HISE opens a new window with the interface and simulates the mouse interaction: it clicks Button1 (I had to hack the low-level mouse handling in JUCE to simulate a real input device for that), then drags Knob1 and takes a screenshot. The agent evaluates whether the screenshot exercises the code path of the LAF it previously sent to HISE, realizes that the path bounds need to be scaled differently, and retries until the screenshot matches its expected outcome. If it just needs to know whether the knob drag updated the reverb parameter, it fetches the module list from HISE and looks at the SimpleReverb's attribute directly - this won't waste context window space on image analysis.
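          A minimal validator for such an event list might look like this sketch; the accepted types and required fields mirror the example above, but the exact schema is an assumption:

```javascript
// Sketch: validate a semantic event list before sending it to HISE.
// Returns an array of error strings (empty means the list is well-formed).
function validateEvents(events) {
  const errors = [];
  events.forEach(function (e, i) {
    if (e.type === "click" && typeof e.id !== "string")
      errors.push(i + ": click needs an id");
    else if (e.type === "drag" && (!Array.isArray(e.delta) || e.delta.length !== 2))
      errors.push(i + ": drag needs a [dx, dy] delta");
    else if (e.type === "screenshot" && typeof e.crop !== "string")
      errors.push(i + ": screenshot needs a crop target");
    else if (["click", "drag", "screenshot"].indexOf(e.type) === -1)
      errors.push(i + ": unknown type " + e.type);
  });
  return errors;
}
```

          Rejecting malformed lists up front keeps the agent from burning a full simulation round on a typo in the event schema.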

          It's that feedback loop that allows it to autonomously iterate to the "correct" solution (aka the ralph loop), so even if it only gets it right 90% of the time, it will approximate 99.9% after three rounds of this.
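          The 90% → 99.9% figure follows if you treat the rounds as independent attempts: the chance that all n attempts fail is (1 - p)^n, so the success probability after n rounds is 1 - (1 - p)^n.

```javascript
// Success probability after n independent retries with per-round success p.
function successAfterRounds(p, n) {
  return 1 - Math.pow(1 - p, n);
}

// successAfterRounds(0.9, 3) ≈ 0.999 - three 90% rounds get you to 99.9%.
```

          The independence assumption is optimistic (a systematic misunderstanding fails the same way every round), but it captures why even a mediocre per-round hit rate converges quickly with a verification loop.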

        • iamlamprey

                        This was all a big surprise to wake up to.

          Altar had quite a bit of vibe-coding involved, specifically on the modular drag-n-drop FX panels and some of the third-party DSP nodes. I used various models available on GitHub Copilot (mostly Claude) because it seems to handle multiple files / larger context windows better than the standalone models.

          That being said, it still required quite a bit of manual fixing up. A common one mentioned above was using var instead of local in inline functions, and it would default to using JUCE buffers instead of the templated SNEX methods, which I assume comes with a performance hit.

          I think a good idea would be to establish some sort of global guiding prompt that includes some of the most important best practices for HISE and append it to individual case-specific prompts, e.g.:

                        "Use this bestPractices.md to familiarize yourself with the HISE codebase, remember to make use of the existing templated functions in the auto-generated files and only use JUCE methods as a fallback" etc etc

        • Lindon @iamlamprey

                          @iamlamprey said in Agentic coding workflows:

                          "Use this bestPractices.md to familiarize yourself with the HISE codebase, remember to make use of the existing templated functions in the auto-generated files and only use JUCE methods as a fallback" etc etc

                          this.

          Perhaps it might be worth thinking about how this is integrated into the agent's approach for every prompt it gets issued, a little like the MCP server interaction above...

          Big warning: I'm even more of a noob at this stuff than I think anyone here - I haven't used AI for any coding work - so watch out, I may be talking bullsh*t.

                          HISE Development for hire.
                          www.channelrobot.com
