Checking the HISE Source with AI?
-
I haven't been around for a year, so I don't know where this discussion stands. I am wondering:
-
Has the HISE source code been worked through with AI to help address potential and existing issues in the code, especially the kinds that are challenging for people to find? I realise that things aren't that simple, one reason being that @Christoph-Hart has a roadmap for everything, and this might represent an immediate detour.
-
The same question applies to using AI to document HISE.
-
I have created a HISE language model for the AI coding models that seems to understand the HISE language, with all its quirks and edge cases. In my experience, I no longer wonder how to do something or why something isn't working. Thus far, it has written all my code for me, I haven't had to make any code-level corrections, and I am very pleased with what it has written, both stylistically and in terms of patterns.
EDIT:
@Christoph-Hart, @David-Healey
Obviously, people will ask me to share this model (which isn't really the right word). I'm hesitant, because there are a whole bunch of people here who earn their living writing HISE plugins, and I don't want to disrupt that. It might also encourage people who don't have enough knowledge to use it effectively and give them a bad experience with HISE. And I also don't know what I don't know about this, and haven't thought it through.
I'm in the process of sharing it with @David-Healey, and am of course open to sharing it with @Christoph-Hart. As to what becomes of it, I would like to leave that in their hands. In all likelihood, it would need additional work to be better generalised, and it might be better used first for documentation and work on the HISE source; these are the types of things they are experts in (among many other things).
I need to prepare for NAMM, and then will be at NAMM, so I may be quiet here about this until I'm back. Also, I've edited my original verbiage to be more measured (as I so often do here, lol).
-
Nope
-
Nope
-
@David-Healey Do you know why not? [Edited for emotion]
Oh, and Hi Dave, nice to see you.
-
@clevername27 I couldn't tell you
-
@David-Healey I would do the first one, but @Christoph-Hart really needs to do it; if I upload a million patches, he will go nuts. It would only take him a week at most, so I am puzzled as to why he hasn't done it…although I don't know (thus the puzzlement).
It would solve the other problems so quickly. Those models can also finish all the things that are incomplete in HISE, very quickly. Likewise, they can generate complete documentation and automatically keep it up to date. That way, we'd finally have a fully functioning, documented HISE at all times.
-
@clevername27 I think you have more faith in AI than I do.
-
HISE is a big codebase with a variety of dependencies; I think the risk of hallucination and context rot is a lot bigger than for your average JUCE plugin.
There are also a lot of templated functions, and specific versions of things like JUCE and the C++ version HISE uses, which LLMs usually don't "keep in mind".
Also, GLM4.7 has noticeably outperformed Claude and GPT 5.2, at least in my own tests.

I'm sure Christoph uses some of these tools occasionally, but I think if he started depending on them for the short-term benefit, we'd probably end up with a lot more bugs in the long term.
-
@clevername27 I've been fixing a few isolated bugs lately, with the help of Claude in Cursor.
From doing that work, I can say with a degree of certainty that it's not a good idea to let any AI go wild fixing bugs across the whole codebase right now.
-
I reviewed & merged some of the pull requests from Danny yesterday, and I really appreciate the work here, which filled a lot of blind spots that I didn't see.
That being said, the fixes I merged are surface-level at best: usually one-liners that remove a quirk in the HISE UX, a simple bug fix, or small additions that work within a very tight scope. Again, these are really nice, and I'm sure they remove a lot of pain points for many people.
I haven't merged a few of the others, on the other hand, mostly because they interfere with backwards compatibility or are potentially problematic in use cases other than the one the fix immediately addresses (e.g. this one here). Merging those requires a lot of testing & edge-case handling, so you can imagine that the time to find & fix the bug (regardless of whether AI was used or not) is about a tenth of the time needed to actually test whether it is a safe change.
Now the idea of just letting AI slop take over and completely butcher the HISE codebase "in a matter of hours" is a frightening thought: not only would I lose authorship over the codebase so that it becomes a black box to everybody, we can also safely assume that the AI definitely does not have the "gut feeling" for recognizing "problematic" fixes like the one linked above vs. unproblematic ones (the ones I merged yesterday). I'm also pretty confident that this won't be solved by the next iteration of LLM models.
Someone has created a HISE language model for the AI coding models that understands the HISE language, with all its quirks and edge cases.
While I see the benefit of AI in enhancing the documentation to make it more approachable (and we're experimenting with different things there too), the idea of an LLM that spits out HiseScript code is a bit contrary to the entire concept of HISE. If you want to vibecode your way to a plugin, just use JUCE with a webview; these are two mature codebases with lots of existing code examples, so that the LLM output is much more robust.
-
@Christoph-Hart Agreed. Backwards compatibility in HISE is something I'm not very familiar with, still being fairly new to the codebase, and it's something AI has literally no idea about.
-
@Christoph-Hart said in Checking the HISE Source with AI?:
If you want to vibecode your way to a plugin, just use JUCE with a webview; these are two mature codebases with lots of existing code examples, so that the LLM output is much more robust.
Well said, whether you meant it sincerely or sarcastically!

People are vibe coding fully-functional iOS apps in hours now, instead of months.
-
@Christoph-Hart I appreciate that, and thank you for your detailed response. But I already built the model, and it has been working perfectly. It finished the plugins I had been unable to finish for the past year (because the functionality I needed was undocumented), and it has since written two other highly complex plugins. You have to create a model (and I use the term loosely) for HISE, first; out of the box, the existing coding models are of little value for HISE development.
-
@dannytaurus I can say with a great deal of certainty that I already created it, and I haven't written a single line of code since. I don't mean to be snarky; it's just the certitude I'm responding to. I wish you the very best in your efforts, and am happy to help you in any way I can.
-
@clevername27
You created an AI model that generates the complete code for a complex plug-in, and since then you haven't had to write a single line yourself? Does that mean you didn't have to change anything in the code generated by the model? Am I understanding that correctly?
-
@clevername27 Just to be clear - I was talking about using AI to knock out bugs in the HISE codebase en masse. Building a plugin is an entirely different endeavour.
Glad to hear you've got such a great model working for that.
I agree about the undocumented stuff, by the way. Giving the AI the full codebase and then querying it on features and techniques can be extremely effective.
-
You have to create a model (and I use the term loosely) for HISE, first; out of the box, the existing coding models are of little value for HISE development.
Interesting, how did you do that? Did you use a pretrained model or just add some context data? I did some experiments where I fed it a JSON-formatted output with all the code examples from the doc codebase, but I used a very simple local LLM to check it, so the results were subpar.
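A rough sketch of what I mean by that kind of extraction (the paths, fence handling and field names below are placeholders for the example, not the actual doc repo layout):

```python
import json
import re
from pathlib import Path

DOC_ROOT = Path("hise_documentation")   # placeholder: a local checkout of the markdown docs
OUT_FILE = Path("hise_doc_examples.json")

# Matches fenced code blocks and captures the (optional) language tag and the body.
FENCE = re.compile(r"```(\w*)\n(.*?)```", re.DOTALL)

examples = []
for md_file in DOC_ROOT.rglob("*.md"):
    text = md_file.read_text(encoding="utf-8", errors="ignore")
    for language, snippet in FENCE.findall(text):
        examples.append({
            "source": str(md_file.relative_to(DOC_ROOT)),  # where the example came from
            "language": language or "unknown",
            "code": snippet.strip(),
        })

# One JSON file that can then be attached to an LLM session as context.
OUT_FILE.write_text(json.dumps(examples, indent=2), encoding="utf-8")
```

That produces one JSON file which can be attached to a session as plain context, which is a much lighter approach than any actual pretraining.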
-
@Oli-Ullmann said in Checking the HISE Source with AI?:
You created an AI model that generates the complete code for a complex plug-in, and since then you haven't had to write a single line yourself? Does that mean you didn't have to change anything in the code generated by the model? Am I understanding that correctly?
I'm using the word "model" in a very general sense. It's not an LLM model, per se. It's a set of data files that enables the AI to understand HISE, and contextualise it within the larger body of available knowledge of JUCE, audio development, etc.
I don't want to programme plugins. I want to develop them. The model helps me design them better, and it writes all the source code. It learned some of its patterns from @David-Healey, and produces beautiful, clear, and well-architected output.
But it's not magic, and many things in plugin development (as we all know) are only revealed through the process of development. You still need a fundamental knowledge of HISE, C++, audio development, and software engineering. Otherwise, you have nothing to talk about with it. For example, I asked it to refactor one of my plugins to use Broadcasters; it was helpful for me to tell it how, where, and why I wanted this done. At another point, it looked at David's use of namespaces and suggested I use that architecture.
Obviously, the model needs persistence between Agent sessions, and it needs to achieve this in a way that minimises token usage. Otherwise, by the time it loads everything up, you don't have enough tokens left to actually do your work. And even then, the model is far larger than will fit in the available token space (and runs into other limitations as well). So my model includes strategies that inform its decision-making: whether it can act on what it already knows, or whether it needs to go back to the model for additional information, and how to do that without forgetting what it needed to begin with.
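To give a flavour of that gating idea, here is a simplified sketch (the index layout, file names and token numbers are invented for the example; this is not my actual setup, just the shape of it):

```python
import json
from pathlib import Path

INDEX_FILE = Path("hise_context/index.json")  # invented path for the example
TOKEN_BUDGET = 40_000                         # illustrative, not a real limit

def plan_context(task_description: str) -> list[Path]:
    """Pick which reference files are worth loading for this task."""
    # The index maps each topic to its file, trigger keywords and a rough token cost, e.g.
    # {"broadcasters": {"file": "notes/broadcasters.md",
    #                   "keywords": ["broadcaster", "listener"], "tokens": 3000}}
    index = json.loads(INDEX_FILE.read_text(encoding="utf-8"))
    task = task_description.lower()
    chosen, used = [], 0
    for entry in index.values():
        if any(keyword in task for keyword in entry["keywords"]):
            if used + entry["tokens"] <= TOKEN_BUDGET:
                chosen.append(Path(entry["file"]))
                used += entry["tokens"]
    return chosen

# A task like "refactor the preset browser to use Broadcasters" would then
# pull in only the Broadcaster notes and leave the rest of the budget free.
```

The point is simply that the agent only pays the token cost for the topics the current task actually touches.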
Needless to say, you need to max out everything in Cursor that increases the token capacity. And you want to use ChatGPT, because it excels at this type of learning.
-
@Christoph-Hart It's too much to explain in a forum posting, but I'm happy to share all the information with you in some other format.
Yes, some of the files are JSON-formatted data. Others are MD-formatted. I like the MD stuff because it's human-readable. However, the JSON stuff is more efficient for the LLM, so it's a mix. The MD stuff is usually stuff I work together with the LLM on.
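As a rough illustration of how the two formats fit together (the folder layout here is invented for the example, not my actual structure), the hand-edited MD notes get compacted into JSON before a session, since the compact form costs fewer tokens:

```python
import json
from pathlib import Path

NOTES_DIR = Path("hise_context/notes")      # hand-edited .md files (invented path)
OUT_FILE = Path("hise_context/notes.json")  # compact form the LLM actually reads

entries = []
for md in sorted(NOTES_DIR.glob("*.md")):
    body = md.read_text(encoding="utf-8")
    # Collapse whitespace so the JSON version spends fewer tokens than the prose.
    entries.append({"topic": md.stem, "notes": " ".join(body.split())})

OUT_FILE.write_text(json.dumps(entries, separators=(",", ":")), encoding="utf-8")
```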
I really am astonished at how well it works.
-
@dannytaurus One thing I'd suggest is being clear with the AI that HiseScript is not JavaScript; for the LLM to be useful, it needs to understand every aspect of HISE. Otherwise, it's just vibe coding, which (personally) I think is useless. Either it works or it doesn't.