Checking the HISE Source with AI?
-
I haven't been around for a year, so I don't know where this discussion stands.
Am I correct to assume:
-
Someone has gone through the HISE source code with ChatGPT's highest-tier 5.2 model, and likewise with Claude? That would fix most bugs in a matter of days.
-
Someone has built a HISE-specific model for the AI coding assistants, one that understands the HISE language with all its quirks and edge cases? So there's no longer any need to wonder how to do something, or why something isn't working.
Yes?
Hope everyone has been well.
-
-
-
Nope
-
Nope
-
-
@David-Healey Do you know why not? [Edited for emotion]
Oh, and Hi Dave, nice to see you.
-
@clevername27 I couldn't tell you
-
@David-Healey I would do the first one myself, but @Christoph-Hart really needs to do it; if I upload a million patches, he will go nuts. It would only take him a week at most, so I'm puzzled as to why he hasn't done it.
It would solve the other problems so quickly. Those models could also finish everything that is incomplete in HISE, and they could generate complete documentation and keep it up to date automatically. That way, we'd finally have a fully functioning, documented HISE at all times.
-
@clevername27 I think you have more faith in AI than I do.
-
HISE is a big codebase with a variety of dependencies; I think the risk of hallucination and context rot is a lot higher than with your average JUCE plugin.
There are also a lot of templated functions, plus the specific versions of things like JUCE and the C++ standard HISE builds against, which LLMs usually don't "keep in mind" (see the sketch below).
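To illustrate the point about language versions, here is a minimal, purely hypothetical sketch (not taken from the HISE codebase): an assistant that "modernises" a templated helper with a C++17 feature such as if constexpr produces code that won't compile at all if the project is still built against an older standard.

```cpp
// Hypothetical sketch, not from the HISE repository.
// A suggested rewrite of a templated helper using if constexpr is
// only valid from C++17 onwards; built against an older standard,
// it fails to compile even though the logic itself is sound.
#include <type_traits>

template <typename T>
T clampToUnit (T value)
{
    if constexpr (std::is_floating_point<T>::value)
        return value < T (0) ? T (0) : (value > T (1) ? T (1) : value);
    else
        return value != 0 ? T (1) : T (0); // treat integral types as on/off values
}
```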
Also, GLM4.7 has noticeably outperformed Claude and GPT 5.2, at least in my own tests.

I'm sure Christoph uses some of these tools occasionally, but I think that if he started depending on them for the short-term benefit, we'd probably end up with a lot more bugs in the long term.
-
@clevername27 I've been fixing a few isolated bugs lately with the help of Claude in Cursor.
From doing that work, I can say with a degree of certainty that it's not a good idea to let any AI go wild fixing bugs across the whole codebase right now.