for the past month or so I've been hitting a brick wall trying to get the whole "additive synth + noise" approach working to recreate real instruments at 1/100th the file size (plus more round robins, user customizability, etc.)
I tried Synth Groups, SNEX, insanely large scriptnode networks (including a Python script that generates the XML for them), and the shiny new neural node, all with limited success
it didn't occur to me until today to try the Wavetable Synth and just use the (already extracted) Loris partials to reconstruct the "tonal" part of the sound. I've set up a basic project with this (now very obvious) method and it sounds pretty close to the real thing. Once I've done some tweaking I'll post examples as well as a more in-depth writeup, but basically:
- tiny file sizes (no more 50gb libraries)
- MORE round robins (despite only being a few dozen megabytes)
- preset "tones" or variations of the instrument (using different wavetables of the same recording)
- exposed params that the end user can tweak / play with all they want
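the core trick is just additive resynthesis frozen into single-cycle frames: sample each Loris partial's amplitude envelope at some instant, then sum sinusoids at those harmonic amplitudes into one wavetable cycle. here's a rough Python sketch of that one step — the harmonic amplitudes and table size below are made-up placeholders for illustration, not values from my actual project:

```python
# Sketch: build one single-cycle wavetable frame from a set of
# harmonic amplitudes (the kind of data you'd get by sampling
# Loris partial envelopes at a single point in time).
import math

TABLE_SIZE = 2048  # hypothetical wavetable resolution


def frame_from_harmonics(amps, phases=None):
    """Sum sinusoids at integer harmonics into one wavetable cycle."""
    phases = phases or [0.0] * len(amps)
    table = []
    for n in range(TABLE_SIZE):
        t = n / TABLE_SIZE  # position within one cycle, 0..1
        s = sum(a * math.sin(2 * math.pi * (h + 1) * t + p)
                for h, (a, p) in enumerate(zip(amps, phases)))
        table.append(s)
    # normalize to +/-1 so every frame lands at a consistent level
    peak = max(abs(v) for v in table) or 1.0
    return [v / peak for v in table]


# made-up decaying harmonic series (roughly sawtooth-shaped)
frame = frame_from_harmonics([1.0 / (h + 1) for h in range(16)])
print(len(frame))
```

do this at several points along the note (attack, sustain, release) and you get the frames to morph between in the Wavetable Synth, with the residual noise layered on top separately.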
audio demos will go here once I figure out how to embed the wavetables in the exported Rhapsody instrument