@orange we could also just make our own with the neural node
Posts made by iamlamprey
-
RE: NEATBrain Writeup
-
RE: NEATBrain Writeup
ElevenLabs; the first month is $1, and the voice I used was (I think) "Lily"
it took a few generations until I was happy with the cadence
-
RE: Rhapsody crash when parametric eq added
Okay following up here:
- I was able to recreate the crash by unloading while the floatingTile was visible
- Disabling `Engine.addModuleStateToUserPreset` still crashes
- Adding `showControl(false)` to the boilerplate does seem to at least reduce the number of crashes (I still had 1 or 2)

In my case, neither floatingTile had the Spectrum Analyzer active, and the number of active EQ bands didn't seem to change anything (including 0)
-
RE: NEATBrain Writeup
thanks for the kind words everyone!
@oskarsh said:
Do you have a sound demo of this?
There's a couple already on the website and I plan to add some more:
-
RE: Rhapsody crash when parametric eq added
@d-healey said:
Does it crash when not using this?
was that to me or ospfeigrp? I haven't had any crashes yet, just audio dropouts
@ospfeigrp said:
try this snip and export it:
I'll give it a try now, and also see if I can recreate your crash issue with the floating tile open
-
NEATBrain Writeup
Hello!
I promised a full writeup on the new type of VI I'm working on, designed from the ground up to solve the following problems I personally have with Sample Libraries:
- Large File Sizes
- Limited Round Robins (Restricted further by large file sizes)
- Limited tone-sculpting & End-User Parameters
- Limited Keyrange
- They're annoying to make
After probably about a year and a half of noodling and trying pretty much every approach (Modal Synthesis, Neural Synthesis, Neural Style Transfer, Karplus-Strong, Wavetable Synthesis etc), I'm finally happy to say I've found a method that solves all of these problems and more.
I'm calling it NEATBrain, and it's the successor to my previous, now-deprecated NEAT Player.
NEATBrain is a tonally consistent hybrid synthesizer for one-shot style instruments such as Guitars, Bass, Tuned and Untuned Percussion (including things like Glocks and Marimbas), Pianos and more. It does this by combining Digital Waveguides & Residue Sampling, but the real magic is the backend I spent several months developing; I'll talk about that a bit later in this writeup.
Digital Waveguide Synthesis
Basically this is what the Wavetable Synth is doing: looping a single cycle of the original waveform at its fundamental frequency. The equation is very simple:
```javascript
const sampleRate = 44100.0;
const root = 440.0;
var cycleLength = sampleRate / root;
```
Assuming our sample is tuned to exactly 440.0Hz, we can set the `loopStart`, `loopEnd` and `fadeTime` parameters of our Sampler to turn it into a Pseudo Wavetable Synthesizer. However, because we're using a Sampler, we avoid the instability of the Wavetable Module AND we can easily make use of Round Robins (currently I'm using 15!). We can translate this single-cycle algorithm to every note in the playable range, as long as the samples are tuned perfectly.

How do we tune them perfectly? Using Loris, of course! The backend I mentioned earlier takes a group of Samples, usually about 3 notes, and resynthesizes them across the keyRange without any Pitch-Shifting artifacts. And since we only need the start of the sample (we're looping the waveform indefinitely), we can Truncate (shorten) the audio files to massively save on Memory Usage. I'm using about 20% of the sample length, so I get to keep the "Attack" as well.

So now we have a full keyrange of Waveguides derived from a very small group of samples. This makes the development process much more enjoyable for me, since I don't have to throw my back out recording a guitar for 2 hours. But so far we just have sine waves; what about the noisy part of the signal?
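As a rough illustration of the loop-point maths (a sketch in plain JavaScript, not the HISE API; the function and parameter names are hypothetical), rounding the loop to a whole number of cycles keeps the seam phase-aligned:

```javascript
// Sketch: derive pseudo-wavetable loop points from a sample's root pitch.
// Assumes the sample is tuned exactly to `root` Hz (e.g. via Loris).
function getLoopPoints(sampleRate, root, loopStart, numCycles)
{
    // length of one cycle of the waveform, in samples
    const cycleLength = sampleRate / root;

    // loop a whole number of cycles so the loop seam stays phase-aligned
    const loopEnd = Math.round(loopStart + numCycles * cycleLength);

    return { start: loopStart, end: loopEnd };
}

// A 440 Hz sample at 44.1 kHz has a cycle of ~100.23 samples
const points = getLoopPoints(44100.0, 440.0, 1000, 8);
```

Because the cycle length is usually fractional, spanning several cycles before rounding keeps the per-loop seam error small, which is another reason the perfect tuning step matters.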
Residue Sampling
The Residue is the noisy part of the signal - it's whatever is left behind when Loris resynthesizes a Sample. We can easily grab this as a buffer with the following code:
```javascript
var file = FileSystem.getFolder(FileSystem.AudioFiles).getChildFile("myCoolWav.wav");
var wt;
var rs;
var sr = Engine.getSampleRate(); // sample rate of the audio engine
var buffer = file.loadAsAudioFile();

// I ended up manually typing pitch values for consistency's sake
var f0 = buffer.detectPitch(sr, buffer.length * 0.2, buffer.length * 0.4);

lorisManager.analyze(file, f0);
wt = lorisManager.synthesise(file); // here's our waveguide
rs = buffer - wt;                   // and here's our noisy signal, which we can export to .wav
```
Now, when we play the Residue alongside the resynthesized sample, we get the original sound (well, 99% of the time lol). So now I have two samplers: one playing the Waveguide, the other playing the Residue, which we just stretch across the keyRange and turn Pitch Tracking OFF.
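Conceptually the residue extraction is just a per-sample subtraction; here's a minimal sketch in plain JavaScript (on arrays rather than HISE Buffers):

```javascript
// Sketch: residue = original audio minus the Loris resynthesis,
// computed sample-by-sample.
function getResidue(original, resynth)
{
    const residue = [];
    for (let i = 0; i < original.length; i++)
        residue.push(original[i] - resynth[i]);
    return residue;
}
```

Playing the residue and the resynthesis together sums back to (almost) the original signal, which is why the two-sampler setup sounds so close to the source recording.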
Organic Waveguide Decay & Idiosyncrasies
At this point, our looping Waveguide will play forever and sound quite synth-ey. Luckily a lot of one-shot sounds simply decay in amplitude and high-end frequency content over time (think Karplus-Strong). We can recreate this digitally using a good old AHDSR Module. We can expose the AHDSR controls to the end user, letting them tweak the sound until they're happy (there's some interesting sound-design opportunities here too).
Because we've separated the Waveguide and Residue, we can process them individually. If we just lowpassed a regular sample, the noisy part would get filtered as well, which doesn't sound great. We can also add all kinds of fun things to the Waveguide now, like Modulation.
Real instruments pitch upwards when hit with a high Velocity, and decay back to their original pitch over time. Test it yourself: grab a guitar (or whatever) and pluck the string really hard - you'll see the pitch flex upwards. This can be recreated with yet another AHDSR, this time as a Pitch Modulator. This is where our Synthesizer starts moving closer to the Greybox Physical Modelling realm. We can further extend this with a Pitch Random modulator. In fact, NEATBrain is using an AHDSR, LFO and a Random Modulator for both the Gain and the Pitch.
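A minimal sketch of the idea (plain JavaScript; `depthSemitones` and `decaySeconds` are made-up parameter names, not HISE modulator properties): harder hits start sharper, and the offset decays exponentially back to zero:

```javascript
// Sketch: velocity-scaled pitch flex that decays back to the root pitch.
function pitchOffset(velocity, timeSeconds, depthSemitones, decaySeconds)
{
    // harder hits start sharper...
    const startOffset = (velocity / 127) * depthSemitones;

    // ...and the offset decays exponentially toward 0
    return startOffset * Math.exp(-timeSeconds / decaySeconds);
}
```

In the instrument itself this role is played by the AHDSR, LFO and Random modulators described above.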
All of these controls are then exposed to the end user to play with.
Upper Register Frequency Dampening
Recording a low guitar string and pitching it upwards with Loris can make it quite harsh, because Loris retains all of the harmonic information. Initially I tried using a Dynamic EQ in Scriptnode to remedy this, but I ended up just sampling some of the higher notes. Because we're just using Samplers, we don't need to think about Wavetable Index positions or anything like that, just map the higher samples normally and voila - no more harshness.
Palm Mutes
Since the first instruments are a guitar and bass, I simply created additional Waveguides from the Palm Muted samples. I tried doing this with a Scriptnode Network and some Filters, but nothing beats the real thing. Because our Residue is an atonal pick attack, we can re-use the same Residues from the sustained notes, further saving on Memory Usage. The Palm Mutes are triggered by low-velocity keypresses, making the instrument simple and intuitive.
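The velocity switch itself is tiny; a sketch in plain JavaScript (the threshold of 64 and the layer names are my hypothetical choices, not values from the instrument):

```javascript
// Sketch: route low velocities to the palm-mute layer,
// everything else to the sustain layer.
function chooseArticulation(velocity)
{
    const PALM_MUTE_THRESHOLD = 64;
    return velocity < PALM_MUTE_THRESHOLD ? "palmMute" : "sustain";
}
```

The same split could also be expressed with velocity ranges on the sampleMap entries rather than script logic.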
Tone Profiles
Once again, because we've separated the Waveguide and Residue, we can completely change the tone of the instrument without causing weird artifacts. For example, I've set up different body and pickup profiles by using Melda's Freeform Equalizer to tone-match two signals, then recreating the response curve with a regular Parametric EQ. Now the end user can freely change the sound of the instrument to better fit a mix.
FX Keys
Some things just can't be synthesized in an efficient way. Because our instrument uses so little memory, we can get away with regular sampling for things like Pick Scrapes, Slides, Release Noises and... well, I call them "Skiddley-doos" but I think they're referred to as Gojira Scrapes?
100 Articulations? Rhythmic Loops? Sustained Instruments? A Built-in AI-Powered Riff Generator?
Nope. While some of these things are certainly possible, the goal of NEATBrain from the start was to create a simple, lightweight, great-sounding instrument that covers 80-90% of use-cases. From my own experience with NEAT Player, a lot of those extra features, while certainly cool and sometimes useful, are overlooked by most users.
Take built-in FX, for example. It's a lot of extra work to implement an FX Suite in your instruments: you need to add all of the modules, design the GUI for them AND make sure they sound great no matter what settings the end user chooses. All of this adds CPU overhead, makes debugging more annoying (GUI bugs are among the most common), AND only about 5% of your end users will actually use them instead of the plugins they already have.
It's just not worth it (in my opinion).
Articulations and looped phrases are another big one. There's already plenty of Libraries that have all kinds of pre-recorded phrases in 7/8 time-signatures that need to be distributed on Hard Disks because they're over 100GB. If that's what you need for your specific use-case then great, but NEATBrain is not, and never will be, designed to fill that niche.
The Glorious Backend
Okay, let's talk a little bit about the pipeline. I won't be sharing the entire codebase for obvious reasons, but I can mention some things:
First: Loris is INSANE. I can record 15 Round-Robin samples of a single note and repitch them without artifacts. The code looks like this:
```javascript
var targetPitch = 440.0;

// This function gets passed to the LorisManager using processCustom()
inline function repitch(obj)
{
    local ratio = targetPitch / obj.rootFrequency;
    obj.frequency *= ratio;
}

lorisManager.processCustom(file, repitch);
```
Automatic SampleMap Generation
Did you know that sampleMaps can be created / edited programmatically? Using something like this, you can set up sampleMaps instantly after resynthesizing with Loris (so even less work!).
```javascript
// [path] is the folder where you've stored your "samples"
var importedSample = Sampler1.asSampler().importSamples([path], true);

for (s in importedSample)
{
    // I had to use these integers manually, YMMV
    s.set(2, rootNote); // ROOT
    s.set(3, highKey);  // HIGH
    s.set(5, lowVel);   // VLOW
    s.set(6, highVel);  // VHIGH

    // There's options for round-robin groups, mic groups etc
}

// Make sure you save the sampleMap! (there's probably a way to do it with code)
```
Summary
So there we go: we take a small selection of recorded samples, repitch them and fill any gaps with Loris, truncate (shorten) the samples so they take up way less space, automatically create a sampleMap, customize the tone with EQ Matching, and finally play everything back alongside the Residue for a pretty faithful reconstruction with the following benefits:
- Tiny Filesize / Memory Usage
- More Round Robins
- More end-user control
- Less work (well, less work sampling stuff)
Worth it imo :)
Built for Rhapsody
Huge thanks to Christoph and David for the no-export solution, Rhapsody. These instruments are super easy to deploy as VST & AU on Windows, macOS and Linux (and soon AAX). No compiling, no codesigning nonsense; it's all handled already.
If you got this far, thanks for reading. I hope this writeup was helpful to those of you that have the same pain-points as me regarding traditional Sample Libraries. There's definitely a time and place for them, but they can also be quite burdensome and restrictive.
-
RE: Rhapsody crash when parametric eq added
@ospfeigrp yeah there's definitely something funky going on with fullExpansions and the PEQ I think, possibly due to `addModuleStateToUserPreset` or the floatingTile, but I'm just speculating
-
RE: Rhapsody crash when parametric eq added
I'm getting a sort of similar issue: no crashing, but audio is cutting out / freezing when I start messing with the floating tile / PEQ
-
RE: feedback needed :)
@d-healey said in feedback needed :):
As you get higher the notes (even though they're the same) sound brighter.
Yeah, this is something I wrestled with for a while. I tried using a single note, spanning it with Loris and catching the harsh frequencies with a dynamic EQ, but it just didn't sound the same as sampling the higher string/fret.
And obviously there is a huge difference between playing the open strings and playing the same note fretted.
A caveat here is that (at least in my experience) bright strings are sought after in aggressive music because they help cut through dense mixes. Open strings are a lot brighter, so I've intentionally used the lower open samples as the "main samples" that take up most of the keyrange.
Finding a balance between brightness and harshness on the higher notes was key.
@orange said
You can use more user friendly general terms rather than using Waveguides and Residue.
Yeah, this is definitely something I'm still battling with. I heard Alex Hormozi mention that the average American reads at a 3rd-grade level, so the copy should be run through a readability test. That being said, I think the average VI enthusiast isn't your typical customer; they're generally more curious about the "story" of an instrument or piece of software, so I'm trying to find copy that's easy to read while still keeping some of the technical aspect.
Plus "digital waveguide" sounds way cooler than "wavetable synth"
They can download it after Newsletter Subscription.
There's a free tier for the Patreon which can essentially serve as a mailing list. The website needs to be static to work with GitHub Pages, and there's a button inside the plugin to open the store page instantly, so I'm trying to avoid "gatekeeping" the download behind a signup. I personally hate the friction, and I want as few steps as possible between discovering the instrument and having it on the user's HDD.
I really appreciate the feedback guys! It's all going on the blackboard (btw I bought sticky blackboard paper for like $4 and it's the best investment I've ever made)
-
RE: feedback needed :)
@d-healey said
which will be in the Rhapsody boilerplate soon if you need it
I'll definitely check it out when it's ready and gauge how implementable it is
a good visual way for the user to be able to choose which version of a note is played
I think this is where I probably diverge a bit: even though I come from a guitarist background, a riff or chord was always a group of pitches to me, and it didn't matter which strings were involved as long as all of the notes were correct.
Imo the end listener isn't going to care whether the played chord is physically possible or not, and with Loris basically giving me one giant guitar string, the tonal differences between strings can be mitigated.
obviously it's still a cool feature and a lot of producers will probably gravitate toward the familiarity of it
I guess my keyboard curve is setup differently to yours
my MIDI keyboard broke so I haven't even tested one with NB yet
@ospfeigrp said
https://pixeldrain.com/ is a good 3rd party way of sharing files or adding downloads without using the website hosting bandwidth.
Actually, in this thread Aaron mentioned that because the files are so small, I can just use GitHub to host them for free. If thousands of file calls get me noticed and suspended or anything, I'll definitely look into alternatives.
There's also Dropbox's direct download option, but I think it's capped at a few thousand downloads a month, which might not work for free instruments since they do tend to go viral.
-
RE: feedback needed :)
@DabDab said in feedback needed :):
More features are needed like Strumming specially for Guitar. And Guitar fretboard visualization is required. What chord and key is being pressed etc. Without that a guitar instrument is incomplete.
A note-delay based strumming feature probably isn't too difficult to implement, so that's a strong "maybe".
I've never liked the fretboard interface (or string selection / tuning stuff either); in my experience, the people who most often use those features (guitarists) would rather just record guitars, and most non-guitarists don't care.
These instruments are really designed for a "draw in MIDI and hear a decent guitar on playback" type of thing, and the backend needs to stay as robust as possible to cater to percussion, piano, glocks etc so I'm not sure if I'll bother with the fretboard stuff
@d-healey said:
Nice. I'd like a velocity curve editor but that's just me.
Palm mutes use Low Velocities so it would probably need to be a stepped curve or something, but it's possible
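To illustrate what a stepped curve could look like (a hypothetical sketch in plain JavaScript; the split point and the gamma-style curve are my assumptions, not anything from Rhapsody): each articulation zone is curved independently, so a remapped velocity can never cross the palm-mute boundary:

```javascript
// Sketch: a stepped velocity curve. Each zone is curved on its own,
// so notes can't be pushed across the articulation split.
function steppedVelocityCurve(velocity, split, gamma)
{
    if (velocity < split)
    {
        // curve the palm-mute zone within [1, split - 1]
        const norm = velocity / (split - 1);
        return Math.max(1, Math.round(Math.pow(norm, gamma) * (split - 1)));
    }

    // curve the sustain zone within [split, 127]
    const norm = (velocity - split) / (127 - split);
    return split + Math.round(Math.pow(norm, gamma) * (127 - split));
}
```

With `gamma` of 1.0 the curve is a pass-through; values above 1.0 soften each zone while the split stays fixed.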
How come your guitar goes down to C0?
That's Loris! There are only 3 strings in the recorded samples, and Loris fills in the gaps across the entire keyrange. I'm leaving it at C0 for anyone who wants to go crazy with low tunings, or do some wacky sound-design stuff with the ADSR.
I have most of the writeup done regarding the technical side of things so that'll be up soon :)
Edit: thanks again both of you for the feedback and suggestions
-
feedback needed :)
Hey everyone, I'm finally at the point where I'm ready to send these new VIs off for feedback. They're built for Rhapsody, so you'll need that first if you don't have it:
https://librewave.com/rhapsody/
Specifically, I need to hear about:
- download / installation / loading issues
- performance (CPU & Mem) ideally in a full song context within a DAW
- overall tone / sound quality (clicking / popping, different sounds with different SR's etc)
- any dream features that wouldn't overly convolute the "vision" of a simple, lightweight, great sounding VI
The first two instruments are stereo aggressive DI guitars & aggressive picked DI bass, obviously intended for the angrier side of the music wheel :)
Sidenote: this is the first place I'm asking for feedback, so if the website has issues or doesn't work properly please let me know.
You can download them & listen to audio demos etc here:
https://iamlamprey.com/software/
Thanks in advance!
-
RE: Error loading ScriptNode Voice Killer
Okay following up here:
If I manually create a .js file that matches the name of the Voice Killer, I don't get the error.
BUT when I close HISE, it automatically deletes that file from the ScriptProcessors folder.
@Christoph-Hart I think something's broken with `XmlBackupFunctions::restoreAllScripts` regarding Scriptnode Voice Killer envelopes
-
Error loading ScriptNode Voice Killer
When re-opening a saved project with a SN Voice Killer Envelope, i get this error (and the envelope stops working):
HiseSnippet 803.3oc2VssaSCCF1dsdh1AHlDBj3pb4lzDpAFCj3hcnGlpFcqZoavcSdNtsVMwN33TnBw6FOB7nr2.v1oaIg0MlJLDPtnp+mr+7+gO6tRAgFGKj.XkdShn.3cQdS3pg0GhYbP6F.3xnwBFgNhEDPkJZrBryjHbbL0G.gk103FrRYf86rM2AGf4DZlJ.3XS3ugExTYZ6t0d50qE1m1iEly602pMQvqKBDIZHUBUCDgIivCn6iMts.B.WroOSIjdJrFL.X4cD9S7FJ9.O0+iYwrSCnFAWfmdgRU2RD3aPrQKn9PVfe2yO5w..D0MKQTJMQ7PTGlO6B8YIjGXM3jEQ97AbghvqTA34lGd0xAuY.IXNHUNERKi7HRVjJyhAOKgZyUTYerNsmGJo9BfeAUWncfqdZHdDskTKbQ.qrQsZq4n+Y0W2OgSTLA2Qv2WnnGvWY0pepZkpetpyOZpe+YZyrMRgoMYllMUZ40E3J7jvSox0bFiCRnW3n93WLmht5bZ9RNI8TmyQAuMmoNHhxupFAvzTk9eG0tAVgMEho5z9Eom.XFH.aPGq6pSKKUPMnwiThHce8kpY5tEgeR.VUrExL2L0fNGTntYJN7XlZR94pea8U2THtLpKSQFNaLtvLvnNScafwoSi2C0reeJQkAvxnVu61YzK+1e+zseIjGiSsLi1M+QVYm2hGSc1kxoRSxy8ZXF+5MkYL5FyLd.Qo29dRLONRDWXg8ngrdBtIKjo7nX8vO88GZpl40WWfkyzzYa1ReHmYLdXUhz1VrcnHgqJzFT55XncmSF5x+0wP+OvvdgKQVLEiOd5kHbgO01.tm8lcKTeRNaNViNoVK1Z2QvEQCEbFIec+PpRxFLfJyq7W874d6SlU5+axrCEIJFePGrt57Q8km6mD5o4PHT8ty4z.yEpvEL7.ox0LxojHbeqv2zeSM5ZjgSM5dtw+H6QHlHEmPRepfo29NVM5yM291vJnNFYGWf84C4qkgZ1fSHjhK0kB7YyafOedCb84MvWLuAtw7F3Km2.e0OOPCY81IJQX5nI.zoaS6aufvlbrtK2NQ.9NzKgtoF
-
RE: Broadcaster & CurveEQ Dropdown
@iamlamprey Okay there's already a method for it:
```javascript
const bc = Engine.createBroadcaster({"component": "myCoolFloatingTile", "event": "All Callbacks"});

bc.attachToEqEvents("bc", ["BandAdded", "BandRemoved", "BandSelected"], {"id": bc});

bc.addListener("", "update something", function(component, value)
{
    Console.print("you just changed an EQ property!");
});
```
-
RE: Broadcaster & CurveEQ Dropdown
@Christoph-Hart what about things like `Delete Band` or `Delete All Bands`?
-
RE: Broadcaster & CurveEQ Dropdown
I have 2 Samplers that I want to share the same EQ Curve without using a SynthGroup / Routing Matrix
Broadcaster works fine for adding/moving nodes, but changing the type or deleting it doesn't update the other Sampler
-
Broadcaster & CurveEQ Dropdown
I'm super new to using Broadcasters. Is there a way to grab the `event` for a CurveEQ FloatingTile from this menu?: