Sample.set weirdness
-
These all work as expected:
for (s in importedSample)
{
    s.set(2, rootNote); // ROOT
    s.set(3, highKey);  // HIGH
    s.set(5, lowVel);   // VLOW
    s.set(6, highVel);  // VHIGH
}
But to get the Low Note, I had to use:
for (x = 3; x < 5; x++)
{
    s.set(x, lowKey);
}
Because using
4
(and basically every other number under the sun) as the LowNote index didn't work for any sample.
Edit: Looking at the source, maybe something funky with the clipping?
void ModulatorSamplerSound::setSampleProperty(const Identifier& id, const var& newValue, bool useUndo)
{
    if (id == SampleIds::GainTable || id == SampleIds::PitchTable || id == SampleIds::LowPassTable)
    {
        data.setProperty(id, newValue.toString(), useUndo ? undoManager : nullptr);
        return;
    }

    clipRangeProperties(id, newValue, useUndo); // here?

    jassert(!newValue.isString());

    auto v = getPropertyRange(id).clipValue((int)newValue); // or here?

    data.setProperty(id, v, useUndo ? undoManager : nullptr);
}
-
@iamlamprey use the built-in constants - autocomplete is your friend
-
Looks like the range sanitizer is getting in your way. Make sure you get the order of setting the low and high key properties right: there can't be an empty range, so if you set the low key above the high key, or vice versa, it won't work.
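For illustration, a minimal sketch of that ordering using the numeric indices from the first post (3 = HiKey, 4 = LoKey, per the comments there; importedSample, lowKey and highKey are the variables from that post). It assumes the new low key sits below the sample's current high key - which boundary to write first depends on where the old range is:

for (s in importedSample)
{
    // Write the boundary that keeps the range non-empty first:
    s.set(4, lowKey);  // LoKey - still below the current HiKey, so the sanitizer accepts it
    s.set(3, highKey); // HiKey - the range lowKey..highKey is never empty along the way

    // Setting HiKey below the current LoKey first would get sanitized away
    // and look like the call "does nothing".
}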
-
@d-healey said in Sample.set weirdness:
@iamlamprey use the built-in constants - autocomplete is your friend
You mean something like
s.set(s.get("LoKey"), value);
referring to
namespace SampleIds
{
    DECLARE_ID(Unused);
    DECLARE_ID(ID);
    DECLARE_ID(FileName);
    DECLARE_ID(Root);
    DECLARE_ID(HiKey);
    DECLARE_ID(LoKey);
    // etc.
?
It wasn't working for me earlier but I probably did something wrong. I also find that
unused
token curious.
@Christoph-Hart said in Sample.set weirdness:
Looks like the range sanitizer is getting in your way. Make sure you get the order of setting the low and high key properties right: there can't be an empty range, so if you set the low key above the high key, or vice versa, it won't work.
Yeh I have the loop first, then set the Root & HiKey manually, and it seems to work fine now, just thought it was worth a mention. Pretty nifty to be able to resynthesize a full keyRange with Loris, then have that automatically imported back into the sampler as a sampleMap in a single function.
The backend evolves!
@Christoph-Hart btw before I dive any further, is there any particular reason why I would want to avoid using the Sampler as a surrogate Wavetable Synth? My particular case needs Round Robin which is much cleaner with the Sampler (plus it's generally more stable)
-
@iamlamprey said in Sample.set weirdness:
You mean something like
No that's gibberish :p
I mean the built-in constants:
-
for (s in importedSample)
{
    s.set(Sampler.LoKey, lowKey);
}
Does nothing for me, LoKey stays at C-2
Edit:
s.set(Sampler1.asSampler().LoKey, lowKey);
and
s.set(Sampler1.LoKey, lowKey);
and
Sampler1.asSampler().LoKey = lowKey;
also don't work
-
@iamlamprey I think Christoph's suggestion provides the solution. I was just trying to make your code more readable by removing the magic numbers.
@iamlamprey said in Sample.set weirdness:
Pretty nifty to be able to resynthesize a full keyRange
Are you doing this from a single sample? Does it sound realistic?
-
@iamlamprey said in Sample.set weirdness:
s.set(Sampler.LoKey, lowKey);
Console.print(Sampler.LoKey);
What do you see?
-
@iamlamprey So that part's working.
-
@d-healey yeh but even using the literal int 4 wasn't working, hence the need for the loop.
It's probably just my HISE installation or something; my hacky workaround gets the job done anyway.
-
@iamlamprey said in Sample.set weirdness:
It's probably just my HISE installation
Did you try what Christoph said and set the low note before you set the high note?
-
@d-healey yessir
@iamlamprey said in Sample.set weirdness:
Yeh I have the loop first, then set the Root & HiKey manually, and it seems to work fine now, just thought it was worth a mention.
-
@d-healey said in Sample.set weirdness:
Are you doing this from a single sample? Does it sound realistic?
Sorry, not sure how I missed this. The current pipeline is:
- Record 15 RR's of a guitar sustain note
- Extract Residue w/ Loris, repitch the 15 RR's across the keyspan (sounds fine so far in my testing)
- Trim Resynthesized Samples (only need a small section = way less RAM)
- Dynamically build sampleMaps, set loop points to be exactly 1 cycle, based on resynthesized fundamental w/ no Fade Time
- Now the Sampler works like a Wavetable Synth, minus the WT Position which is basically a lowpass filter for guitars (ie Karplus-Strong method). The added benefit of the Sampler is that we get Round Robin
- Play the Residue (pick attack) sample back alongside the Tonal "Wavetable" & process further with things like the Scriptnode Palm Mute setup I posted a while back
It sounds pretty good so far, I'm confident I can get it sounding identical to the original Sample Library of the same guitar, but with 1/100th the file size, more parameters exposed to the end user to play with, and more realism (since I'm now able to 3x or more the RR's because the footprint is so tiny)
The key piece of the puzzle is the Loris library, since I get to be as lazy as I want and only record a single note and just have Loris handle the rest (although there are some tonal differences between strings)
I've also been using one of the Melda EQs to "model" the middle & neck pickups and applying that as a function, this can also be extended to capture other guitar bodies without needing to re-record any samples (you can also do it with AI but let's not unless we have to lol)
I'll do a full-on writeup when I'm happy with the sound and pipeline
-
@iamlamprey said in Sample.set weirdness:
It sounds pretty good so far, I'm confident I can get it sounding identical to the original Sample Library of the same guitar, but with 1/100th the file size, more parameters exposed to the end user to play with, and more realism (since I'm now able to 3x or more the RR's because the footprint is so tiny)
So the approach we use if we want to reduce the memory/disk footprint and to have RR-like response is:
Think about RRs: what are they actually doing - what is the difference between them? The answer here is:
- Tuning - there are micro differences in tuning
- EQ - there are micro differences in eq too
- Timing - there are micro differences here as well.
Can you think of anything else that would be different if we compared RR recordings of the same note at the same velocity on the same instrument (be it drum, guitar, piano, whatever)?
I can't think of anything but the very vague "Timbre" - and this essentially breaks down into "EQ over time" - but noticeable timbre changes are (generally speaking) not something you would hear between two given RRs...
So.... I built an "RR Engine" that varies 1 thru 3 above, giving me infinite RRs, and end user control over them. I'm sure you can work out how to build your own RR Engine from this.
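Not the actual engine of course, but a minimal HiseScript sketch of the idea - the ranges and the exact Message/Engine calls used here are my own assumptions:

// onNoteOn callback of a MIDI script processor in front of the sampler:
// vary tuning, level and timing slightly per hit to fake round robins.
function onNoteOn()
{
	// 1. Tuning: up to +/-10 cents of random detune
	Message.setFineDetune(Math.round(Math.random() * 20 - 10));

	// 2. "EQ"/level: roughly -1, 0 or +1 dB per hit (coarse, but shows the idea;
	//    a fuller version would also nudge a filter or EQ for the timbre part)
	Message.setGain(Math.round(Math.random() * 2 - 1));

	// 3. Timing: delay the note by up to ~5 ms, given in samples
	Message.delayEvent(Math.round(Math.random() * 0.005 * Engine.getSampleRate()));
}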
-
@Lindon said in Sample.set weirdness:
Can you think of anything else that would be different if we compared RR recordings of the same note at the same velocity on the same instrument
The attack. I haven't been able to find a convincing way to generate fake RRs that have a unique attack.
We can shift the start position around and nudge the attack time but it's not convincing to me, it still sounds like the original sample.
I also use an infinite variation system similar to yours, but I can't get past the attack sounding off, so I still record a bunch of RRs too.
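For reference, the start-position shuffling could be sketched like this (assuming the sampler has a sample-start modulation range set up and that Message.setStartOffset behaves as I remember - both assumptions, and as said above it still won't give you a truly unique attack):

// onNoteOn: push the playback start a random amount into the sample
// so each hit begins at a slightly different point in the transient.
function onNoteOn()
{
	// Up to ~20 ms worth of samples; nudging the envelope attack per note
	// would be the other half of the trick.
	Message.setStartOffset(Math.round(Math.random() * 0.02 * Engine.getSampleRate()));
}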
-
@d-healey said in Sample.set weirdness:
@Lindon said in Sample.set weirdness:
Can you think of anything else that would be different if we compared RR recordings of the same note at the same velocity on the same instrument
The attack. I haven't been able to find a convincing way to generate fake RRs that have a unique attack.
We can shift the start position around and nudge the attack time but it's not convincing to me, it still sounds like the original sample.
I also use an infinite variation system similar to yours, but I can't get past the attack sounding off, so I still record a bunch of RRs too.
Yes - sorry, forgot that - we vary the attack of the note too in the RR engine. Works OK here, but we are not usually doing "orchestral" instruments like you... so you may have more comprehensive needs.
-
I've implemented "fake RR" already which includes pitch & amplitude randomization per voice (very easy to do with Modulators), but the most realistic approach is still a form of sampling, let's look at a waveform:
Where Blue is the transient, green is the body and orange is whatever release sounds the particular instrument might have
As David said, the attack/transient sample is difficult to fake, and so is the release sample; I just record these separately (pick attack and release noises for guitar).
The body, however, is basically a repeating cycle that decays over time (either in gain or high end, usually both).
If we can grab a single cycle and loop it perfectly, the decay can be handled by HISE Modulators as well (Filter + AHDSR).
This is where Loris comes in: my backend lets me record a single pitch (15 RRs in this case) and resynthesize it across the keyrange. I can then truncate the samples, since we don't need the full length; it looks something like this:
Loris can isolate the partials (which means we remove the transient, which is why I record them separately; or you can phase-invert the partials to get the Residue), and because I resynthesized it with Loris, it SHOULD now have a predictable cycle length:
f0 = 440        // A above middle C
sr = 44100
cycle = sr / f0 // repeat for each pitch
We can then set an arbitrary loopStart point (anywhere past the initial transient) and use our cycle length to get the loopEnd. As long as we have no fadeTime, we SHOULD have a perfectly looping sample (some of mine are buzzing, so there's probably something more I need to do, but it's not a huge deal). This can also be done programmatically to automatically create the sampleMaps for you.
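Putting that together with the sample properties from earlier in the thread, a rough sketch of the programmatic loop setup might look like this - the Sampler.Loop* constants, the Engine frequency call and the 100 ms start offset are all my assumptions:

const var sr = Engine.getSampleRate();
const var loopStart = Math.round(0.1 * sr); // anywhere past the transient - 100 ms is a guess

reg f0;
reg cycle;

for (s in importedSample)                   // the sound array from the earlier posts
{
    // Cycle length from this sample's fundamental, taken from its root note
    // (assuming this Engine call exists - otherwise store f0 per sample yourself).
    f0 = Engine.getFrequencyForMidiNoteNumber(s.get(Sampler.Root));
    cycle = Math.round(sr / f0);

    s.set(Sampler.LoopEnabled, 1);             // assuming these loop property
    s.set(Sampler.LoopStart, loopStart);       // constants exist alongside
    s.set(Sampler.LoopEnd, loopStart + cycle); // Sampler.LoKey etc.
    s.set(Sampler.LoopXFade, 0);               // no fade time, as described above
}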
combining these "wavetables" alongside the recorded Transients results in a very convincing waveguide model, now I just setup the Modulators to filter/attenuate the loop so it decays naturally, this involves measuring the decay times of the real samples and such
because we've truncated the samples, we're using significantly less RAM AND we get all of the usual sampler benefits like Round-Robin and lower CPU usage than the Wavetable Synth
another added benefit is I can process the transients and body separately, so if there's some filtering needed I don't need to ruin the transients
tldr: use truncated, resynthesized versions of the original round-robins and loop them with a single cycle to emulate waveguide synthesis
-
@iamlamprey yes very impressive.
-
@iamlamprey so in theory you should be able to separate out the attack transient into its own sampler, and leave the sustain and release in their own - and thus do that waveguide thing of combining different voice attacks with alternate sustain/release components, e.g. Trumpet attack with piano sustain and release... and even reasonably cross-fade the sustain parts between different sounds, e.g. Trumpet attack with piano sustain going into violin sustain.