@d-healey this is the doc…
https://docs.hise.audio/scripting/scripting-api/midilist/index.html#setvalue
But in reality, I think it can store a 32bit integer
@d-healey I think I must have misunderstood the online docs (or they are out of date), as they appear to state that the values must be between -128 and 128. But in testing, it seems I can store nine-figure integers. Perhaps it means the indexes are between -128 and 128?
Perhaps missing something obvious, but how can I use MidiList to store times, when it only seems to be able to store integer values between -128 and 128?
@d-healey great! Just wanted to make sure we were talking about the same Timer.
Wonderful. That’s been super helpful, thanks! I’ll see what I can come up with using this method.
@d-healey ah, great. So is this something different from…
https://docs.hise.dev/hise-modules/midi-processors/list/scriptprocessor.html#the-ontimer-callback
Which says it is limited to 40ms…?
@d-healey okay, this makes sense. I thought I read something about the timer being limited to 40ms intervals, though? Not sure if this would have the resolution I need.
That aside, I have a follow up question regarding MidiList…
I’m logging quite a bit of information per note-on event, and currently must admit to keeping it in an array, where each array index holds an object. So I might set and get Middle C metadata at noteLog[60]["noteLength"] or noteLog[60]["noteOffTime"] etc.
Are there significant advantages to separating those all out into discrete MidiLists?
const var noteOffTime = Engine.createMidiList();
const var noteOnTime = Engine.createMidiList();
etc.
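For what it's worth, here's a sketch of how the discrete-list layout might look. This is HISE script, not runnable outside HISE; it assumes the documented MidiList setValue()/getValue() API and stores times as integer milliseconds, since a MidiList holds integers:

```javascript
// Sketch only: one MidiList per property, indexed by MIDI note number.
const var noteOnTime = Engine.createMidiList();
const var noteOffTime = Engine.createMidiList();

function onNoteOn()
{
    // A MidiList stores integers, so convert uptime (seconds) to whole ms.
    noteOnTime.setValue(Message.getNoteNumber(),
                        Math.round(Engine.getUptime() * 1000.0));
}

function onNoteOff()
{
    noteOffTime.setValue(Message.getNoteNumber(),
                         Math.round(Engine.getUptime() * 1000.0));
}
```

Whether this beats an array of objects is mostly a trade between the readability of noteLog[60]["noteLength"] and the lower overhead of a fixed 128-slot integer store.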
@d-healey in my case, I’m looking to analyse the amount of time between the last note off and the new note on. But let me see if I’ve got this correct…
I would set a timer (let’s say, 10ms intervals).
I log the time of the note off event. I log the time of the new note on event. Timer callback checks if logged new note on time minus logged note off time is less than 50ms. If yes, trigger Sample A, if no, trigger Sample B.
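Assuming timestamps in milliseconds (e.g. Engine.getUptime() * 1000), the comparison step is pure logic; a minimal sketch with hypothetical names:

```javascript
// Hypothetical helper: pick a sample based on the gap between the
// last note-off and the new note-on (both timestamps in milliseconds).
function chooseSample(lastNoteOffMs, newNoteOnMs, windowMs)
{
    // A short gap means the phrase is connected: use Sample A.
    // A longer gap means a fresh start: use Sample B.
    return (newNoteOnMs - lastNoteOffMs) < windowMs ? "Sample A" : "Sample B";
}
```

For example, a note-off at 250ms followed by a note-on at 270ms falls inside a 50ms window, so Sample A would be chosen.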
@d-healey great, so you understand what I’m trying to achieve… surprisingly tricky to actually put into plain English!
I suppose I was worried about having loads of timers pinging off. This is a piano library, so I’d need to set a timer on every single noteOff event, and it would need to be polyphonic! Should I be worried? What’s the most CPU friendly way of doing this?
@Lindon how would that work in practice, though? I can’t work out how to say “50ms has now passed and another noteOn event has not been triggered, so you can now go ahead and trigger sample B”.
Here’s two different scenarios with an event timeline…
Scenario A
0ms — NoteOn eventID1
250ms — NoteOff eventID1
270ms — NoteOn event ID2
At 0ms I trigger an attack sample (let’s call it attackSampleX). I don’t want anything to be triggered at that 250ms NoteOff event point; I want that initial NoteOn voice to keep holding. I want to wait 50ms to see if we have another event incoming. As it happens, in Scenario A we do have another note-on event, so at the same time (270ms) as I trigger the new NoteOn attack sample (we can call that attackSampleY), I also want to trigger the appropriate “legato” release sample for the first note-on event (ID1). Let’s call that “ReleaseSampleLegato”.
I have managed to script this no problem using engineUptime, using Synth.playNote from the noteOn callback.
The issue is Scenario B
0ms — NoteOn eventID1
250ms — NoteOff eventID1
300ms — (there have been no new events)
Triggering AttackSampleX at 0ms. After that, as before, I don’t want anything to be triggered at that 250ms NoteOff event point. I want the voice to continue holding until 300ms in case there are any new noteOn events. In Scenario B there were none, so I want to go ahead and trigger a normal release sample (say, “ReleaseSampleNorm”).
I can’t work out how to do this, and it’s driving me crazy 🤪
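One way to model the two scenarios is a small state machine that defers the release decision: note-off only records a timestamp, a note-on inside the window resolves it as legato, and a periodic timer poll resolves it as a normal release once the window has elapsed. A minimal, HISE-independent sketch (all names hypothetical; in HISE the poll would live in the onTimer callback):

```javascript
// Hypothetical release decider for one MIDI note. Times are in ms.
function makeReleaseDecider(windowMs)
{
    var pendingOffTime = -1; // -1 means "no release pending"

    return {
        // Note-off: just remember when it happened; decide nothing yet.
        noteOff: function(t) { pendingOffTime = t; },

        // Note-on: if it lands inside the window, it's a legato retrigger.
        noteOn: function(t)
        {
            var legato = pendingOffTime >= 0 && (t - pendingOffTime) < windowMs;
            pendingOffTime = -1;
            return legato ? "ReleaseSampleLegato" : null;
        },

        // Timer poll: once the window has elapsed with no new note-on,
        // commit to the normal release sample.
        poll: function(t)
        {
            if (pendingOffTime >= 0 && (t - pendingOffTime) >= windowMs)
            {
                pendingOffTime = -1;
                return "ReleaseSampleNorm";
            }
            return null;
        }
    };
}
```

Walking the two scenarios through it: noteOff(250) then noteOn(270) returns "ReleaseSampleLegato"; noteOff(250) followed only by polls returns null at 260ms and "ReleaseSampleNorm" at 300ms.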
I am hitting a brick wall with the following...
For any given note's noteOff, I want to have a 50ms analysis window to allow me to determine which noteOff sample to trigger. If another noteOn (on the same midi note) is triggered within that 50ms, then we will trigger release sample A. If the 50ms window elapses without a repeated note, we will trigger sample B.
It seems like this should be straightforward enough, but I'm struggling to come up with an elegant solution (in truth, I'm struggling to come up with any solution!).
I am reluctant to use a timer, as my instinct tells me this will put heavy strain on the CPU, but perhaps I'm wrong?
btw, after more experimenting: sticking this...
Message.ignoreEvent(true);
...at the top of the noteOff function call, and then this...
Synth.addVolumeFade(eventId, fadeTime*1000, -99);
Synth.noteOffByEventId(eventId);
...does not throw an error. Seems a bit kludgy, though.
@d-healey my noteOn callback is empty, so the Sampler is simply triggering the mapped sample.
From what I can tell, the Event ID is the same for the noteOn as it is for the noteOff. That is to say, if I place the following in both the onNoteOn() and onNoteOff() callbacks...
Console.print(Message.getEventId())
...it returns the same ID for both.
This is my script for the attack samples...
function onNoteOff()
{
    local eventId = Message.getEventId();
    local noteNum = Message.getNoteNumber();
    local noteLength = Globals.noteOnArray[noteNum]['noteLength'];
    local fadeTime = .128;

    if(noteLength <= .045) {
        // it must have been shorter than or equal to 45ms
        fadeTime = .01;
    } else if (noteLength <= .16) {
        // it must have been longer than 45ms, but shorter than or equal to 160ms
        fadeTime = (((noteLength - .045) / .115) * .1) + .01;
    } else if (noteLength <= .45) {
        // it must have been longer than 160ms, but shorter than or equal to 450ms
        fadeTime = (((noteLength - .16) / .290) * .018) + .110;
    }

    Synth.addVolumeFade(eventId, fadeTime * 1000, -100);
}
It functions well, but that last line generates an error in the console log...
"onNoteOff() - Line 25, column 21: NoteOn with ID1 wasn't found"
which seems to break the script, and I can't add anything after that line.
Any clues? The noteOff cancelling out the noteOn eventID?
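As a side note, the piecewise fade-time mapping in that callback is pure arithmetic, so it can be sanity-checked outside HISE. A plain-JS restatement (helper name hypothetical) shows the curve is continuous at each breakpoint: 0.01s up to 45ms, ramping to 0.11s at 160ms, then to 0.128s at 450ms and beyond:

```javascript
// Plain-JS restatement of the fade-time mapping (note length in seconds,
// fade time in seconds). Helper name is hypothetical.
function computeFadeTime(noteLength)
{
    if (noteLength <= 0.045)
        return 0.01;                                        // <= 45ms
    if (noteLength <= 0.16)
        return ((noteLength - 0.045) / 0.115) * 0.1 + 0.01; // ramps to 0.11
    if (noteLength <= 0.45)
        return ((noteLength - 0.16) / 0.29) * 0.018 + 0.11; // ramps to 0.128
    return 0.128;                                           // > 450ms
}
```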
@Christoph-Hart Is the Simple Envelope polyphonic? Also, am I able to modulate the release time? I was using the Table Envelope because it appeared there was AttackTime and ReleaseTime modulators.
Basically, for any given performance of a single note (this is a piano library) the logic I'm looking for is this...
User triggers a noteOn event, which triggers the noteOn sample
noteOff event triggers a bit of script which does the following calculations...
a) calculate length of note (we have been logging noteOn and noteOff timestamps)
b) we offset the start point of the release sample by the length of the held note
c) we adjust the Attack Time of the noteOff sample, which is calculated according to the note length
d) we adjust the Release Time of the noteOn sample, which is calculated according to the note length
e) we enter the noteOn release phase and play the noteOff sample.
So, if I triggered 5 notes and then released them individually after holding them each for different lengths of time, they might all need to have different Attack Times and Release Times.
Perhaps I was doing something wrong, but when I was testing out the logic, it seemed like the release was being adjusted globally.
What would be the most efficient way of doing this? Last night I came to the idea of just using Synth.addVolumeFade(eventId, 50, -100); in the noteOff callback, and that seemed to work pretty well (or at least, it did actually do what I wanted and sounded correct, but I kept on getting an error message in the console telling me the noteOn with IDNNN didn't exist!).
Okay, silly mistake on my end (I had a detune on the noteOn callback).
So I can actually get these to play back as I want. However, Still some advice requested on how to set up the cross fade between samples.
As I said: on A1 noteOff, my idea is to have a 10ms linear fade out on the A1 sample and a 10ms linear fade in on the R1 sample. I'm currently doing this with the TableEnvelope, as it seems to be the best way I can get a linear fade out and fade in.
However, what if I wanted to adjust the curve (length and type) depending on how long the A1 sample was held for? Is TableEnvelope still the way to go? AHDSR envelope seems more flexible, but I can't see a way to adjust the release curve.
Any help with this much appreciated!
Douglas.
Something for which I don't seem to be able to find a simple solution...
I have an attack sample (let's call it A1), and a release sample (let's call it R1).
If I trigger A1 and release it, I want the R1 sample start point to be offset by the length of time A1 was held for. It needs to be sample accurate as the original samples are phase aligned.
What I'm doing at the moment is logging Engine.getUptime() when A1 is triggered, and then logging the same when it is released to calculate the note length (this gives me the note length in seconds). I'm then multiplying that number by sample rate (in my case 88200), to give me the sample offset value. I'm then using Synth.playNoteWithStartOffset to offset the R1 sample by that amount.
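The conversion itself is simple arithmetic, sketched below (helper name hypothetical). One caveat, hedged: the result should be rounded to a whole sample, and if Engine.getUptime() turns out to be quantized to the audio buffer rather than per-sample, the offset may need correcting by the block size to stay truly phase-aligned.

```javascript
// Hypothetical helper: held-note duration (seconds) -> integer sample offset.
function sampleOffset(noteOnUptime, noteOffUptime, sampleRate)
{
    // Round to a whole sample; a fractional offset would break phase alignment.
    return Math.round((noteOffUptime - noteOnUptime) * sampleRate);
}
```

For example, a note held for 0.5s at 88200Hz gives an offset of 44100 samples.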
On A1 noteOff, my idea was to apply a 10ms linear fade-out to the A1 sample and a 10ms linear fade-in to the R1 sample. But it's not working: A1 and R1 are out of phase.
Any help with this much appreciated!
Douglas.
@Christoph-Hart said in setMultiGroupIndex vs. discrete Samplers:
Yeah one sampler is probably the way I would go here.
Nope.
For release samples you definitely want to use a separate sampler.
Yup.
The rule of thumb is: Use a dedicated sampler as soon as you would start treating groups with a different logic. If the hammer releases should behave differently than the normal ones (eg. the normal ones need to adjust their gain depending on how long the note was played, while the hammer sounds are always at the same volume), then these things should go into a dedicated sampler.
8 Samplers is absolutely no problem and if you follow the advice of @aaronventure and set the voice limit of every instance to the absolute minimum that you can tolerate, then there is almost no overhead in memory footprint as compared to using one sampler.
Thanks for the input Chris. Your rule of thumb is where I was intuitively heading, so good to have it endorsed.
@d-healey said in setMultiGroupIndex vs. discrete Samplers:
For release samples you definitely want to use a separate sampler.
Groups are just for organising samples, they have no module chain of their own. If you need to apply different modulators/effects to different sets of samples then you need multiple samplers.
Yeah, this is what I suspected, but I wanted to make sure that I wasn't missing something obvious.
@tonewolf said in setMultiGroupIndex vs. discrete Samplers:
But, to repeat... I am very new to this,
I would start with a few simpler projects before diving into your big one. You'll probably save yourself some time in the long run.
Yes, I hear you. I suppose I am kind of doing that, building little components of the instrument for myself as I go along. And I do have these piano samples just sitting here to be messed around with :)
Thanks for the feedback. In case you're wondering why I'm asking such newbie questions, some context: I do have a lot of experience in sampling and was behind the ns_kit7 drum sample library (if any of you remember that: https://douglaswhates.com/work/ns_kit7). A few years back I recorded a vast piano sample library which has been on hiatus, mostly edited but as yet unmapped. Since then HISE has come on the scene, and it will allow me to overcome some issues I was having with Kontakt (until this stage I had relied on HALion and latterly Kontakt, with my very limited scripting ability). HISE has got me super-excited because I am adept at JS through web development work, so I'm finding the scripting relatively straightforward. But, having developed huge sample libraries in the past, I'm just making sure I'm not going too deep down the wrong rabbit hole!
So, yeah... this project I want to use HISE for is a super-detailed piano library with over 25,000 samples. It may not be necessary to use multiple Samplers. I'm still at the getting-used-to-HISE stage of things and very much leaning on this community of experts here (as well as your great videos @d.healey).
In a way, some of the articulations I am using are like separate instruments, even if they combine to create one sound. Without giving too much away at this stage, I have several different components which would be triggered on a note-on event, all with varying degrees of delay. The release samples, in particular, are pretty comprehensive. Any given note-off event would be selecting from potentially a dozen different release samples, all of which would have separate attack curves depending on a) note-on velocity, b) note length, and c) note number (pitch).
I'm just kind of improvising with HISE at this stage and getting a feel for it, but what I've been liking is having big groups of samples grouped into separate Samplers, as it's allowing an easy overview of what's going on. I find using the RR groups for layers a bit awkward.
Also, it is unclear to me at this stage whether different Groups (RR) within a Sampler can have different envelopes applied to them. It seems to be a thing which is applied to the whole Sampler module. Having stuff in separate Samplers, I'm liking the simplicity (from the developer's point of view) of inserting those attack envelopes and using velocity and note number to modulate them.
But, to repeat... I am very new to this, which is why perhaps some workflow and HISE logic is not completely intuitive to me yet.
@aaronventure Thanks for this. So, all things being equal, is a Sampler just as RAM efficient as setMultiGroupIndex()? In other words, setMultiGroupIndex() would only be more efficient if I wasn't mindful of Sampler voice count?
I suppose my (perhaps misguided) concern was that I came across a couple of posts here saying that with 32 Samplers the engine started to struggle. Perhaps that was only to do with not managing voice count?