Message.delayEvent() - 100% reliable and sample accurate?
-
Hey folks!
After several years out of the sample library development game, I'm super, super excited to have discovered HISE. I've managed to compile a build and have been enjoying getting a feel for the app. As an independent developer it feels really, really nice finally to have a valid alternative to Kontakt.
A question to kick things off around Message.delayEvent()
Can I rely on Message.delayEvent() for triggering two samples back to back? Say my sample rate is 88200 Hz (88.2 kHz), and let's say I have one sample which lasts 88200 samples, and another which I want to start seamlessly after that:
```javascript
function onNoteOn()
{
    Message.delayEvent(88200);
}
```
Can I rely on this functionality? I have a set of samples which I would like to have shared "tails". So, for example, let's say I hit a MIDI key, it would play sample A, and if you continue holding the sample past the 1 second mark, it would trigger sample B.
And a couple of related questions... does this sample delay add to the polyphony count? Or can we safely assume that, because those two samples are played back to back but never overlap, it would use just one voice?
And further to that, if I were to release the key after, say, 80000 samples, is there a way to cancel this delay event? If so, how would I go about doing it?
I'll leave the questions there for now.
Thanks again for this programme. Just barely scratching the surface and already I'm bowled over by the possibilities!
Douglas.
-
@elswhrco yes.
It will delay the execution of the message itself (i.e. the next onNoteOn callback down the chain) by that amount.
So if you now want things to happen with the delayed note, add another processor as a child (likely a container or a sampler) and do what you have to in its MIDI processor.
So the first MIDI processor just serves to delay the notes; its child will receive the delayed note-ons, which you can then work on.
Just remember: it needs to be a child processor, not another script under the same processor.
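A minimal sketch of that layout, assuming a parent MIDI processor that only delays, and a child processor whose own script receives the delayed events (88200 samples at 88.2 kHz per the example above; the comments about what to do in the child are placeholders):

```javascript
// Script in the PARENT MIDI processor:
// every note-on is pushed 88200 samples (1 s at 88.2 kHz) into the future.
function onNoteOn()
{
	Message.delayEvent(88200);
}
```

```javascript
// Script in a CHILD processor (e.g. on a child sampler):
// its onNoteOn fires when the delayed event actually arrives,
// so this is where you act on the delayed note.
function onNoteOn()
{
	// e.g. select or trigger the "tail" sample here (placeholder)
}
```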
-
Good to know, thanks.
What got me wondering about how much I could rely on this is the following paragraph from the HISE online docs...
Compared to compiled C++ code, a Javascript engine is rather slow. But for occasional event handling like incoming MIDI data or a timer every 50 milliseconds, the JS engine is absolutely fit for purpose. This is completely sufficient for most use cases of audio plugins and virtual instruments. If you don't want to relinquish the performance gains of a native C++ implementation, please consider taking a look at the C++ API.
But perhaps I'm misunderstanding how the script processing works? The above paragraph would seem to suggest that I might have to take into consideration a potential timing inaccuracy/margin of 50 ms.
I have a piano sample library in mind. Every note which is struck would need to log the start time (I guess Message.getTimestamp()?) of the noteOn event, and then the timestamp of the noteOff event, to determine how long the note was held and therefore which release sample to trigger (I assume this would be more efficient than starting a timer for every note). Am I correct in thinking that the timing accuracy of these Message.getTimestamp() calls would be dependent on client-side processing speed? And, assuming yes, wouldn't that also be the case for Message.delayEvent()?
But... as I said... I'm very new to HISE. I'll be testing this all out in the app, but hope some of you can give some clarity on what's going on under the hood. For example, I seem to remember Kontakt only allowed delaying samples by microseconds (not samples) which caused problems when you needed sample accurate playback.
-
@elswhrco the buffer concept exists to mitigate CPU speed differences and trade immediate execution for stability and consistency, at the cost of some latency.
If you pack your production full of effects, turn oversampling to max, etc., and your audio render spits out the song at 0.3x speed (taking roughly 3x as long to render as it does to play through), the audio still plays at normal speed when you play it back.
All the plugins running, all the VIs, their timers have no issues with this. It's all sample-accurate. It has to be, after all, that's the core concept of audio software.
In HISE, Engine.getUptime() is 100% realtime correct and reliable for timing purposes. delayEvent() is as well.
So, you can mark the timing using Engine.getUptime() for individual notes, mark it again on note off, do the calculation and then play the correct release sample or do whatever you want with the result.
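A rough HISEScript sketch of that idea (the array name, the 1-second threshold, and the release-selection comments are placeholder assumptions; the declaration would live in onInit):

```javascript
// onInit: one slot per MIDI note number.
const var noteOnTimes = [];

function onNoteOn()
{
	// Engine.getUptime() returns the audio engine's uptime in seconds.
	noteOnTimes[Message.getNoteNumber()] = Engine.getUptime();
}

function onNoteOff()
{
	var heldFor = Engine.getUptime() - noteOnTimes[Message.getNoteNumber()];

	// Pick a release sample based on how long the note was held.
	if (heldFor > 1.0)
	{
		// trigger the "long" release layer (placeholder)
	}
	else
	{
		// trigger the "short" release layer (placeholder)
	}
}
```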
You can probably do it with getTimestamp() too, but I haven't used that method, so you'll have some experimenting to do there.
The paragraph you're referring to talks about using the timer callback to execute code at precise intervals, which I don't think you need for your piano library (or, at least, not in the example you've given).
-
@aaronventure Massively appreciate your help and comments here. And yes, I see now how I've misconstrued that paragraph. So I guess it's sort of saying... "if you start setting off dozens of timers every 1ms you might have issues, but for most tasks you'll be fine."
The sample accuracy thing now makes sense and a lightbulb went off in my head when you talked about buffering, etc.
I've done some tests with delayEvent() and it works really well, and also used the Engine.getUptime() to do some basic release sample stuff and, yeah... all good.
Excited!
-
@elswhrco the timer paragraph talks about how HISEScript isn't super efficient, so if you need a timer that executes more than once every 20ms or so, it's gonna be problematic.
It's referring to the onTimer callback, which is a realtime, sample-accurate timer that you start with Synth.startTimer(seconds).
Needing 100% accuracy in repetitive code execution is a highly situational scenario, because the code there is mostly going to be event-related; DSP isn't going to be done there.
For DSP, you should be doing it in ScriptNode/Faust/SNEX/RNBO anyway, as that works at C++ speeds.
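For reference, the timer in question looks roughly like this (the 50 ms interval is just an example; per the docs paragraph quoted earlier, intervals much shorter than ~20 ms are where HISEScript starts to struggle):

```javascript
// e.g. in onInit: start the synth timer with a 50 ms interval.
Synth.startTimer(0.05);

function onTimer()
{
	// Periodic event-level work goes here; DSP belongs in
	// ScriptNode/Faust/SNEX/RNBO instead.
}
```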
-
@aaronventure said in Message.delayEvent() - 100% reliable and sample accurate?:
Just remember, it needs to be in a child processor, not have it under the same processor.
Can you pls expand on this?