All About Latency
-
I have two HISE plugins — a virtual instrument, and an FX plugin. Could anyone please help me with Latency Reporting?
Goals
The latency I report to the DAW must be sample-accurate.
The duration of the latency my plugins incur doesn't matter. (Even in the best of all worlds, my latency is more than one second.)
Questions
- How do I determine what latency to report (i.e. setLatencySamples)?
- What happens when my plugins' internal latency changes during playback? (Short answer: I gotta think in terms of maximum delays.)
- How do I account for the latency incurred by external factors, such as other plugins not reporting their latency correctly?
Current Strategy
- Based on the loaded Preset, I determine the maximum latency that my plugin could incur.
- Report that to the DAW (setLatencySamples).
- Inside my plugin, I am constantly computing the actual latency.
- Using a final SimpleGain module on the Module Tree, I set its delay to [Time Reported to the DAW] - [My Plugin's Current Actual Latency], as sketched below.
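To make that last step concrete, here's a minimal sketch of the arithmetic (hypothetical names and numbers, not my actual code):

```cpp
// Sketch: keep the total delay constant by padding with the difference
// between the fixed figure reported to the DAW and the latency the
// currently active DSP actually incurs right now.
int compensationDelaySamples (int reportedLatencySamples,      // e.g. 96000 (2 s @ 48 kHz)
                              int currentActualLatencySamples) // whatever the Preset incurs now
{
    // This must never go negative, which is why the reported figure
    // has to be the maximum the plugin could ever incur.
    return reportedLatencySamples - currentActualLatencySamples;
}
// e.g. 96000 reported - 30000 actual -> set the SimpleGain delay to 66000
```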
But this is agonising, and sometimes I simply can't even figure out what the actual latency is…or even what that means.
So here's a little more information about the plugins, and hopefully there's a better way?
Latency as a Passive Result of DSP
Compensate for Latency: Sometimes.
Latency Source: Anything in a Module Tree or ScriptNode Network (including Faust).
I assume there's some latency as an (unintended) side-effect of my DSP. Convolution engines are a special case:
- For reverb, I don't need to account for that latency…reverb requires delay.
- For transformations (e.g. microphone modelling), I do need to account for latency.
Users can optionally activate Modules and ScriptNode Networks, each of which may incur passive and/or explicit latency. So these latencies can vary during playback.
Latency as an Active Result of Audio Alignment
Compensate for Latency: Always
Latency Source: SimpleGain Modules, ScriptNode jDelays.
My plugins align microphones on single instruments (e.g., top and bottom snare microphones), between instruments (e.g., snare drum to overhead microphones), imaginary microphones, and ambience microphones. Fortunately, the delay times have all been computed previously, offline; they're recalled as part of the Preset.
Latency as an Active Result of Note Onset Scheduling
Compensate for Latency: Always
Latency Source: (int samplesToDelay) in the real-time MIDI thread.
A quick explanation: in DAWs, virtual instruments align the beginning of notes (i.e., note-ons) with the beginning of the audio file. But that's not how humans perceive notes…we experience them (always) as starting later. To implement note-onset scheduling, we need to begin playing notes before their timestamp. That's currently (generally) impossible. So, once again, we need to shift the entire DAW forward, as sketched below.
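Here's a minimal sketch of what I mean (hypothetical names; it assumes the perceived-onset offset of each sample is known ahead of time):

```cpp
// Report the worst-case perceived-onset offset as the plugin's latency
// (via setLatencySamples). The DAW then pulls our output earlier by that
// amount, so delaying each trigger by the difference lands every
// perceived onset exactly on its MIDI timestamp.
int triggerDelaySamples (int reportedLatencySamples, // max onset offset across all samples
                         int onsetOffsetSamples)     // this note's perceived-onset offset
{
    // trigger delay + onset offset - reported latency == 0 relative to the grid
    return reportedLatencySamples - onsetOffsetSamples;
}
```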
Latency as an Active Result of User Features
Compensate for Latency: Never
Latency Source: Anything in a Module Tree or ScriptNode Network (including Faust).
Some plugin features incur a desirable delay. For example, the user can "…move the microphones further from the instruments in the original recordings."
So…I've dug myself quite a hole here.
Any ideas?
Cheers!
-
@clevername27 said in All About Latency:
Questions
- How do I determine what latency to report (i.e. setLatencySamples)?
I see 2 ways:
- Direct measurements in DAW
- "in-plugin" measurements
Then make a latency array containing the values for the different scenarios, which you can refer to when reporting to the DAW.
I am making my table with "in-plugin" measurements that I later confirm in the DAW (you save an export).
For this you can set up a buffer with a peak sample that you play at the beginning of the chain, and record a second buffer at the end. Compare the two -> you have your latency.
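In rough C++ terms, the comparison is just peak detection (a sketch only; the names are made up, and it assumes the impulse survives the chain as a clear peak):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// The input buffer had a single full-scale sample at index 0; find the
// strongest peak in the recorded output and take its index as the latency.
int measuredLatencySamples (const std::vector<float>& recordedOutput)
{
    std::size_t peakIndex = 0;
    float peakValue = 0.0f;

    for (std::size_t i = 0; i < recordedOutput.size(); ++i)
    {
        const float v = std::abs (recordedOutput[i]);
        if (v > peakValue) { peakValue = v; peakIndex = i; }
    }
    return (int) peakIndex; // samples between input impulse and output peak
}
```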
- What happens when my plugins' internal latency changes during playback? (Short answer, I gotta think in terms of maximum delays.)
That doesn't seem advisable. I am not sure DAWs like such a fickle dude. I know it's possible for a plugin to report a dynamic latency, and for some DAWs to accept it. But they don't all seem capable of adapting (some do, some at the next buffer, some only after a stop...)
So my approach would be to report the maximum delay (constantly), then compensate "in-plugin" by adding delays up to the difference. But someone else might have better expertise on this...
- How do I account for the latency incurred by external factors, such as other plugins not reporting their latency correctly?
Do you mean plugins playing on the same track/FX chain? They are not your concern... Or am I not understanding the question?
Current Strategy
- Inside my plugin, I am constantly computing the actual latency.
How do you do that? Summing pre-guessed values depending on the configuration/state of the different DSPs, or actual real measurements? The second is less error-prone, hence my question...
- Using a final SimpleGain module on the Module Tree, I set its delay to [Time Reported to the DAW] - [My Plugin's Current Actual Latency].
Yeah so that's what I was talking about 3 questions above... not sure it helps then...
But this is agonising, and sometimes I simply can't even figure out what the actual latency is…or even what that means.
That's where the question of computed values VS actual measurements makes sense
-
I don't think you should be aiming at calculating your latency (and eventual PDC figure in the DAW) on a dynamic basis. I think you should be aiming for a static value that works for all potential use cases.
-
What happens when you report latency to a DAW depends on the DAW. Each one will behave differently.
With that in mind, you should absolutely NOT make that call if transport is playing. The TransportHandler can give you flags for when transport is playing and for when an offline render is taking place, and these are the two scenarios where you absolutely want to avoid making that call.
There's a broadcaster for when the sampling rate changes, so use that to make a new latency report.
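Since HISE exports a JUCE plugin under the hood, the guard amounts to something like this at the C++ level (a sketch, not HISE script; pendingLatency and the deferred-report bookkeeping are assumed, and the playhead calls are the JUCE 7 API):

```cpp
// Sketch inside a juce::AudioProcessor subclass: only push a new latency
// figure to the host when transport is stopped and we're not rendering.
void updateReportedLatency (int newLatencySamples)
{
    if (isNonRealtime())              // offline render in progress: don't touch it
        return;

    if (auto* playHead = getPlayHead())
        if (auto pos = playHead->getPosition())
            if (pos->getIsPlaying())  // transport running: defer the report
            {
                pendingLatency = newLatencySamples; // assumed member, applied on stop
                return;
            }

    setLatencySamples (newLatencySamples);
}
// On a sample-rate change (prepareToPlay in JUCE, the broadcaster in HISE),
// recompute the figure and call updateReportedLatency() again.
```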
Your reported latency for instruments should be such that the on-grid MIDI events produce notes that are rhythmically quantized. This will vary between different instruments. It's also a good idea to let the user disable this in case they want to perform their instruments, at which point they'll compensate for the attack while playing.
Your reported latency for DSP should be such that it doesn't change the timing of the audio. Use null-testing to find out the numbers if you don't have them from your DSP code.
-
@aaronventure @Orvillain Thank you for your replies, and for reading my long-ass post.
- What does it mean to manually measure the latency in the DAW when the latency (itself) isn't fixed? I imagine most plugins are like this—depending on what features the user has selected, different DSP is engaged, and the latency may change.
- Wouldn't slower computers have the potential for higher latency with the same plugin?
- Why wouldn't it work for the plugin to measure its own latency? The user would stop the DAW transport, and click a button on the plugin. This triggers a sample that begins with the top of a square wave, and I measure the time elapsed (precisely) until it reaches the output buss. Except…how would I know (precisely) how much time had elapsed? I can measure within the buffer exactly when the sound arrived—but how do I know what transport position the buffer corresponds with? And…measuring that would, itself, incur latency—because I could only measure it after a minimum of one buffer had elapsed.
- Am I correct that the only possible approach is for me to report a long latency (like two seconds), then continually try to predict my plugin's latency, continually subtract that value from the long latency, and (continually) set a delay to that value at the end of my Module chain? But that wouldn't work while the transport is playing, because if you try to change that delay live, there is either a drop-out or dead space (and square waves galore). That means I can't allow users to automate any parameter in my plugin—but that's also kind of the whole idea of the plugin.
- The only way I see any of this working is if the DAW measures plugin latency—that would be simple for everyone, and always accurate. Anything else seems like a crude guess that can never ensure sample-accuracy. Which means my plugins cannot work.
Thanks…I'm still lost here, but it's getting a little clearer.
Cheers.
-
@clevername27 said in All About Latency:
I imagine most plugins are like this—depending on what features the user has selected, different DSP is engaged, and the latency may change.
Yes
@clevername27 said in All About Latency:
Wouldn't slower computers have the potential for higher latency with the same plugin?
Do you mean that it could be useful for users with slower computers to have access to higher latency? If so, they already do via increased buffer size in their interface (unless you're on an M-series Mac).
@clevername27 said in All About Latency:
Why wouldn't it work for the plugin to measure its own latency?
I don't think it wouldn't work, I just don't know how exactly you would do it in HISE. You need to match/detect the start of your test signal (could be a simple impulse) at the output and measure how many samples have passed.
@clevername27 said in All About Latency:
Am I correct that the only possible approach is for me to report a long latency (like two seconds), then continually try to predict my plugin's latency, continually subtract that value from the long latency, and (continually) set a delay to that value at the end of my Module chain? But that wouldn't work while the transport is playing, because if you try to change that delay live, there is either a drop-out or dead space (and square waves galore). That means I can't allow users to automate any parameter in my plugin—but that's also kind of the whole idea of the plugin.
If you want to allow the user to adjust the delay and still be on time, you need to also give them a look-ahead or negative delay control, the value of which you'll simply report to the DAW. This cannot be automatable and any changes made to it during transport playback should not trigger the method call (simply call it on transport stop).
@clevername27 said in All About Latency:
The only way I see any of this working is if the DAW measures plugin latency—that would be simple for everyone, and always accurate. Anything else seems like a crude guess that can never ensure sample-accuracy. Which means my plugins cannot work.
Yes but how? What would it be measuring? It's just signal processing, and you're processing that signal. How would it know at which point to say "ah, there it is! that's how much it's been delayed!"? If you have any sort of distortion or linear phase filtering that has pre-ringing, that's a part of the signal that you normally report for PDC. How would it know how much? If the filtering smears the transient, how would it know exactly what the creative intent behind the filtering/distortion is?
-
@ustk said in All About Latency:
Inside my plugin, I am constantly computing the actual latency.
How do you do that? Summing pre-guessed values depending on the configuration/state of the different DSPs, or actual real measurements? The second is less error-prone, hence my question...
Thank you for the question. I'm calculating the latency, because I don't see how it would be possible to measure it—given what my plugin does, all I could measure would be one of an infinite number of possible states.
I'll talk to the VST team at Steinberg, see if they have any ideas, and report back.
-
@aaronventure said in All About Latency:
I imagine most plugins are like this—depending on what features the user has selected, different DSP is engaged, and the latency may change.
Yes
How am I just learning this? 🫠 (Thank you.)
As I understand it, then—it is simply not possible to know your plugin's latency.
Is the idea that if you reported your latency correctly, the scheduler would be sample-accurate? If so, given that it's impossible to measure a plugin's latency, how do we know if the scheduler works?
Do you mean that it could be useful for users with slower computers to have access to higher latency? If so, they already do via increased buffer size in their interface (unless you're on an M-series Mac).
I mean that if a plugin is performing a non-constant-time operation (i.e., worse than O(1) complexity), then the amount of time the DSP code takes to execute may vary, meaning the audio output may get written later in the buffer (or the next buffer)?
At the same time, I don't understand the point of scheduling in a real-time thread…because
I don't think it wouldn't work, I just don't know how exactly you would do it in HISE. You need to match/detect the start of your test signal (could be a simple impulse) at the output and measure how many samples have passed.
I'm thinking it won't work, because of some quantum-like paradox. 🤪
- How do you measure the number of delayed samples in a system where you don't know the latency? There is no fixed frame of reference. That's like…a Relativity paradox.
- How do you measure time in a system where taking the measurement takes time—and that measurement must be incorporated into the result? That's like…a quantum paradox.
@Christoph-Hart, do you please have any thoughts here? The issue isn't, "Are there cases where you can know/measure the latency?" It's, "In every case, can you know/measure the latency?" I realise this is probably a plugin format issue, not HISE (or JUCE).
Bill
-
@clevername27 said in All About Latency:
Is the idea that if you reported your latency correctly, the scheduler would be sample-accurate? If so, given that it's impossible to measure a plugin's latency, how do we know if the scheduler works?
Yes. While playing back audio, a properly functioning DAW will take PDC into account for each track: it plays the track with the most delay first, then the others, so that they're all matched up. In Reaper, you can view PDC for each track in the Performance Meter window. You can even see it change based on HISE's method call to report the latency.
To check that it works, have a plugin that simply delays the audio by a given value. Create a knob, and add a callback that reports the latency based on the current knob value × the current sampling rate, then sets the delay. You can use the stock delay node in ScriptNode.
Now create a track, add an audio file, duplicate the track, put the plugin on one of the tracks, and invert the phase on it. Now hit play and adjust your knob for the delay. No matter what you set it to, you should get silence (except for a few moments while turning the knob during playback, where there's a mismatch between the reported delay being adjusted by the DAW and the delay node reacting instantly).
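The core of such a test plugin is just a plain integer delay whose length is also the number you report (a C++ sketch with made-up names; in HISE you'd use the stock delay node plus the scripting call that reports latency instead):

```cpp
#include <cstddef>
#include <vector>

// Circular-buffer delay: output = input delayed by exactly delaySamples.
// Report the same delaySamples to the host; the phase-inverted duplicate
// track should then null to silence.
struct TestDelay
{
    std::vector<float> buffer;
    std::size_t writePos = 0;
    int delaySamples = 0;

    void prepare (double sampleRate, double maxDelaySeconds)
    {
        buffer.assign ((std::size_t) (sampleRate * maxDelaySeconds) + 1, 0.0f);
        writePos = 0;
    }

    float processSample (float in)
    {
        buffer[writePos] = in;
        std::ptrdiff_t readPos = (std::ptrdiff_t) writePos - delaySamples;
        if (readPos < 0)
            readPos += (std::ptrdiff_t) buffer.size();
        const float out = buffer[(std::size_t) readPos];
        writePos = (writePos + 1) % buffer.size();
        return out;
    }
};
// Knob callback (conceptually): delaySamples = round(knobSeconds * sampleRate),
// then report the same number via the latency-reporting call on transport stop.
```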
@clevername27 said in All About Latency:
I mean that if a plugin is performing a non-constant-time operation (i.e., worse than O(1) complexity), then the amount of time the DSP code takes to execute may vary, meaning the audio output may get written later in the buffer (or the next buffer)?
Aren't all realtime audio operations like that? That's what the buffer is for. Even if you're overflowing the buffer, you'll get audio breaking up, but an offline render will still render the audio correctly, in sync.