Ring Buffer design
-
Here's another example:
Wrapping is now done with a single bitmask, no floor or % calls.
So your assumption is that if you know the limit is a power of two, you can skip the expensive modulo call (which is the same as a division) and resort to bit fiddling: (a & (LIMIT - 1)). This is theoretically true, but this example shows that the compiler is as smart as you are, so there's no need to do this yourself.
I admit I'm using this a lot in the HISE codebase too, and it's not per se "bad coding style" (it still speeds up the debug builds), but you also don't leave any performance on the table by skipping it - in fact the index::wrap function just uses the modulo operator, and since you can give it a compile-time upper limit it will benefit from this optimization for power-of-two buffer sizes too.
So the code example calls the default modulo operator, but once with an unknown variable and once with a guaranteed power of two. The first function call results in:
idiv r8d
which is bad (idiv is division so slow). The second one results in:
dec ecx
or  ecx, -32    ; ffffffffffffffe0H
inc ecx
which is more or less the bit fiddling formula above (I wouldn't try to understand what it does exactly, but a quick look at the instruction names hints at some very lightweight instructions).
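For reference, the comparison is easy to reproduce in Godbolt with something like this (the function names are made up for illustration, not from the HISE codebase):

```cpp
#include <cstdint>

// Runtime limit: the compiler has to emit a real division (idiv).
int wrapRuntime(int index, int limit)
{
    return index % limit;
}

// Compile-time power-of-two limit: the compiler can reduce the modulo
// to cheap bit operations on its own, no manual masking required.
template <int Limit>
int wrapCompileTime(int index)
{
    static_assert((Limit & (Limit - 1)) == 0, "Limit must be a power of two");
    return index % Limit;
}

// Force both instantiations so the generated code shows up side by side.
int useIt(int i, int runtimeLimit)
{
    return wrapRuntime(i, runtimeLimit) + wrapCompileTime<32>(i);
}
```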
-
Thanks for all this, it's super interesting.
I'm afraid I've turned this thread into an optimisation discussion, my fault whoops
-
@griffinboy all the responsibility is mine, not even contributing... But hell I'm glad I've done this lol
learning a lot
-
Nice additions!
So the reason I started learning about bitwise operations is because eventually I'll want to run the same code (or at least port it over) to the DaisySeed platform, so performance is reasonably important. But I've not even really worried about trying to squeeze stuff into fixed point math.
I'm definitely curious though!
@griffinboy no worries at all!
-
@ustk As I understand it, SNEX is a subset of JUCE and C++ and doesn't have their full capabilities. I have used it for some simple waveshaping and I did also write a sample playback node in it, but I hit some snags that at the time I wasn't able to work around. But probably just a knowledge hole that hopefully by now I've filled!!
The last thing I wrote in it was a sample+hold effect.
-
@Christoph-Hart said in Ring Buffer design:
TLDR: Start with the implementation that is the easiest to understand / write / debug, then profile or put isolated pieces into Godbolt to see what can be improved.
This was my thought too - it isn't any good writing code if I come back to it a week later and can barely understand it. So I try to approach things in the easiest to understand way first, even if it is a bit knuckleheaded. You can always optimize the hell out of a for loop or array access, later on down the line.
-
@griffinboy said in Ring Buffer design:
I don't use cubic interpolators on my delay lines, I use FIR filters (using SIMD) using impulses which are equivalent to windowed sinc interpolation.
Hey thanks dude for this. I went off on a ChatGPT research mission, and I think I'm starting to understand it. I've got it implemented anyway, it seems to work.
Here's how I'm understanding it, again with a bit of help!!!
....
Sinc uses a band-limited approach to reconstruct the original continuous signal. But computing a fresh sinc for every fractional position would be super expensive. So we precompute lots of sinc kernels and whack 'em in a table.
The table has a bunch of rows - phases.
Each row contains taps - the weights that nearby samples get multiplied by. More taps give you a better frequency response, but more CPU usage.
Each row is then a windowed sinc where 'k' is the tap's integer offset from the centre sample and 'f' is the fractional position. You take a window function like Hann or Blackman-Harris, and that gently fades the ends so that the truncated sinc doesn't ring.
After building the row, normalize it so it sums to 1. This keeps DC level stable.
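In code, the table build looks roughly like this - just a sketch with made-up names (buildSincTable, numPhases, numTaps) and a Hann-style window assumed:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch of the table build: numPhases rows of numTaps weights each,
// windowed and normalised, stored flat one row after another.
std::vector<float> buildSincTable(int numPhases, int numTaps)
{
    const double pi = 3.14159265358979323846;
    std::vector<float> coeffs((std::size_t)numPhases * (std::size_t)numTaps);
    const int centre = numTaps / 2;

    for (int p = 0; p < numPhases; ++p)
    {
        const double f = (double)p / (double)numPhases;   // fractional position for this row
        double sum = 0.0;

        for (int t = 0; t < numTaps; ++t)
        {
            const double x = (double)(t - centre) - f;    // tap offset minus the fraction
            const double sinc = (x == 0.0) ? 1.0 : std::sin(pi * x) / (pi * x);

            // Hann-style window: gently fades the ends of the truncated sinc
            const double window = 0.5 - 0.5 * std::cos(2.0 * pi * (t + 0.5) / numTaps);

            const double c = sinc * window;
            coeffs[(std::size_t)p * numTaps + t] = (float)c;
            sum += c;
        }

        // Normalise the row so it sums to 1 and the DC level stays stable
        for (int t = 0; t < numTaps; ++t)
            coeffs[(std::size_t)p * numTaps + t] /= (float)sum;
    }
    return coeffs;
}
```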
Store all rows flat in a coeffs array, with some helpers to read the correct row - rowNearest and rowBlended.
Nearest gives you the closest phase row for the given fraction.
Blended gives the two neighbouring rows plus a blend factor, so you can crossfade across the two.
Then inside my RingDelay, my new 'getReadPos' function gives an integer index and a fractional part. This can be used with any of my interpolation methods (nearest, linear, cubic, lagrange3, and now sinc).
readSinc uses nearest. readSincBlended uses blended.
For each tap, look up the neighbouring samples around i1, and multiply by the weights. Sum them up, and that gives the interpolated value.
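A stripped-down version of the nearest-row read could look like this (again a sketch with placeholder names, assuming a power-of-two buffer size so the read index can be wrapped with a mask):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch of a readSinc-style read with the nearest phase row. 'buffer' is the
// delay line, 'mask' wraps the read index, i1/frac come from getReadPos.
float readSincNearest(const float* buffer, int mask,
                      const std::vector<float>& coeffs,
                      int numPhases, int numTaps,
                      int i1, float frac)
{
    // rowNearest: pick the closest precomputed phase row for this fraction
    const int row = std::min((int)(frac * (float)numPhases + 0.5f), numPhases - 1);
    const float* w = coeffs.data() + (std::size_t)row * (std::size_t)numTaps;

    const int centre = numTaps / 2;
    float sum = 0.0f;

    // One multiply-accumulate per tap on the samples around i1
    for (int t = 0; t < numTaps; ++t)
    {
        const int idx = (i1 + t - centre) & mask;
        sum += w[t] * buffer[idx];
    }
    return sum;
}
```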
So Cubic or Lagrange - they use 4 samples and look smooth, but are not truly band limited. They roll off highs in the passband, so things sound slightly dull. Stopband rejection is not great, so they can alias or cause whistle tones.
Windowed Sinc - approximates the ideal reconstruction. Much flatter passband so it keeps brightness. Much stronger stopband attenuation, so there is less aliasing. It is also linear phase, so transients don't get skewed. When you sweep the delay time, it stays clean and consistent. But it costs more CPU.
-
That's exactly it.
And the cool thing about using a FIR interpolator like this with multiple taps is that you can SIMD it.
Processing 4 taps in one SIMD instruction cuts CPU significantly.
HISE has XSIMD included - you don't need to include it yourself, you can just use it in a C++ node.
Oh, but this technique can potentially add latency, so it's not good for certain uses where it's too fiddly to compensate latency in different places.
But it's good for delay effects and for reading samples from a buffer at a new speed.
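Roughly like this with xsimd (just a sketch, not code from HISE - I'm assuming a reasonably recent xsimd and that the samples have already been copied out of the ring buffer into a straight array):

```cpp
#include <xsimd/xsimd.hpp>

// Sketch: accumulate the FIR taps a whole SIMD batch at a time.
// 'weights' is one table row, 'samples' the input samples around the read
// position, numTaps a multiple of the batch size. Names are mine.
float firDotSimd(const float* weights, const float* samples, int numTaps)
{
    using batch = xsimd::batch<float>;
    const int step = (int)batch::size;   // 4 on SSE/NEON, 8 on AVX

    batch acc(0.0f);
    for (int t = 0; t < numTaps; t += step)
    {
        const batch w = batch::load_unaligned(weights + t);
        const batch s = batch::load_unaligned(samples + t);
        acc = xsimd::fma(w, s, acc);     // several multiply-adds per instruction
    }
    return xsimd::reduce_add(acc);       // horizontal sum of the lanes
}
```
-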
So yeah, what I've implemented in my ring delay are the following:
Read Nearest: uses 1 sample. It is the worst quality in terms of aliasing and modulation smoothness. But the lowest CPU usage.
Read Linear: Uses 2 samples. It has audible HF loss, the aliasing is poor, but the modulation is a level above nearest. CPU usage is still low.
Read Cubic: Uses 4 samples. Somewhat noticeable HF roll-off, aliasing performance is fair, and the modulation is surprisingly smooth. Medium CPU usage.
Read Lagrange3: Uses 4 samples. Better HF retention than cubic, but still rolls off. Aliasing rejection is okay, but does let some junk through. Smooth modulation. Medium CPU usage.
Read Sinc: Has 16 taps, which means it takes 16 input samples around the read point and does 16 FMAs to produce one output sample. The passband is very flat, no HF loss. Aliasing performance is excellent. Modulation is excellent. CPU is relatively high - but here on my machine never spiked above 1%.
Read Sinc Blended: Also takes 16 input samples, but because we have a blend it performs 32 FMAs to produce one output sample. The passband is very flat, no HF loss. Aliasing performance is excellent. Modulation is excellent. CPU is relatively high - but here on my machine never spiked above 2%.
It will be interesting when I get this stuff onto an embedded platform like the DaisySeed (ARM Cortex based) and then I can see how it all works out CPU-wise.
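For completeness, the blended read is basically the nearest-row read done against two rows with a crossfade - something like this, with the same placeholder names as the sketch above:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch of a readSincBlended-style read: fetch the two neighbouring phase
// rows and crossfade the weights per tap, hence roughly twice the work.
float readSincBlendedSketch(const float* buffer, int mask,
                            const std::vector<float>& coeffs,
                            int numPhases, int numTaps,
                            int i1, float frac)
{
    const float pos   = frac * (float)numPhases;
    const int   rowA  = std::min((int)pos, numPhases - 1);
    const int   rowB  = (rowA + 1) % numPhases;   // top end wraps (edge case glossed over)
    const float blend = pos - (float)rowA;

    const float* wa = coeffs.data() + (std::size_t)rowA * (std::size_t)numTaps;
    const float* wb = coeffs.data() + (std::size_t)rowB * (std::size_t)numTaps;

    const int centre = numTaps / 2;
    float sum = 0.0f;

    for (int t = 0; t < numTaps; ++t)
    {
        const float w = wa[t] + blend * (wb[t] - wa[t]);   // crossfade the two rows
        sum += w * buffer[(i1 + t - centre) & mask];
    }
    return sum;
}
```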
-
@griffinboy said in Ring Buffer design:
That's exactly it.
And the cool thing about using a FIR interpolator like this with multiple taps is that you can SIMD it.
Processing 4 taps in one SIMD instruction cuts CPU significantly.
HISE has XSIMD included - you don't need to include it yourself, you can just use it in a C++ node.
Oh, but this technique can potentially add latency, so it's not good for certain uses where it's too fiddly to compensate latency in different places.
But it's good for delay effects and for reading samples from a buffer at a new speed.
Indeed! Indeed! I haven't got to any of this yet. Just using basic C++ loops without any SIMD or vectorization.
I'm looking at this purely for chorus or modulation effects, delays, reverbs, and sample playback.
The reason I've implemented all of these is because I want to experiment with 80's, 90's, 2000's, and modern delay and reverb approaches, and I want to embrace the "shittiness" of certain approaches, as long as they add to or enhance the character.
Pretty archetypal example for me. I've got a £100 Boss RV-5 pedal here, and a £700 Meris MercuryX pedal. Both outstanding reverbs. The RV-5 is clearly less clever under the hood. You can hear the delays in the FDN every now and then, and it doesn't sound super high fidelity. But bloody hell is it cool!!! Way better than later Boss reverbs IMHO.
-
I've never used sinc / FIR interpolators in a reverb.
I'm not sure how high the benefit would be.
For delays and synthesis it's great though when you need low aliasing and low ripple.
-
@griffinboy said in Ring Buffer design:
I've never used sinc / FIR interpolators in a reverb.
I'm not sure how high the benefit would be.
For delays and synthesis it's great though when you need low aliasing and low ripple.
A typical approach to decorrelating reverbs is to mildly modulate delay times in the FDN. But you can also do it on the input diffuser or the early reflections APF network too. So anything that affects delay time is ultimately going to make some kind of difference.
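For example, one FDN branch might modulate its read time with a slow LFO, roughly like this (a sketch only - the interpolated read itself would be one of the methods above):

```cpp
#include <cmath>

// Sketch: compute a gently LFO-modulated delay time for one FDN branch.
// The result is fractional, so it gets fed to an interpolated read
// (cubic, sinc, etc.). Names and values are illustrative only.
float modulatedDelayTime(float baseDelaySamples, float& lfoPhase,
                         float lfoRateHz, float depthSamples, float sampleRate)
{
    lfoPhase += lfoRateHz / sampleRate;   // advance the 0..1 LFO phase
    if (lfoPhase >= 1.0f)
        lfoPhase -= 1.0f;

    return baseDelaySamples + std::sin(2.0f * 3.1415926f * lfoPhase) * depthSamples;
}
```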
It'd be interesting to compare them all. Maybe that's a new thread in a few weeks or so!