Smoother Dynamic Crossfades
-
Is there anything built into HISE that can help with the dreaded phasing associated with dynamic layer crossfading? So far I've tried increasing the smoothing value of the CC modulator and I've been playing around with LP filters, but the results aren't satisfactory. I think this issue is inherent to all samplers, but I was just wondering if HISE has any tricks up its sleeve for dealing with this that I've overlooked?
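For anyone unfamiliar with the problem: the phasing comes from the two layers partially cancelling during the crossfade. A quick sketch (made-up frequency and phase offset, just for illustration) shows the level collapse when two nearly out-of-phase layers are mixed at equal gain:

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
f = 220.0  # assumed fundamental shared by both layers

soft = np.sin(2 * np.pi * f * t)                # "pp" layer
loud = np.sin(2 * np.pi * f * t + 0.9 * np.pi)  # "ff" layer, nearly out of phase

# Equal-gain crossfade at its midpoint: both layers at 0.5
mix = 0.5 * soft + 0.5 * loud

print(np.sqrt(np.mean(soft ** 2)))  # ~0.707 (normal sine RMS)
print(np.sqrt(np.mean(mix ** 2)))   # ~0.11 — the layers cancel each other
```

With real multisamples the cancellation varies per harmonic and drifts over time, which is why it sounds like comb filtering / flanging rather than a simple volume dip.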
-
HISE does not have any phase-alignment technology built in. I talked about this subject with Elan Hickler a while back, who came up with a very neat algorithm that can phase-lock almost any sample, and I was planning to add some special features, like correcting the start offset of e.g. a legato sample to the next cycle to allow seamless blending.
It got a bit quiet between us though, and it's currently not on my priority list (Elan is also working on his own stuff), but if you poke him a bit, maybe we can resurrect the project.
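The start-offset correction mentioned above can be sketched roughly like this (this is an assumed illustration, not HISE's or Elan's actual algorithm; `snap_to_zero_crossing` is a hypothetical helper):

```python
import numpy as np

def snap_to_zero_crossing(sample, start):
    """Move `start` forward to the next rising zero crossing.

    Hypothetical sketch: if every layer starts on a rising zero
    crossing of its fundamental cycle, the layers begin roughly
    in phase, which is the idea behind correcting a legato
    sample's start offset to the next cycle.
    """
    for i in range(start, len(sample) - 1):
        if sample[i] <= 0.0 < sample[i + 1]:
            return i + 1
    return start  # no crossing found; leave the offset unchanged

# Usage: snap an arbitrary-phase layer to a cycle start
sr = 48000
t = np.arange(sr) / sr
layer = np.sin(2 * np.pi * 220.0 * t + 1.3)  # arbitrary recorded phase
s = snap_to_zero_crossing(layer, 100)
print(layer[s])  # very close to zero, on a rising slope
```

A real implementation would also need to track the fundamental (it drifts with vibrato), but the cycle-snapping idea is the core of it.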
In the meantime, have you checked out the Harmonic Filter? It's far from actual phase alignment, but it's basically a set of filters that are automatically tuned to the harmonics of each note. You can define two stages (A and B) and morph between them with any modulator. This can be used to recreate a different harmonic structure, which can create a poor man's dynamics effect:
HiseSnippet 954.3oc6W0saSjCE1SRs1VAKKHpzhzdybAHsRULJSZn.RqD4mlTp1V5PmrcoWAty3zXvicjGOAx9Rf3IgWF1q1a3QXeC10d7LLSJgRSD+K7EQ47c74Le9brO9XOAO.GGyE.qy0exHLv57P+IL4vNCQDFX6MAVW.tKJVhE1Fn1SFghiwg.KqpaoArVYIP53euSaDEwBvEP.vAbR.dGRDQVf9nl+NgR6gBw8IQklcilaGvYc3TdhhOUg0.iPAOAcL9dH8zp.AiI3mFCrpAWuti3ItG8WsbTiaxQa8GC2.63L9tNN60ea4Zo3lQaGGPPhPfYxCTlCrfV+mZXA6FRjbguDIwJetTad3D+g7mxLe5CHwjinXsfKvWwICbONMTu30+GzYHgF5kGBiAJe6UDPqZBnqB2kDRdCdQf8hoJrKrnbn0pxoQO2xzq1Lom6LnGnD6VxvNEI3gITjbZloyrYJH7oS5ayjXVLQNobl+yJcuDziHCFNa9VYF7Ukn9Xy2rj+OB6NX.NPVP1kf8dvhlocWHpbICUNG7OQiwoGtSIxpoxC3hH6svLrPGSeWmteze+hW9g+z8dAREC5KPr3Q7XraYOeBc0KqaSrLgM8zMPSMK8pqmZ0UZdK2LGrdYPODaJmojmxS6RdVg3quiU0EJgM8QpRy+6kQdmkQpd1n6LhfVk33OjyQeRzHJtKaLlxUHlZGahGfRnx2f9dIYKoTcsTAxpM2GSwn3R60uZycHLLRnV63Eb84d1WemVN3JPCcs0GFs+1njd0u4Ko+dS4qjG6tKRDwYjfdDppGsrXWNncFZYtcujHCZaDKLtbYv62CEHScdFzu1zGqJsyYE0gKGv2AOP5SIgXgmZC1lHIBX8KP2MzMcMtktCrsDGROruG8PZMmNf8IGO7TLXuI6Lo6iOj139i0cs0pEnifGGOPcQxAHZR9m1l7p+4yv4nKCev002o8w5HzBtOXYCk+IXezQmnx1EMX1Kbgse64ucgsFMMSJ00lL3Mfqugyziw602qU207TYz06jJq9YvZZU21HCx7bI+bMX8FEt31mvtL+B9vk4qbFy7+bVEzutJfdEXdD9qJZqal4YeAP284IRB63cQRgt4Onppoupu0.rhpLFlFqeFYEf1zT4ZZYMg7wrvZ4uwLSoqV1JSoatxOIeiHTff+PUe2RAmp20ubJhZcyReS8JpG2qjscAiM0X0wlHU+jOLHPew10UQmYaQ841h0maKZL2Vbi41hMlaKt4bawsNEKzMu2JQxiL62Af+2AmnID
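For illustration, the tuning and morphing logic of such a harmonic filter could look roughly like this (an assumed sketch, not the actual HISE code; the gain tables are made up):

```python
import numpy as np

def harmonic_filter_gains(midi_note, gains_a, gains_b, morph):
    """Sketch of the Harmonic Filter idea: one peak filter per
    harmonic of the played note, with per-harmonic gains morphed
    between stage A and stage B by a modulator value in [0, 1]."""
    f0 = 440.0 * 2 ** ((midi_note - 69) / 12)      # fundamental of the note
    freqs = f0 * np.arange(1, len(gains_a) + 1)    # filter center frequencies
    gains = (1 - morph) * np.asarray(gains_a) + morph * np.asarray(gains_b)
    return freqs, gains

# Stage A favours low harmonics (soft timbre), stage B the upper ones (bright)
a = [1.0, 0.8, 0.4, 0.2, 0.1, 0.05]
b = [0.3, 0.5, 0.8, 1.0, 0.9, 0.8]
freqs, gains = harmonic_filter_gains(60, a, b, morph=0.5)  # middle C, halfway
print(freqs[0])  # ~261.6 Hz
```

Because the filters follow the note's harmonics, the morph reshapes the spectrum without mixing two out-of-phase recordings, which is why it avoids the crossfade comb filtering.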
-
I had a little play with it but didn't really understand what I was doing. I'll try out your example and explore it some more :)
-
Do you think it would be possible to implement some kind of bit-for-bit morphing feature, in a similar way to how image morphs work? And would this offer a smoother alternative to traditional crossfading techniques without needing the samples to be phase aligned?
-
I suppose samples should be phase-aligned even WITH such technology, just for the performance. If you remember AET (which sucks), it eats a lot :)
What's more, the behaviour is more predictable when you know exactly when the morph will come (I mean, the biggest problems come from broken vibrato between samples). If the samples are aligned in the host and start at the same moment, it produces the best fades. I recorded 3 notes sung by my wife and was lazy while making the attack samples, and now there are lots of sound problems :)
-
I don't mean an EQ morph like the AET I mean a sonic audio morph:
http://www.zynaptiq.com/morph/
http://www.hakenaudio.com/RealTimeMorph/
http://www.cerlsoundgroup.org/Kelly/soundmorphing.html
https://www.dynamictonality.com/spectools.htm
http://www.tandfonline.com/doi/pdf/10.1080/23311835.2015.1102116
-
I suppose they all work using an FFT spectrum-matching algorithm (which is basically the same thing as AET, but maybe with a better implementation)?
-
I'm not sure that's how it's done; with image or video morphs they use interpolation. I suppose the equivalent in audio would be to take two samples and interpolate the frequency and amplitude over time, creating a nice smooth morph rather than just a frequency or amplitude crossfade. I came across a really good example a while ago, but typically I can't find it now!
Edit: Even more typically I found the link after making this post :) - http://cmc.music.columbia.edu/musicandcomputers/chapter5/05_06.php
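The interpolation idea can be sketched with a single matched partial (frequencies and amplitudes are made up; a real morph would analyse and match many partials per layer, as in the sinusoidal-modelling approaches linked above):

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr   # one second of output
m = t                    # morph control ramps 0 -> 1 over that second

# One matched partial (frequency, amplitude) for the soft and loud layers.
f_soft, a_soft = 220.0, 0.9
f_loud, a_loud = 221.5, 0.6  # slightly detuned, as real samples always are

# Interpolate frequency and amplitude instead of crossfading waveforms.
f = (1 - m) * f_soft + m * f_loud
a = (1 - m) * a_soft + m * a_loud
phase = 2 * np.pi * np.cumsum(f) / sr  # integrate frequency to get phase
morphed = a * np.sin(phase)
```

Because only one sinusoid exists at any instant, its level follows the interpolated amplitude smoothly; there is no second out-of-phase copy to cancel against, so the comb-filter dips of a plain crossfade never occur.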
-
Christoph, I just came across this. Could something like this be implemented in HISE to morph between dynamics - http://spectmorph.org/ ?
-
Sounds interesting, I'll take a look. I don't know if it is suitable for real-time processing, but maybe you can use it to convert your samples to multi-velocity wavetables and import them into the wavetable synth in HISE.
-
Seems to be implemented as a VST plugin, so it might work in real time.
-
Yes, it looks like it's real time. If it isn't, then the analysis must be done beforehand so that it can be morphed in real time (which would also work for dynamics or other morphing effects). Sample libraries with morphing like this would be a game changer.
-
Another possibility - http://www.cerlsoundgroup.org/Kelly/soundmorphing.html