Third party node modulation output slot
-
@HISEnberg Oh mate, I think this is exactly what I need! I didn't know
isModNode
existed, shame on me... So is this necessarily paired with the display buffer?
Not a huge deal, but it'd be less ugly if I only had the cable connector, since the value I'm sending is a latency compensation of my DSP to the surrounding nodes... -
@ustk sure you can just use the mod output. Just make sure to

```cpp
// if you define this, then the mod dragger will appear in your node UI.
static constexpr bool isModNode() { return true; };

// the latency is probably a non-normalised value
static constexpr bool isNormalisedModulation() { return false; };

// Let's just send it once
bool initialised = false;

bool handleModulation(double& v)
{
    if(!initialised)
    {
        v = calculateYourLatency();
        initialised = true;
        return true;
    }
    else
        return false;
}
```
-
@Christoph-Hart Yeah I ended up trying just that and it worked at first try! Amaaaazing guys
-
@ustk No, not at all, you can build it without the display buffer, that is just an option I included. Here is the most bare-bones example I can think of. This should work in Scriptnode:
```cpp
// =================================| Third Party Node Template |=================================

#pragma once
#include <JuceHeader.h>

namespace project
{
using namespace juce;
using namespace hise;
using namespace scriptnode;

// =========================| The node class with all required callbacks |=========================

template <int NV> struct mod_example: public data::base
{
    // Metadata Definitions ----------------------------------------------------------------------

    SNEX_NODE(mod_example);

    struct MetadataClass
    {
        SN_NODE_ID("mod_example");
    };

    // set to true if you want this node to have a modulation dragger
    static constexpr bool isModNode() { return true; }; // SET TO TRUE!
    static constexpr bool isPolyphonic() { return NV > 1; };
    // set to true if your node produces a tail
    static constexpr bool hasTail() { return false; };
    // set to true if your node doesn't generate sound from silence and can be suspended when the input signal is silent
    static constexpr bool isSuspendedOnSilence() { return false; };
    // Undefine this method if you want a dynamic channel count
    static constexpr int getFixChannelAmount() { return 2; };

    // Define the amount and types of external data slots you want to use
    static constexpr int NumTables = 0;
    static constexpr int NumSliderPacks = 0;
    static constexpr int NumAudioFiles = 0;
    static constexpr int NumFilters = 0;
    static constexpr int NumDisplayBuffers = 0;

    // Scriptnode Callbacks ----------------------------------------------------------------------

    float modValue = 0.0f; // STORE MOD VALUE

    void prepare(PrepareSpecs specs)
    {
    }

    void reset()
    {
        modValue = 0.0f; // RESET MOD VALUE
    }

    void handleHiseEvent(HiseEvent& e)
    {
    }

    template <typename T> void process(T& data)
    {
        static constexpr int NumChannels = getFixChannelAmount();

        // Cast the dynamic channel data to a fixed channel amount
        auto& fixData = data.template as<ProcessData<NumChannels>>();

        // Create a FrameProcessor object
        auto fd = fixData.toFrameData();

        while(fd.next())
        {
            // Forward to frame processing
            processFrame(fd.toSpan());
        }
    }

    template <typename T> void processFrame(T& data)
    {
        // EXAMPLE: JUST CALCULATE THE PEAK OF THE STEREO SIGNAL
        float left = std::abs(data[0]);
        float right = std::abs(data[1]);
        float peak = std::max(left, right);

        // STORE FOR MOD OUTPUT
        modValue = std::min(peak, 1.0f);
    }

    int handleModulation(double& value)
    {
        value = (double)modValue; // OUTPUT MOD VALUE
        return 1; // SET THIS TO 1 TO SEND THE VALUE OUT TO HISE!
    }

    void setExternalData(const ExternalData& data, int index)
    {
    }

    // Parameter Functions -----------------------------------------------------------------------

    template <int P> void setParameter(double v)
    {
    }

    void createParameters(ParameterDataList& data)
    {
    }
};
}
```
-
@HISEnberg Yeah perfect, that is what I did, just the mod output...
But you did well showing me the display buffer as well, because I will soon need it!
Thanks a lot! -
ustk marked this topic as a question
-
ustk has marked this topic as solved
-
@ustk Sending the latency out is a pretty interesting idea. I'm currently trying to figure out how to do this in one of my plugins that needs to have zero latency between the dry and wet signals. Is the idea to offset the dry signal by an amount of latency equal to what the wet signal introduces?
I am really trying to dig into this topic and find the best solutions for it. One of the biggest problems I am experiencing lately is fractional sample latency (I think the grown-ups call this phase offset).
-
@HISEnberg Yeah, this is exactly what I am doing: sending my node's latency to the dry signal through a SampleToMs converter connected to a delay.
My DSP reports its latency in whole samples, so I don't run into the fractional delay issue, since I just directly convert it to ms. In your case, you might be able to deal with the fractional delay if you directly send it in ms to a JUCE delay with Lagrange or Thiran interpolation. They are meant to handle fractional delays, unless I'm wrong...
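The sub-sample reading that Lagrange or Thiran interpolation performs can be illustrated with a minimal, self-contained fractional delay using plain linear interpolation. This is a hypothetical sketch, not HISE or JUCE code (JUCE's `juce::dsp::DelayLine` provides the higher-order `Lagrange3rd` and `Thiran` variants); it only shows the principle of reading "between" two stored samples when the delay is not a whole number:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Minimal fractional delay line using linear interpolation.
// Illustrative only: real interpolators (Lagrange, Thiran) use
// higher-order polynomials or allpass filters for better accuracy.
struct FractionalDelay
{
    explicit FractionalDelay(size_t maxSamples)
        : buffer(maxSamples + 2, 0.0f) {}

    float process(float input, float delaySamples)
    {
        buffer[writeIndex] = input;

        // Split the delay into integer and fractional parts
        const float readPos = (float)writeIndex - delaySamples;
        const int   size    = (int)buffer.size();
        const int   idx     = (int)std::floor(readPos);
        const float frac    = readPos - (float)idx;

        // Wrap both read indices into the circular buffer
        const int i0 = ((idx % size) + size) % size;
        const int i1 = (i0 + 1) % size;

        writeIndex = (writeIndex + 1) % (size_t)size;

        // Linear interpolation between the two neighbouring samples
        return buffer[i0] * (1.0f - frac) + buffer[i1] * frac;
    }

    std::vector<float> buffer;
    size_t writeIndex = 0;
};
```

Feeding an impulse through it with a delay of 0.5 samples smears the impulse across two output samples (0.5, 0.5), which is the half-sample shift; the higher-order interpolators achieve the same shift with less high-frequency damping.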
-
@ustk nice, I basically set up the same system but just using HISE's version of get/setLatencyInSamples. I'm assuming you are using the JUCE version of this in the C++ node? I believe HISE's API is exactly the same, but I could be wrong.
https://docs.juce.com/master/classjuce_1_1AudioProcessor.html#a21b7f750657b54b09066bcbf3264506d
Unfortunately I can't find any information about how to get a fractional latency sample, which kind of makes sense, all things considered. There aren't many forum posts about this topic either, much to my surprise. Maybe @griffinboy has some ideas, since I know he often sketches out his algorithms in MATLAB or Python; they may offer more robust ways of addressing this. I've just been eyeballing my results in my DAW and using Plugin Doctor, but there must be better ways of dealing with this!
-
I'm pretty sure that DAWs will not respect your fractional sample value - they just use it to offset the buffers they send you, and I don't think any DAW will interpolate this.
The fact that the JUCE method's signature is

```cpp
void juce::AudioProcessor::setLatencySamples (int newLatency)
                                              ^^^
```

should be a good hint about the industry practice here.
-
Yeah, if you're talking about phase alignment, I do it all the time.
Latency compensation is always in whole samples, though, but there's no reason you can't phase-align things fractionally: either introduce a bit more latency so the total reaches a whole number and report that whole number to the DAW, or, in cases where you're in control of both things you want to align, just fractionally delay one of them until it matches the other.
A DAW won't respect fractional latency, though, if you're trying to get the DAW to compensate. That's why lots of plugins have internal mixers, so that they can phase-align dry and wet for example (by delaying the dry path so that it will be in phase with the wet).
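The "introduce more latency to reach a whole number" approach above boils down to a little arithmetic: round the total up, report the integer to the host, and absorb the remainder internally. A minimal sketch (`LatencySplit` and `alignLatency` are made-up names for illustration, not HISE or JUCE API):

```cpp
#include <cassert>
#include <cmath>

// Split a fractional latency (e.g. 12.3 samples) into the whole-sample
// amount to report to the host and the extra fractional delay to apply
// internally, so the total path latency lands on a whole number.
struct LatencySplit
{
    int   reportedSamples;   // e.g. pass this to setLatencySamples()
    float internalFraction;  // apply this with a fractional delay line
};

inline LatencySplit alignLatency(float fractionalLatency)
{
    const float whole = std::ceil(fractionalLatency);
    return { (int)whole, whole - fractionalLatency };
}
```

For example, a wet path with 12.3 samples of latency would report 13 samples to the host and absorb the remaining 0.7 samples with an internal fractional delay, so dry and wet stay phase-aligned while the DAW compensates in whole samples.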
-
@HISEnberg said in Third party node modulation output slot:
@ustk nice I basically setup the same system but just using HISE’s version of get/setLatencyInSamples. I’m assuming you are using the JUCE version of this in the C++ node? I believe HISEs API is exactly the same but I could be wrong.
Well, my use case is different: I just compute my inner DSP's latency for a dry/wet mix situation in a split node, not reporting the whole latency of my project to the DAW...
Regarding what @Christoph-Hart and @griffinboy are saying, could oversampling help here?
Despite the fact it'll eat up some more CPU, of course...