Best posts made by Orvillain
-
RE: Need filmstrip animations
@d-healey I really like that UI. Very simple, accessible, and smooth looking - for lack of a better word!
-
RE: Orv's ScriptNode+SNEX Journey
Lesson 5 - SNEX code in a bit more detail.
So I'm by no means an expert in C or C++ - in fact, I only recently started learning it. But here's what I've sussed out about the HISE template.... and template is exactly the right word, because the first line is:
template <int NV> struct audio_loader
Somewhere under the hood, HISE is set up to pass an integer into any SNEX node - NV stands for the number of voices the node has to handle.
The line above declares a template that takes this NV integer and stamps out an audio_loader struct for whatever value HISE supplies. We can see the per-voice behaviour by running the following code:
template <int NV> struct audio_loader
{
    SNEX_NODE(audio_loader);

    ExternalData data;
    double note = 0.0;

    // Initialise the processing specs here
    void prepare(PrepareSpecs ps)
    {
    }

    // Reset the processing pipeline here
    void reset()
    {
    }

    // Process the signal here
    template <typename ProcessDataType> void process(ProcessDataType& data)
    {
    }

    // Process the signal as frame here
    template <int C> void processFrame(span<float, C>& data)
    {
    }

    // Process the MIDI events here
    void handleHiseEvent(HiseEvent& e)
    {
        // assign to the member (no 'double' here, or you'd just create a shadowing local)
        note = e.getNoteNumber();
        Console.print(note);
    }

    // Use this function to setup the external data
    void setExternalData(const ExternalData& d, int index)
    {
        data = d;
    }

    // Set the parameters here
    template <int P> void setParameter(double v)
    {
    }
};
There are only three things happening here:
- We set the ExternalData as in a previous post.
- We declare a double variable called 'note' and initialise it to 0.0. But that initial value won't stick around, because....
- In the handleHiseEvent() method, we take e.getNoteNumber(), assign it to the note variable, and then print the note variable out from inside handleHiseEvent().
Now when we run this script, any time we play a MIDI note, the console shows us the note number we pressed. This holds even when you play chords, or when no note-off events occur.
That's a long-winded way of saying that a SNEX node runs for each active voice; at least when it sits inside a ScriptNode Synthesiser DSP network.
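One caveat I picked up from the SNEX docs while testing this: the callbacks run per voice, but a plain member variable like note is shared across voices. For genuinely per-voice state, SNEX provides the PolyData container. Here's a minimal sketch - lastNote is my own name, while PolyData, prepare and get come from the docs:

PolyData<double, NV> lastNote;

void prepare(PrepareSpecs ps)
{
    // PolyData uses the specs to allocate one slot per voice
    lastNote.prepare(ps);
}

void reset()
{
    // get() resolves to the slot of the voice currently being processed
    lastNote.get() = 0.0;
}

void handleHiseEvent(HiseEvent& e)
{
    lastNote.get() = e.getNoteNumber();
}

Anyway, back to the template.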
The next line in the script after the template is established is:
SNEX_NODE(audio_loader);
This is pretty straightforward. The text you pass here has to match the name of the script loaded inside your SNEX node - not the name of the SNEX node itself.
Here you can see my SNEX node is just called: snex_node.
But the script loaded into it is called audio_loader, and so the reference to SNEX_NODE inside the script has to also reference audio_loader.
-
RE: Orv's ScriptNode+SNEX Journey
Lesson 4 - SNEX Node layout.
I'm still wrapping my head around how the SNEX node works.
The first thing to note is that SNEX code does not support strings. The HISE documentation does make this clear, but if you haven't seen it yet... well, now you've been told twice! Here's the docs link:
https://docs.hise.audio/scriptnode/manual/snex.html#getting-started
As the docs say:
The Scriptnode Expression Language (SNEX) is a simplified subset of the C language family and is used throughout scriptnode for customization behaviour.
Which means that most of the syntax you're used to when writing interface scripts is just not going to be the same. There are some overlaps, however - Console.print() is still available in SNEX scripts. But print messages only get logged to the console when you put the SNEX node into debug mode, which you can do by clicking this button:
From what I can tell, by default we have the following methods:
- prepare
- reset
- process
- processFrame
- handleHiseEvent
- setExternalData
- setParameter
Each one of these methods has a purpose. I'm still experimenting to figure out what those are, but here's what I've come up with so far:
- prepare
This is called when you compile or initialise your SNEX node, and it seems to run for each audio channel. I would guess it's meant for setting up global parameters like sample rate and block size - things that do not change from voice to voice (see the sketch after this list).
- reset
This is called when you trigger a voice, in my case from MIDI. When using a ScriptNode Synthesiser, the MIDI passes into the node automatically. This is where you would initialise variables that can hold different values from voice to voice, but that must start out with the same default value each time.
- process
Haven't quite figured this one out yet.
- processFrame
Haven't quite figured this one out yet.
- handleHiseEvent
This is called when you trigger a HiseEvent - typically a MIDI event. This is where you would parse out your MIDI notes, velocities, controllers, and program changes; any MIDI data really.
- setExternalData
This is called whenever there is a change to the external data. In our case, that would be the AudioFile we added in previous steps. For example, if you went to the complex data editor for the External AudioFile Slot (in the node editor) and loaded a new file, this method would get called. This is where you would resize any arrays you're using to store the sample data.
- setParameter
This is called whenever a parameter on the SNEX node is adjusted. You can parse the parameters out with if statements, checking P against 0, 1, 2, 3, etc., depending on how many parameters you actually have - again, see the sketch below.
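Here's the sketch promised above - a minimal illustration of prepare and setParameter together. The member names (sampleRate, gain, pitch) are invented for the example; ps.sampleRate and the compile-time check on P are from the SNEX template:

double sampleRate = 44100.0;
double gain = 1.0;
double pitch = 0.0;

// Runs on compile/init - store the specs that don't change per voice
void prepare(PrepareSpecs ps)
{
    sampleRate = ps.sampleRate;
}

// P is a compile-time index: one branch per parameter on the node
template <int P> void setParameter(double v)
{
    if (P == 0)
        gain = v;
    else if (P == 1)
        pitch = v;
}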
SNEX has some built-in types and variable names, most of which I don't know yet. But a valuable one is "ExternalData". Consider this code:
template <int NV> struct audio_loader
{
    SNEX_NODE(audio_loader);

    ExternalData data;

    // Initialise the processing specs here
    void prepare(PrepareSpecs ps)
    {
    }

    // Reset the processing pipeline here
    void reset()
    {
    }

    // Process the signal here
    template <typename ProcessDataType> void process(ProcessDataType& data)
    {
    }

    // Process the signal as frame here
    template <int C> void processFrame(span<float, C>& data)
    {
    }

    // Process the MIDI events here
    void handleHiseEvent(HiseEvent& e)
    {
    }

    // Use this function to setup the external data
    void setExternalData(const ExternalData& d, int index)
    {
        data = d;
    }

    // Set the parameters here
    template <int P> void setParameter(double v)
    {
    }
};
Most of it doesn't do anything yet. But we have declared an ExternalData member called data. We can also see this in the data table view:
Notice how ExternalData is a Data Type, and it is named data. Also notice how it has a variety of sub attributes - dataType, numSamples, numChannels, etc.
Let's swap out the file loaded in the AudioFile editor:
Notice how numSamples has updated, and also numChannels.
Back to the code:
// Use this function to setup the external data
void setExternalData(const ExternalData& d, int index)
{
    data = d;
}

// Set the parameters here
template <int P> void setParameter(double v)
{
}
The data variable we declared as ExternalData at the top of the script now actually has the data pushed into it by the setExternalData method - which takes two arguments: "d" and "index".
This shows the very very basics of getting sample data into a SNEX script. But we're still not doing anything with it yet.
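As a taste of where this goes next, here's roughly how you'd start reading that data - a sketch based on the one-shot player snippet. referBlockTo is the documented accessor; sampleData is my own name:

ExternalData data;
block sampleData;

// Use this function to setup the external data
void setExternalData(const ExternalData& d, int index)
{
    data = d;

    // Point the block at channel 0 of the newly loaded file;
    // sampleData[i] then reads individual samples
    data.referBlockTo(sampleData, 0);
}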
-
RE: Orv's ScriptNode+SNEX Journey
@d-healey I don't mean to be rude, but please don't distract from the purpose of this thread. Beautiful code and efficient code isn't the point here.
The point is to demonstrate how the API works, and for there to be a resource for people who come along in the future looking to do sample loading from their scripts, and looking to do advanced things in ScriptNode or SNEX.
I know full well that in a real-world scenario you wouldn't specify a bunch of files each as an individual const; you'd store them as key-value pairs inside an array, or perhaps use a function acting as a meta-object.
-
RE: Orv's ScriptNode+SNEX Journey
Lesson 3 - using the SNEX node with sample content.
This one is something I'm still getting my head around. @Christoph-Hart kindly provided a one shot SNEX node demo, which you can find by going to the example snippet browser in the Help menu:
This will open a whole new window where you can experiment with snippets. Maybe I'll go over the specific snippet in another post, but for this one... we're starting fresh, and we're going to just do the basics.
So.... here we have a basic interface script that sets up some file const references and retrieves the AudioSampleProcessor from a Scriptnode Synthesiser in our module tree. That synthesiser has a DspNetwork assigned to it:
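Since the script only appears in the screenshot, here's roughly what it looks like - the paths and the module ID are hypothetical placeholders, but getAudioSampleProcessor and setFile are the real calls:

// File references - note the forward slashes (see Lesson 2)
const var file_f = "{PROJECT_FOLDER}AudioFiles/file_f.wav";
const var file_r = "{PROJECT_FOLDER}AudioFiles/file_r.wav";

// Grab the sample slot from the Scriptnode Synthesiser in the module tree
const var asp = Synth.getAudioSampleProcessor("Scriptnode Synthesiser1");

// Load one of the files into it
asp.setFile(file_r);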
Right now, if we run the code... it will fail to find the AudioSampleProcessor, as explained in Lesson 1. Let's add a SNEX node:
When you do this, it will be blank. You will need to click the three dot menu icon and choose "create new file" - strangely enough, you have to do this even when creating an embedded network. But fine. Let's do it:
We need to give it a name:
At this point, the node becomes active, indicated by the green highlight text:
Now if you open the same menu, you get more options:
We're going to select 'Add AudioFile':
You can see that now there is an extra icon in the SNEX node, which opens the AudioFile "Complex Data Editor" panel.
We can add an External AudioFile Slot using the icon on the right hand side:
And now you can see that the data editor will display whatever sample you assign to that AudioSampleProcessor from your script:
So here we see that file_r has been loaded into the buffer, and if we wanted to do file_f instead we could change the code to do that:
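In code terms, that swap is a single line (reusing the asp reference and file consts from the sketch above):

asp.setFile(file_f);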
Note - you will not be able to play back the audio at this stage, as your SNEX code will be completely empty. If you click the icon on the right side, it will open the code editor for this SNEX node:
So whilst the data is loaded, our code isn't doing anything with it.
-
RE: Orv's ScriptNode+SNEX Journey
Lesson 2: loading samples.
Loading samples into an AudioSampleProcessor can be done by running a .setFile() call on the retrieved AudioSampleProcessor object.
However, on Windows, it is very easy to make this crash.
Consider the above image. The single backslashes in files a-r will cause HISE to crash to the desktop if you try to load any of those files into the AudioSampleProcessor.
file_s has double (escaped) backslashes, and this does not crash.
Another way to ensure the file loads is to just use forward slashes:
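To summarise the three cases in code (the paths here are hypothetical):

// Crashes HISE on Windows - single backslashes are treated as escape characters:
// const var file_a = "C:\Samples\kick.wav";

// Safe - double backslashes survive as literal path separators:
const var file_s = "C:\\Samples\\kick.wav";

// Also safe - Windows is happy with forward slashes:
const var file_t = "C:/Samples/kick.wav";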
-
Orv's ScriptNode+SNEX Journey
I've gotta start documenting some of my experiments, tests, failures, and successes with ScriptNode and SNEX, and with interfacing between them and the HiseScript interface side of things.
So I thought I'd start a thread on it, so hopefully people can learn from my idiocy, and I can have something to reflect back on when I'm stuck.
So.... lesson 1: Synth.getAudioSampleProcessor.
This is a useful method for accessing the sample slot of a module, and generally, if you're using the built-in sampler or audio player modules, you won't run into the issue I'm about to describe. But I just did, and it stumped me for ages, because of a misleading error message.
So here it is - if you are trying to do this with a Scriptnode Synthesiser, you need to be sure that your Scriptnode Synthesiser meets these conditions:
- It must have a node in it that contains some form of AudioFile: file_player, stretch_player, granulator, or the snex_node nodes.
- Your node from this list must be set to use an External AudioFile Slot.
If these conditions aren't met, then when you try to make a call to Synth.getAudioSampleProcessor, the console will return the following error:
Interface:! Line 3, column 42: Scriptnode Synthesiser1 was not found. {SW50ZXJmYWNlfG9uSW5pdCgpfDgwfDN8NDI=}
Master Chain:! Line 3, column 42: Scriptnode Synthesiser1 was not found. {SW50ZXJmYWNlfG9uSW5pdCgpfDgwfDN8NDI=}
This error isn't actually that useful. In my opinion, it really should say that the AudioSampleProcessor was not found.
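For reference, the call that produces that error is simply this (the module ID matches the one named in the error):

const var asp = Synth.getAudioSampleProcessor("Scriptnode Synthesiser1");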
Here is a screenshot of a scenario where you could get this:
You'll notice that I have a Scriptnode Synthesiser in my module list, and it has a DspNetwork assigned to it. However the network is completely empty.
Now if I add a file_player node:
We still get the same error, because the file_player node does not have an external file slot added to it.
If we load a file into the embedded slot (bad practice imho), this also makes no difference:
We still cannot resolve the AudioSampleProcessor.
Finally... if we add an external slot, even when it is empty, we get the AudioSampleProcessor reference that we were looking for:
Something to watch out for.
-
RE: How do I get started with Scriptnode and building synthesisers/samplers?
@clevername27 said in How do I get started with Scriptnode and building synthesisers/samplers?:
@Orvillain Subscribe to @d-healey's Patreon. Also, read @Christoph-Hart's introductions/explanations of HISE, scripting and ScriptNode in the documentation. Learning the right way to do things will save you much trouble later on. His discussion is concise and thorough. Lastly, look at the Snippets and tutorial projects.
Yeah, have read all that. Been digging into the specifics in the documentation, but sometimes it is a little lacking. Have also dug out plenty of snippets and gone through the example projects too. HISE is pretty great!
-
How do I get started with Scriptnode and building synthesisers/samplers?
So... this is all new to me, and the documentation is pretty scant unfortunately.
So I was imagining the first thing I could do as a Hello World would be to throw in a sine-tone oscillator and an envelope, and I'd get a polyphonic sine-tone synth that would let me set the attack and decay.
So I did that, and I noticed that as soon as I release a MIDI key, the oscillator stops producing sound. Same story with a file player.
I kinda wasn't expecting that. A helping/guiding hand would be appreciated - are there any videos specifically targeted at building synths rather than effects?
-
RE: A learn when using the Engine.Builder functionality
@aaronventure Yeah, the builder builds the module tree, and you get that tree in the plugin once it is exported. But any data you derive from the builder calls won't get initialised in the compiled plugin, which is to be expected. I'd just overlooked it.
For example let's say I do:
const builder = Synth.createBuilder();
const moduleReferences = {};

inline function buildTree()
{
    local index = builder.create(builder.Modulators.AHDSR,    // the module type
                                 "GainAHDSR",                 // the ID
                                 sampler,                     // the parent module
                                 builder.ChainIndexes.Gain);  // the slot type

    // An AHDSR is a modulator, so getModulator (not getEffect) is the right lookup
    moduleReferences[index] = Synth.getModulator("GainAHDSR");
}
A fairly contrived example... but the code above should work and inside HISE when developing, it will make the AHDSR, return the module tree index for it, and then store an object reference for it in the top level moduleReferences dictionary.
But in the plugin, since the create() method cannot run, we don't get an index, which means the moduleReferences insertion doesn't resolve, which means anything relying on that data further downstream, will not work properly.
In my scenario, it was all my broadcasters that broke. The samplers worked and played back, but the broadcasters I had set up to control module parameters from UI controls were completely broken, because the reference dictionaries I had built for their targets were empty. And they were empty because I was populating them within my builder calls.
-
A learn when using the Engine.Builder functionality
So.... I did know that compiled plugins don't have access to the builder... but I didn't really absorb it.
I wrote a function that would create some containers, then store the index for the container in a global array.
It worked inside Hise when developing, but it didn't work inside the compiled plugin. Why? Because the builder.create call I was using doesn't run, so I don't get an index, so the global array is empty.
So, if you use the builder at all... make sure to separate your concerns. Only use the builder for constructing the module tree, and don't expect to store the index returned by a builder.create() call and use it elsewhere across your plugin.
Instead, once you've created everything, write an additional function that performs whatever Synth.get call you need to get a reference to the created module.
I spent a chunk of today refactoring my code into two namespaces - TreeBuilder, and TreeConfigurator. The first just creates the modules I need according to a JSON object created earlier. The TreeConfigurator does all the hard work by getting references to modules, objects, and parameters, and storing them for future access.
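Here's a minimal sketch of that split, reusing the AHDSR example from the previous post. The function and dictionary names are mine, and I'm keying the reference by ID rather than by build index, so nothing depends on create() returning anything:

const var moduleReferences = {};
const builder = Synth.createBuilder();

// TreeBuilder territory: only does anything while developing inside HISE
inline function buildTree()
{
    builder.create(builder.Modulators.AHDSR,    // the module type
                   "GainAHDSR",                 // the ID
                   0,                           // parent build index (illustrative)
                   builder.ChainIndexes.Gain);  // the slot type

    builder.flush(); // commit the changes to the module tree
}

// TreeConfigurator territory: works in HISE *and* in the exported plugin,
// because the module exists in the compiled tree either way
inline function configureTree()
{
    moduleReferences["GainAHDSR"] = Synth.getModulator("GainAHDSR");
}

buildTree();
configureTree();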
The More You Know.
-
RE: Is blending between impulse responses possible?
@aaronventure said in Is blending between impulse responses possible?:
@Orvillain The CPU load scales (almost) linearly with IR length, so it all depends on that. Infinite Brass in Kontakt works with up to 3x convolution, the IRs of the biggest room are 3 seconds long or so (for an RT60 of around 1.5s),
In late 2018 when it launched, new CPUs could easily take it all on at low buffer sizes. Today, CPUs are almost 3x as fast.
I guess I just need to try it then!
-
RE: Is blending between impulse responses possible?
@aaronventure Yes, agreed. So I guess the question is.... how many IRs is it realistic to load up in HISE at once? Convolution is quite expensive, isn't it??
-
RE: Is blending between impulse responses possible?
I was thinking something along these lines:
https://youtu.be/ndRMtqJLNWg
As I understand it, these "drag a microphone across the front of a speaker graphic" interfaces are pretty much just a nicer way to browse a list of IRs. I think they're interpolating across a set of IRs that have been named properly so they can be laid out in a grid-like fashion, to represent a real speaker+microphone setup. I think they use the Dynamount devices to capture the IRs.
This is the kind of thing I was thinking of building. So far as I know, they phase-align the IRs to ensure that all positions are equally "usable". If you listen to some of the audio there, you can hear that it isn't exactly true to life, but it's good enough in most cases.
-
RE: macOS / GIT
Defo Fork. GitHub and SourceTree are pretty poor performance-wise.
-
RE: Is there any way to disable routing matrix reset behaviour when loading a sample-map or when creating a multi-mic set of samples?
Blooming blooms of bloomington! That was a lot easier than I thought it was going to be! More or less, the example in the documentation was all I needed. Here's the code for setting up the broadcaster; this happens in the main interface's onInit function.
const var b = Engine.createBroadcaster({
    id: "sampleListener",
    args: ["eventType", "samplerId", "data"]
});

b.attachToSampleMap("Kick_sampler", "SampleMapChanged", "");

b.addListener("", "funky", function(eventType, samplerId, data)
{
    build_sampler_routing(samplerId);
});
This triggers a function called build_sampler_routing, which takes the samplerId in as an argument:
inline function build_sampler_routing(sampler_id)
{
    local parts = sampler_id.split('_');
    local slot_name = parts[0];
    local routing_matrix = slots[slot_name]['sampler_routing_matrix'];

    routing_matrix.setNumChannels(sampler_channel_count);

    local slot_default_routing_data = default_channel_data[slot_name];

    for (channel in slot_default_routing_data)
    {
        local channel_data = slot_default_routing_data[channel];

        routing_matrix.addConnection(channel_data['source_idx'][0],
                                     channel_data['sampler_output_idx'][0]);

        if (channel_data['source_idx'].length > 1 && channel_data['sampler_output_idx'].length > 1)
        {
            routing_matrix.addConnection(channel_data['source_idx'][1],
                                         channel_data['sampler_output_idx'][1]);
        }
    }
}
That's how I needed it to work for my script anyway. Your mileage may blooming well vary.
Also shout out to @d-healey - signed up to your patreon guv, and learnt about broadcasters from your video. Would definitely love to see more there. Seems like broadcasters can handle a lot of the heavy lifting when it comes to communicating between different parts of a plugin.
-
RE: Simple ML neural network
I'd love to implement Neural Amp Modeller into HISE.
What are the chances of that???
-
RE: How do you bus multiple sampler outputs together?
@d-healey said in How do you bus multiple sampler outputs together?:
@Orvillain Check out my mic mixer video
Ahhhhhhhh. Got it. Okay, I get it now. Nice one.
-
Sampler - what happens when you use a mixture of mono and stereo samples with multi-mic mapping?
What happens when you use a mixture of mono and stereo samples as a multi-mic setup?? I just did a test with 5 mono samples, and I noticed that 10 audio channels got mapped when I performed the multi-mic mapping operation. Which seemed a bit odd - I was expecting 5 channels, but I got 10.
I'm guessing it was duplicating each mono channel into stereo, which is why I ended up with 10 - but then what would happen if I had a mixture of mono and stereo?
I'm thinking something like a mono microphone on a saxophone, and then room left and right microphones, mixed down to a stereo file.
Would this result in 1+2 being a duplicate of the mono microphone, and then 3+4 being the left and right of the room microphones?