Low-Level Event / Voice Handling Functionality
-
In Kontakt, access to active events and voices is limited without writing custom KSP, and custom KSP will never be as efficient as having low-level functions available. For example, Kontakt provides the variable $ALL_EVENTS so that you can readily kill all active notes. But let's say you want to view and edit all events playing on MIDI note 87; this is not possible without writing custom code.
So here's what I would like to see with this sampler…
1. A function that returns all active (alive) events for a given MIDI note. For example, if events A, B, C, D, and E are active on MIDI note 44, I want to be able to retrieve those events with a single function so I can process them.
2. Functions that allow me to change the volume, tuning, and panning of a single event OR a group of events, and likewise the ability to fade in, fade out, or kill single AND groups of events. Kontakt has useful functions for changing these things for individual events, but in order to affect a group, you need to create your own array and figure out a way to populate it. I'd like functions that can affect groups just as easily, for example:
changeEventVolume(EventID,VolumeAdjustment,OtherArguments)
This might affect a single event, but I would love to be able to ALSO do:
changeEventVolume(getEvents(MIDINote),VolumeAdjustment,OtherArguments)
This way if I wanted to affect all events on a certain MIDI note, I could just retrieve them (feature #1) and process them elegantly.
3. Some way to associate events in a group based on a given callback. For example, I have a Kontakt library which produces anywhere from 1 to 8 separate events when a single key is pressed. The reason for using separate events is because each event might use different groups and offsets at different volume levels, or with various modifications. Handling these events in Kontakt is a huge pain though because Kontakt does not naturally group them, even though they were triggered within the same callback.
What would be great then is to have something like a 'callback ID' separate from the generated event ID of the notes. The generated event IDs would then automatically be associated with the callback ID. Much like my request for #1, these groups of events should be tracked and available for me to process. For example... let's say I have a guitar library which generates 3 events on a single key press: one event for the pick sound, one for a sustain note, and one for a vibrato event. These are separate events because I may want to process them separately. But I may also want to swiftly kill all 3 events at once based on the callback ID.
Hopefully this makes sense - this is an area where Kontakt is lacking and where I always have to write custom handlers, groups, arrays, etc. and it's a big pain.
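To make the three requests concrete, here is a minimal sketch of what such an event-tracking layer could look like. This is plain JavaScript for illustration only; the names (getEvents, changeEventVolume, the callback ID) are hypothetical and do not correspond to any existing sampler API:

```javascript
// Hypothetical event registry: tracks active events per MIDI note and
// per triggering callback, so groups can be processed with one call.
class EventRegistry {
  constructor() {
    this.byNote = new Map();      // noteNumber -> Set of event objects
    this.byCallback = new Map();  // callbackId -> Set of event objects
    this.nextEventId = 0;
  }

  // Register a new event; callbackId groups events spawned by one key press.
  addEvent(noteNumber, callbackId) {
    const ev = { id: this.nextEventId++, noteNumber, callbackId, volume: 0.0, alive: true };
    if (!this.byNote.has(noteNumber)) this.byNote.set(noteNumber, new Set());
    this.byNote.get(noteNumber).add(ev);
    if (!this.byCallback.has(callbackId)) this.byCallback.set(callbackId, new Set());
    this.byCallback.get(callbackId).add(ev);
    return ev;
  }

  // Request 1: all alive events on a given MIDI note.
  getEvents(noteNumber) {
    return [...(this.byNote.get(noteNumber) ?? [])].filter(ev => ev.alive);
  }

  // Request 3: all alive events spawned in the same callback.
  getEventsForCallback(callbackId) {
    return [...(this.byCallback.get(callbackId) ?? [])].filter(ev => ev.alive);
  }
}

// Request 2: one function that accepts a single event or a group of events.
function changeEventVolume(events, volumeAdjustment) {
  for (const ev of Array.isArray(events) ? events : [events])
    ev.volume += volumeAdjustment;
}

function killEvents(events) {
  for (const ev of Array.isArray(events) ? events : [events])
    ev.alive = false;
}
```

With something like this, `changeEventVolume(registry.getEvents(44), -6)` would adjust every event on note 44 in one call, and `killEvents(registry.getEventsForCallback(id))` would kill the pick/sustain/vibrato trio spawned by a single key press.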
-
What use cases do you have in mind for this feature?
To be honest, I would love to avoid voice handling in scripts (it leads to very complicated scripts, and voice-start callbacks would be executed far more often, with a performance penalty). Also, there is one really big difference between KONTAKT and HISE: the ability to place different scripts on different sampler modules (in fact, the tree hierarchy of this engine is HISE's biggest advantage in my opinion).
So instead of selecting the groups you want to modify and then call the API function on this group selection (for example all release samples), you would simply add a sampler containing only the release samples and implement the functionality in a script of this sampler.
Please post some examples you have in mind and let me check if you get along with the current system. Adding a note-based modification API would make sense, for example:
Synth.setNoteGain(noteNumber, -6.0);   // -6 dB
Synth.setNotePitch(noteNumber, -24.0); // -24 cents
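One way to think about such a note-based API is as a per-note parameter table that the engine consults when a voice starts. The sketch below is a standalone plain-JavaScript simulation of that idea; the function names and the internal table are assumptions, not the actual HISE implementation:

```javascript
// Hypothetical per-note modification table, as note-based API calls
// might maintain internally. Keyed by MIDI note number.
const noteMods = new Map(); // noteNumber -> { gainDb, pitchCents }

function setNoteGain(noteNumber, gainDb) {
  const mod = noteMods.get(noteNumber) ?? { gainDb: 0, pitchCents: 0 };
  mod.gainDb = gainDb;
  noteMods.set(noteNumber, mod);
}

function setNotePitch(noteNumber, pitchCents) {
  const mod = noteMods.get(noteNumber) ?? { gainDb: 0, pitchCents: 0 };
  mod.pitchCents = pitchCents;
  noteMods.set(noteNumber, mod);
}

// On note-on, the engine would fold the table into the starting voice,
// so every new voice on that note picks up the stored modifications.
function startVoice(noteNumber) {
  const mod = noteMods.get(noteNumber) ?? { gainDb: 0, pitchCents: 0 };
  return { noteNumber, gainDb: mod.gainDb, pitchCents: mod.pitchCents };
}
```

The point of the design is that the script never has to track voices itself: the modification is stated once per note, and the engine applies it to whatever voices start afterwards.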
@3cw4wqc6:
let's say I have a guitar library which generates 3 events on a single key press: one event for the pick sound, one for a sustain note, and one for a vibrato event. These are separate events because I may want to process them separately. But I may also want to swiftly kill all 3 events at once based on the callback ID.
I suggest you add a container with three samplers for the sample types and write a script processor for each sampler without caring about voices at all (since they are automatically grouped "semantically").
If you need some behaviour that must be applied to all samplers, you would add another ScriptProcessor in the container and implement the behaviour there (e.g. killing all notes when the sustain pedal is pressed).
This approach results in many script processors that contain little code because they are highly specialized. I haven't used KONTAKT scripting for a few years now, but I remember that most of what you need to write is boilerplate code to divide up the flat group hierarchy.
But if you have something that really can't be done with my approach, I'll think about voice handling in scripting
-
I will just chime in really quickly: I think we should avoid creating a workflow where you need wait statements to execute a series of voice manipulations, like (pseudocode):

wait(10)
fade out(-6db in 10 seconds)
wait(10)
fade out(-3db in 10 seconds)
wait(10)
fade out(-1db in 10 seconds)

The above is an attempt at creating an envelope in Kontakt... ridiculous. I think HISE offers (or can offer) a better solution.
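To make the objection concrete: the chained waits are hand-rolling a piecewise-linear envelope that a single ramp primitive could express. A plain-JavaScript sketch of the same three-stage fade as one gain function (the stage values are taken from the pseudocode above; everything else is assumed):

```javascript
// The wait/fade chain above approximates this piecewise-linear dB curve:
// 0-10 s: 0 -> -6 dB, 10-20 s: -6 -> -9 dB, 20-30 s: -9 -> -10 dB.
// One envelope function replaces the three blocking wait statements.
function gainDbAt(t) {
  const stages = [
    { start: 0,  from: 0,  to: -6 },   // fade out -6 dB over 10 s
    { start: 10, from: -6, to: -9 },   // then a further -3 dB over 10 s
    { start: 20, from: -9, to: -10 },  // then a further -1 dB over 10 s
  ];
  for (const s of stages) {
    if (t < s.start + 10)
      return s.from + (s.to - s.from) * Math.max(0, (t - s.start) / 10);
  }
  return -10; // fully faded
}
```

An engine-side envelope evaluates this continuously per sample or block, instead of the script sleeping and issuing discrete fade commands.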
-
The past few years I find myself using more event-based scripting than anything else, rather than relying on keeping track of voices and voice-related information through arrays. That might just be due to how limited Kontakt's arrays are, though.
So with a guitar library, you might want to attach data to the voice, like which string is played, for example. Then if you need to mute all the notes on that one string, you can loop through the playing notes and selectively mute ones based on that data tied to the voice.
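The string example can be sketched as per-voice attached data. This is plain JavaScript with hypothetical names, purely to illustrate the pattern; neither Kontakt nor HISE necessarily exposes it this way:

```javascript
// Hypothetical per-voice data: each playing voice carries which guitar
// string it sounds on, so a whole string can be muted selectively.
const playingVoices = [];

function startNote(noteNumber, stringIndex) {
  const voice = { noteNumber, stringIndex, muted: false };
  playingVoices.push(voice);
  return voice;
}

// Mute every voice tagged with the given string, e.g. when a new note
// is fretted on a string that is already ringing.
function muteString(stringIndex) {
  for (const v of playingVoices)
    if (v.stringIndex === stringIndex) v.muted = true;
}
```

The benefit is exactly what the post describes: the mute logic queries the data tied to the voice instead of maintaining parallel arrays by hand.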
-
Maybe I'm not understanding the hierarchy of HISE.
@3k6ikdni:
So instead of selecting the groups you want to modify and then call the API function on this group selection (for example all release samples), you would simply add a sampler containing only the release samples and implement the functionality in a script of this sampler.
First of all, sometimes the same groups are used in different triggering contexts. For example, in one hypothetical case, slide samples could be used as part of a complex legato script in a note-on event, or triggered on release (sliding down from the released note).
Modifying groups is NOT desirable as I'm talking about modifying events independently. 10 different release events with the same groups might have different volume and tune change information.
@3k6ikdni:
Please post some examples you have in mind and let me check if you get along with the current system. Adding a note-based modification API would make sense, for example:
Note-based modification is fine but there are so many scenarios here that I'm not sure your approach addresses.
@3k6ikdni:
If you need some behaviour that needs to be applied to all samplers, you would add another ScriptProcessor in the container and implement the behaviour there (like eg. killing all notes when the sustain pedal is pressed)
This confuses me even more. So three separate scripts for three separate types of samples? Again, the samples can be triggered in totally different contexts, and all in a single note-on event. Very often how each sample type is triggered and processed depends on shared data related to note and release events. Splitting that up into three separate scripts sounds like an absolute nightmare and I would never do it.
@3k6ikdni:
The past few years I find myself using more voice event-based scripting than anything–like rather than relying on keeping track of voices and voice-related information through arrays. That might just be due to how limited Kontakt's arrays are, though.
FWIW, that's an interesting approach, but aren't you still making use of arrays - just arrays that are tracked automatically by Kontakt (all notes / voices being played)? That's basically what I'm asking for here - more low-level stuff related to voice/event tracking so we don't have to code our own and waste cycles.
-
I encourage you to describe use-case scenarios; I think that's the only way we are going to have a productive discussion, as Kontakt's ways of doing things may not be compatible with HISE. HISE is trying to be forward-thinking. I would suggest writing in the form of "how would one accomplish this in HISE?"
-
The tree hierarchy is the most important difference between HISE and KONTAKT, and I chose this data structure because it allows an encapsulation of logic that can never be achieved with a flat list like KONTAKT's group structure.
Sample libraries consist of many different types of samples, the most popular being sustain samples, release samples, and legato samples. Throwing them all into one big bucket and then trying to address these types within the big melting pot of groups screams for needless boilerplate scripting code:
disallow_group($ALL_GROUPS)
allow_group(1,3,6)
{ do some random stuff here }
allow_group($ALL_GROUPS)
I remember writing this type of script code all over the place, before I decided to roll my own sampler.
This is because when KONTAKT and its core design were developed, sample libraries weren't that big (legato transitions were the absolute exception, and most libraries didn't even have release-trigger samples). But thanks to KONTAKT (got to give them that), sample library technology emerged pretty quickly, and I think KONTAKT got overrun by its users. So now we have the chance to look at today's requirements for sample libraries (and, with Elan's phase-locking stuff, maybe also tomorrow's requirements) and build a system that has these principles as its core organization logic.
@167ixo55:
This confuses me even more. So three separate scripts for three separate types of samples? Again, the samples can be triggered in totally different contexts, and all in a single note-on event. Very often how each sample type is triggered and processed depends on shared data related to note and release events. Splitting that up into three separate scripts sounds like an absolute nightmare and I would never do it.
For me this is the opposite of an absolute nightmare. The ability to write small, highly specialized scripts (which sometimes consist of only a few lines of code) allows developer-friendly workflows like encapsulation, and such scripts can be debugged and maintained far more easily than the multiple-thousand-line monster scripts in KSP.
We are not talking about having to duplicate logic into three scripts. Consider this tree:

Root container
    Release Samples
    Legato Samples
    Sustain Samples
You would then add one script with release-trigger functionality to the first child (this is one line of code), legato logic to the second (again, 2-5 lines of code to get basic legato transitions working), and a script on the sustain sampler that does whatever you like (I can't think of a useful script in this simple context).
But now you want to change the velocity curve for the instrument. This should affect all samples, so you add one ScriptProcessor on the root container with a script that implements this functionality.
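The division of labour described above can be sketched as a tree where an incoming note event flows from the root through each container's scripts down to the child samplers. This is a toy plain-JavaScript model of the structure only, not the actual HISE engine:

```javascript
// Toy model of the tree: each node runs its own script(s) on an incoming
// note event, then passes the event on to its children. A root-level
// script (e.g. a velocity curve) therefore affects all samplers below it,
// while a script on one child (e.g. release-trigger logic) stays local.
function makeNode(name, scripts = [], children = []) {
  return { name, scripts, children };
}

function processNote(node, event) {
  for (const script of node.scripts) script(event);            // local logic
  for (const child of node.children) processNote(child, event); // propagate down
}

// Root script: a velocity curve applied to everything (clamped to 127).
const velocityCurve = ev => { ev.velocity = Math.min(127, ev.velocity * 1.2 | 0); };

const tree = makeNode("Root container", [velocityCurve], [
  makeNode("Release Samples", [ev => ev.handledBy.push("release")]),
  makeNode("Legato Samples",  [ev => ev.handledBy.push("legato")]),
  makeNode("Sustain Samples", [ev => ev.handledBy.push("sustain")]),
]);
```

A call like `processNote(tree, { velocity: 100, handledBy: [] })` shows the key property: the root script shapes the event once, and every child then sees the already-curved velocity while keeping its own logic private.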
@167ixo55:
For example, in one hypothetical case, slide samples could be used as part of a complex legato script in a note-on event, or triggered on release (sliding down from the released note).
This can be achieved by adding some code to the onNoteOff callback of the legato script. Something like this should be enough:

function onNoteOff()
{
    if (legatoReleaseWanted()) // whatever condition you need
    {
        // set the group to the note number (in legato samplers I advise
        // using the groups as start note numbers)
        Synth.enableRoundRobin(Message.getNoteNumber());

        // play the octave-down slide
        Synth.playNote(Message.getNoteNumber() - 12, whateverVelocity);
    }
}
Another way would be to reuse the same samples in another sampler called "Release Trigger Downslide" (samples are not loaded twice if they are referenced in two "zones" or samplers). This approach would be better if you want, e.g., different envelope times for the two types.
I also encourage you to post more examples and use cases. I can understand that this different data structure concept needs a different approach to instrument design, but I am confident this is the way to go.