How are 'voices' defined? It's easy to hit hundreds of voices with even a simple instrument. Brass library with 4 dynamics and 4 mic positions, with dynamic crossfading = 16 voices per note.
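The arithmetic behind that claim can be sketched in a few lines of plain JavaScript; this assumes (as the post implies) that dynamic crossfading keeps every dynamic layer sounding at once, so the per-note count is dynamic layers × mic positions:

```javascript
// Voice-count estimate for a crossfaded multi-mic patch.
// With dynamic crossfading, all dynamic layers sound simultaneously,
// so each held note streams (dynamic layers * mic positions) voices.
function voicesPerNote(dynamicLayers, micPositions) {
    return dynamicLayers * micPositions;
}

const perNote = voicesPerNote(4, 4); // 16 voices per note
const chord = perNote * 4;           // 64 voices for a 4-note chord
console.log(perNote, chord);
```

A sustained 4-note chord on such a patch already streams 64 voices, which is why "hundreds of voices" is easy to reach with release samples and overlapping notes on top.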
Posts made by zircon
RE: Low-Level Event / Voice Handling Functionality
Maybe I'm not understanding the hierarchy of HISE.
So instead of selecting the groups you want to modify and then calling the API function on that group selection (for example, all release samples), you would simply add a sampler containing only the release samples and implement the functionality in a script of that sampler.
First of all, sometimes the same groups are used in different triggering contexts. For example, in one hypothetical case, slide samples could be used as part of a complex legato script in a note-on event, or triggered on release (sliding down from the released note).
Modifying groups is NOT desirable as I'm talking about modifying events independently. 10 different release events with the same groups might have different volume and tune change information.
Please post some examples you have in mind and let me check if you get along with the current system. Adding a note-based modification API would make sense, for example:
Note-based modification is fine but there are so many scenarios here that I'm not sure your approach addresses.
If you need behaviour that should apply to all samplers, you would add another ScriptProcessor in the container and implement the behaviour there (e.g. killing all notes when the sustain pedal is pressed)
This confuses me even more. So three separate scripts for three separate types of samples? Again, the samples can be triggered in totally different contexts, and all in a single note-on event. Very often how each sample type is triggered and processed depends on shared data related to note and release events. Splitting that up into three separate scripts sounds like an absolute nightmare and I would never do it.
The past few years I find myself using more voice/event-based scripting than anything, rather than keeping track of voices and voice-related information through arrays. That might just be due to how limited Kontakt's arrays are, though.
FWIW, that's an interesting approach, but aren't you still making use of arrays - just arrays that are tracked automatically by Kontakt (all notes / voices being played)? That's basically what I'm asking for here - more low-level stuff related to voice/event tracking so we don't have to code our own and waste cycles.
Low-Level Event / Voice Handling Functionality
In Kontakt, access to active events + voices is limited without writing custom KSP. Any custom KSP of course is not going to be as efficient as having low-level functions available. For example, Kontakt has a variable called $ALL_EVENTS so that you can readily kill all active notes. But let's say you want to view and edit all events played on MIDI note 87; this is not possible without writing custom code.
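To make the "custom code" concrete: the bookkeeping you're forced to write by hand looks roughly like the sketch below, written in plain JavaScript rather than KSP. The function names and event representation are invented for illustration, not an existing Kontakt or HISE API:

```javascript
// Hand-rolled per-note event tracking -- the bookkeeping the engine
// could do for us. All names here are hypothetical.
const activeEventsByNote = new Map(); // MIDI note -> array of event IDs

function trackNoteOn(note, eventId) {
    if (!activeEventsByNote.has(note)) activeEventsByNote.set(note, []);
    activeEventsByNote.get(note).push(eventId);
}

function trackNoteOff(note, eventId) {
    const events = activeEventsByNote.get(note) || [];
    activeEventsByNote.set(note, events.filter(id => id !== eventId));
}

// "View and edit all events played on MIDI note 87":
function eventsForNote(note) {
    return activeEventsByNote.get(note) || [];
}
```

Every note-on and note-off callback has to call the tracking functions, and every script that needs the data has to duplicate (or share) this scaffolding; a built-in per-note event query would make all of it unnecessary.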
So here's what I would like to see with this sampler…
1. A function that returns all active (alive) events for a given MIDI note. For example, if events A, B, C, D, and E are active on MIDI note 44, I want to be able to retrieve those events with a single function so I can process them.
2. Functions that allow me to change the volume, tuning, and panning of a single event OR a group of events. Likewise, the ability to fade in, fade out, or kill single AND groups of events. Kontakt has useful functions for changing these things for individual events, but in order to affect a group, you need to create your own array and figure out a way to populate it. I'd like functions that can affect groups just as easily, for example:
This might affect a single event, but I would love to be able to ALSO do:
This way if I wanted to affect all events on a certain MIDI note, I could just retrieve them (feature #1) and process them elegantly.
3. Some way to associate events in a group based on a given callback. For example, I have a Kontakt library which produces anywhere from 1 to 8 separate events when a single key is pressed. The reason for using separate events is that each event might use different groups and offsets, at different volume levels, or with various modifications. Handling these events in Kontakt is a huge pain, though, because Kontakt does not naturally group them, even though they were triggered within the same callback.
What would be great then is to have something like a 'callback ID' separate from the generated event ID of the notes. The generated event IDs would then automatically be associated with the callback ID. Much like my request for #1, these groups of events should be tracked and available for me to process. For example... let's say I have a guitar library which generates 3 events on a single key press: one event for the pick sound, one for a sustain note, and one for a vibrato event. These are separate events because I may want to process them separately. But I may also want to swiftly kill all 3 events at once based on the callback ID.
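A rough sketch of what the three requests could look like as one API surface, using the guitar example above. Every class, method, and ID here is invented for illustration; none of it is an existing HISE or Kontakt call:

```javascript
// Hypothetical event registry combining requests #1-#3.
class EventRegistry {
    constructor() {
        this.events = new Map(); // eventId -> {note, callbackId, volume}
        this.nextCallbackId = 0;
    }
    // #3: one callback ID per key press; all events it spawns share it.
    beginCallback() { return this.nextCallbackId++; }
    addEvent(eventId, note, callbackId) {
        this.events.set(eventId, { note, callbackId, volume: 0 });
    }
    // #1: all active events on a given MIDI note.
    getEventsForNote(note) {
        return [...this.events.entries()]
            .filter(([, e]) => e.note === note)
            .map(([id]) => id);
    }
    // #3: all events spawned by one key press.
    getEventsForCallback(callbackId) {
        return [...this.events.entries()]
            .filter(([, e]) => e.callbackId === callbackId)
            .map(([id]) => id);
    }
    // #2: modify a single event or a whole group with one call.
    setVolume(ids, gainDb) {
        for (const id of [].concat(ids)) this.events.get(id).volume = gainDb;
    }
    killEvents(ids) {
        for (const id of [].concat(ids)) this.events.delete(id);
    }
}

// Guitar example: one key press -> pick, sustain, vibrato events.
const reg = new EventRegistry();
const cb = reg.beginCallback();
reg.addEvent(101, 60, cb); // pick sound
reg.addEvent(102, 60, cb); // sustained note
reg.addEvent(103, 60, cb); // vibrato layer
reg.setVolume(reg.getEventsForNote(60), -6);  // group edit (#1 + #2)
reg.killEvents(reg.getEventsForCallback(cb)); // kill all three at once (#3)
```

The point of the sketch is that once the engine tracks note and callback associations itself, group operations become one-liners instead of hand-maintained arrays.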
Hopefully this makes sense - this is an area where Kontakt is lacking and where I always have to write custom handlers, groups, arrays, etc. and it's a big pain.
DFD Performance + Efficiency
Hi, happy to be here and see development of what looks to be a very promising tool.
I think that achieving high performance, stability, and efficiency is the #1 challenge. Mach5 is an example of a plugin with an incredible engine and tons of features, far more than Kontakt. But it has failed to gain traction because, all else being equal, it suffers from higher CPU usage and worse efficiency when streaming samples.
A good test would be to take a pool of samples in the 1-5 MB range with a DFD buffer of ~100 KB and play very high-polyphony parts, then compare the performance to Kontakt using the same samples. No scripting or effects - just pure streaming performance.
Next, test large numbers of zones and groups: Kontakt can handle tens of thousands of loaded zones and 1000+ groups. This sampler needs to match or beat that performance without choking on such large amounts of data. That may require a different low-level design of the instrument, which is why I wanted to bring it up now, while development is still early.