The Sample Map of The Future: Escaping the 20th Century Sample Mapping Paradigm
-
@d-healey yeah the whole concept is just a filesystem function with a string parser, and a different sample map approach with a list/browser. The latter part is what would require more work than the former, but would make for an easy way to find and edit the files in the list using the built in editor.
-
@aaronventure there is a list of samples already that you can filter by filename tags - and with the selection functions you can create your own custom mapping system quite easily.
-
I like the idea of extending the samplemap feature, but it has to be as performant as the current system. Voice allocation and sample selection is a huge bottleneck so any solution that is slower than the current one is completely useless.
It can also be kept as minimally invasive as possible; it doesn't require a separate Sampler+ module IMO. We can also still maintain the basic 2D mapping for note / velocity - there's hardly any instrument that doesn't organise itself along these two axes. The "problem" you are all having is that currently there is only one additional dimension available (the RR group axis), and as soon as your project requires more than one extra level of organisation (in most cases both dynamic XFade & RR group), you have to work around that limitation.
Given the performance restrictions, I was thinking about assigning an array of bits to each sample that you can write into. Then you can simply activate / deactivate whatever you want. It requires some bit fiddling on the developer side (and we might wrap that into a better UX), but this would leave the sample selection as fast as it is currently.
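To make the "array of bits per sample" idea concrete, here is a minimal plain-JavaScript sketch (the helper names `setBit`, `clearBit` and `hasBit` are mine for illustration, not HISE API). Each sample carries one integer, and individual bits can be switched on or off to mark group membership:

```javascript
// Each sample stores a single integer; individual bits flag its group memberships.
function setBit(bits, index)   { return bits | (1 << index); }   // activate a dimension
function clearBit(bits, index) { return bits & ~(1 << index); }  // deactivate it again
function hasBit(bits, index)   { return (bits & (1 << index)) != 0; }

let sampleBits = 0;
sampleBits = setBit(sampleBits, 0);   // activate dimension 0
sampleBits = setBit(sampleBits, 5);   // activate dimension 5
sampleBits = clearBit(sampleBits, 0); // deactivate dimension 0 again
```

Testing membership is then a single AND per sample, which is why this stays as fast as the current selection.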
I'll play around with a concept, but in the end we might even reuse the RRGroup property and treat it as a 64-bit data slot that can be assigned to multiple "dimensions".
-
Alright, I cooked up an example of how it will look on both the parser and the playback side. Bear with me, it's very nerdy with bit manipulations etc., but that part can be made easier with some tools later on. Performance first.
So, we have this sample set that we want to cram into a single sampler for some reason:
- 7 RR Groups
- 4 dynamic fades
- 3 different types, hot swappable via key switch (sustain, sordino, tremolo)
Now we assign a certain bit range to each property so that it can be encoded into one integer number: we need 3 bits for RR, 2 bits for XFades and 2 bits for the type. Here's an example of how to encode a few sample names:
Bit layout: 0bRRRXXTT (R = RR, X = XFade, T = Type)

- C2_RR1_XF1_sus.wav -> 0b0000000 (RR1 = 0b000, XF1 = 0b00, Type 1 = 0b00)
- C2_RR7_XF1_sor.wav -> 0b1100001 (RR7 = 0b110, XF1 = 0b00, Type 2 = 0b01)
- C2_RR5_XF4_tre.wav -> 0b1001110 (RR5 = 0b100, XF4 = 0b11, Type 3 = 0b10)

Writing a parser for this is very straightforward and offers enough flexibility to cope with all kinds of different setups:
```
// strip the file extension first, so the last token matches "tre" instead of "tre.wav"
var tokens = filename.replace(".wav", "").split("_");

const var RR_GROUPS = ["RR1", "RR2", "RR3", "RR4", "RR5", "RR6", "RR7"];
const var XF_GROUPS = ["XF1", "XF2", "XF3", "XF4"];
const var TYPE_IDS  = ["sus", "sor", "tre"];

const var rr_token   = RR_GROUPS.indexOf(tokens[1]);
const var xf_token   = XF_GROUPS.indexOf(tokens[2]);
const var type_token = TYPE_IDS.indexOf(tokens[3]);

const var rrMask = rr_token << 4;   // move four bits to the left
const var xfMask = xf_token << 2;   // move two bits to the left
const var tpMask = type_token << 0; // does nothing, just for coherence

const var sampleId = rrMask | xfMask | tpMask;

// Store it to the RR group property
s.set(Sampler.RRGroup, sampleId);
```
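As a sanity check, the same encoding logic can be reproduced in plain JavaScript and run against the three filenames from the table above (this mirrors the HiseScript, it is not HISE API):

```javascript
const RR_GROUPS = ["RR1", "RR2", "RR3", "RR4", "RR5", "RR6", "RR7"];
const XF_GROUPS = ["XF1", "XF2", "XF3", "XF4"];
const TYPE_IDS  = ["sus", "sor", "tre"];

// Encode a filename into the 0bRRRXXTT layout described above
function encode(filename) {
  const tokens = filename.replace(".wav", "").split("_");
  const rr = RR_GROUPS.indexOf(tokens[1]);
  const xf = XF_GROUPS.indexOf(tokens[2]);
  const tp = TYPE_IDS.indexOf(tokens[3]);
  return (rr << 4) | (xf << 2) | tp;
}

encode("C2_RR1_XF1_sus.wav"); // 0b0000000
encode("C2_RR7_XF1_sor.wav"); // 0b1100001
encode("C2_RR5_XF4_tre.wav"); // 0b1001110
```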
On the playback side we'll add a new API call: Sampler.addGroupFilter() adds a bit mask to a given event that will be used to determine whether a sample should be played or not. These calls can be combined (behind the scenes it all boils down to a single bit-compare operation per sample, so it's as fast as it can get).
```
// Define our bit masks to pass to the sampler.
// These are constant values that filter out the other bits
// using bitwise AND.
const var TP_FILTER = 0b0000011; // only look at the last two bits
const var XF_FILTER = 0b0001100; // only look at the middle bits
const var RR_FILTER = 0b1110000; // only look at the first three bits

function onNoteOn()
{
    // We want to play all XFade tremolo samples with RR2 now
    local id = Message.getEventId();
    local typeToPlay = 2; // "tre" index in TYPE_IDS
    local rrToPlay = 1;   // "RR2" index in RR_GROUPS

    Sampler.clearGroupFilters(id);
    Sampler.addGroupFilter(id, RR_FILTER, rrToPlay << 4); // shift into the RR bit range
    Sampler.addGroupFilter(id, TP_FILTER, typeToPlay);
}
```
Behind the scenes the sampler will do this:
```
// requested bit mask filters:
// - 0b1110000 -> 0b0010000
// - 0b0000011 -> 0b0000010
// combined (calculated before looping through each sample):
// - 0b1110011 -> 0b0010010

// query each sample:
bool playSample = (sample.get(RRGroup) & 0b1110011) == 0b0010010;
```
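The combine-then-compare step can be modelled in a few lines of plain JavaScript (again an illustration of the described behaviour, not HISE API). Filters are (mask, value) pairs, merged once per event, then every sample is checked with a single AND plus compare:

```javascript
// Merge all requested filters into one mask/value pair
function combineFilters(filters) {
  let mask = 0, value = 0;
  for (const f of filters) {
    mask |= f.mask;
    value |= f.value;
  }
  return { mask: mask, value: value };
}

// The per-sample test: one AND, one compare
function shouldPlay(sampleId, combined) {
  return (sampleId & combined.mask) === combined.value;
}

// RR2 (index 1, shifted into the RR bit range) + tremolo (type index 2)
const combined = combineFilters([
  { mask: 0b1110000, value: 1 << 4 },
  { mask: 0b0000011, value: 2 }
]);

// All XFade layers of RR2 tremolo pass; anything else is rejected.
shouldPlay(0b0011110, combined); // true  (RR2, XF4, tre)
shouldPlay(0b0100010, combined); // false (RR3)
```

Because the XF bits are not part of the combined mask, every crossfade layer of that RR/type combination passes the test.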
@aaronventure would that offer you enough flexibility to leave the 20th century behind (ironically by applying computing tools from the '70s)?
-
@Christoph-Hart Nice, bitmasking is always such fun :)
-
@Christoph-Hart can't it set the filters automatically based on how many tokens we got? or are the consts just to demonstrate what's happening?
This is going over my head a bit (heh), my bit manipulation is very rusty.
I'm happy to leave the backend specifics to you, I agree that there shouldn't be a performance regression.
Does your filter idea play multiple stuff (like all the layers of a note) with a single event?
-
can't it set the filters automatically based on how many tokens we got?
As I said, we can make it easier to use after we get the main principle right. All that bit shuffling can be swept under the rug. My current suggestion would be to supply the sampler with a list of articulations and their properties, and it does all the parsing & bit mangling on its own:
```
// create a group manager that will do all this stuff so that Lindon doesn't need
// to calculate how many bits are required for 13 RR groups
const var gm = Sampler.createGroupManager([
{
    ID: "RR",                // can be used in the UI to filter / display the samples
    LogicType: "RoundRobin", // the logic type is a hint for the default behaviour
    IndexInFileTokens: 1,    // used by the parser to calculate the bit mask
    NumMaxGroups: 7          // required for calculating the amount of bits needed
},
{
    ID: "XF",
    LogicType: "Crossfade",
    IndexInFileTokens: 2,
    NumMaxGroups: 4
},
{
    ID: "PlayingStyle",
    LogicType: "Keyswitch",  // just one of a few new "modes" that we can supply
    IndexInFileTokens: 3,
    NumMaxGroups: 3
}]);

// Goes through all samples, calculates the bit masks and writes the result into
// the RRGroup index (does the parsing part for you)
gm.updateBitMask(Sampler.createSelection(".*"));
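Internally, such a group manager would have to derive a bit width and shift for each dimension from its NumMaxGroups. Here is a hedged plain-JavaScript sketch of that allocation (the function `allocateBits` and its output shape are my invention, purely to illustrate the calculation):

```javascript
// Derive bit widths, shifts and masks from each dimension's NumMaxGroups.
function allocateBits(dimensions) {
  let shift = 0;
  // Walk from the last dimension to the first so the first entry
  // ends up in the most significant bits, matching the examples above.
  return dimensions.slice().reverse().map(d => {
    const bits = Math.ceil(Math.log2(d.NumMaxGroups)); // 7 groups -> 3 bits, etc.
    const field = { ID: d.ID, shift: shift, mask: ((1 << bits) - 1) << shift };
    shift += bits;
    return field;
  }).reverse();
}

const fields = allocateBits([
  { ID: "RR", NumMaxGroups: 7 },          // 3 bits
  { ID: "XF", NumMaxGroups: 4 },          // 2 bits
  { ID: "PlayingStyle", NumMaxGroups: 3 } // 2 bits
]);
// fields[0] -> { ID: "RR", shift: 4, mask: 0b1110000 }
```

This reproduces exactly the 0bRRRXXTT layout from the hand-written example, so the manual shift constants disappear.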
With this approach you don't even need to script anything anymore on the playback side (you still can, of course, for custom group logic, but the bases are covered). Plus there is even more room for optimisation: it can create sample lists for each key switch group, so it doesn't have to look through the entire sample array but can prefilter the currently playing type.
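The prefiltering idea can be sketched as bucketing samples by their keyswitch bits once, so that playback only ever scans the bucket for the active playing style (plain JS, hypothetical helper, not HISE API):

```javascript
const TP_FILTER = 0b0000011; // the playing-style bits from the example above

// Group sample ids by their playing-style bits in a single pass
function buildTypeBuckets(sampleIds) {
  const buckets = new Map();
  for (const id of sampleIds) {
    const type = id & TP_FILTER;
    if (!buckets.has(type)) buckets.set(type, []);
    buckets.get(type).push(id);
  }
  return buckets;
}

const buckets = buildTypeBuckets([0b0000000, 0b1100001, 0b1001110]);
// buckets.get(0b10) -> [0b1001110]  (only the tremolo sample)
```

When a keyswitch changes, the engine would then iterate only over the matching bucket instead of the full sample array.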
Does your filter idea play multiple stuff (like all the layers of a note) with a single event?
Everything that isn't masked out will be played, yes - in my example above, all the XFade layers pass the bit mask test and are flagged for playing.
-
Edit: I just noticed you said you'd keep the basic note/vel mapping. I'll boil my question down to "Does this mean XFade groups would be added as an axis separate from RRs, and will I still be able to use Enable MIDI Selection?"
Original novel:
This looks like a dream come true as someone traumatized by too many huge orchestral libraries done in Kontakt.
I'm curious how you see it interacting with the existing controls. Is the idea that in this mode, samples could only be played back with Sampler.addGroupFilter(), or would assigning a sample a note and RR value still put it in the mapping editor? The xfades would need a way to be edited - were you thinking of moving the existing group xfade controls out of the RR groups? Or would it all be managed by script?
The library I'm working on would not have been possible without HISE's "Enable MIDI selection", which let me adjust volume, pitch, start, and end of my legato samples to match them to the sustains. In Kontakt, without combined mics, it's almost impossible to do, though Jasper Blunk masochistically manages it with table modulators and force of will. Spitfire and OT probably do it with their dedicated (proprietary) players. Point is this has always been out of reach for me.
The downside was that I needed to use RRs for variants - vowels in this case. Crossfaded dynamics got split out into separate samplers and controlled with a global modulator.
If "XFade" is added as a native concept, and if it were added as a separate axis, similar to how RRs are now, with a dropdown to filter for a given xfade layer in the mapping editor, that "Enable MIDI selection" workflow would still be possible.
I'm biased, as almost all libraries I've worked on have multiple arts, some with rrs, some with xfaded dynamic levels, some with both. I think supporting keyswitching natively is maybe overkill, as separate samplers for separate articulations is logical and works fine, but maaaaybe it's worth natively supporting crossfaded dynamic levels in the UI?
-
or would assigning a sample a note and rr value still put it in the mapping editor?
Yup, samples are still mapped in a 3+ dimensional space. Currently it's X = Note, Y = Velocity, Z = RRGroup. With the new system it's X = Note, Y = Velocity, Z = BITMASK!!! - and this bitmask can handle multiple dimensions of organisation (keyswitches, articulation types, RR, whatever you need).
The library I'm working on would not have been possible without HISE's "Enable MIDI selection", which let me adjust volume, pitch, start, and end of my legato samples to match them to the sustains.
This should still be possible, it just selects what was played back most recently. If you now have multiple types in one sampler (e.g. sustains and legatos), it might be a little bit tricky, but we could then also apply the bitmask filter to what is displayed.
-
@Christoph-Hart In your example samples get assigned both an RR and a group XFade, were you planning to add separate controls to manage and filter group xfades?
-
Ok well I'm pretty much in - where do I sign?
Just for my clarity tho - we now "map" samples into the 3D space by using the 2D (velocity x note) space in the editor, and we use the name of the wav file to assign it in the n-dimensional bitmask - so some set of tokens like:
InstrumentName_Velocity_NoteNum_MicPosition_RRGroup_XFadeGroup.wav
Where our bitmasking system ignores the velocity, note number, and mic position tokens... because we've defined a set (array) of strings that define what will appear for each position in our bitmask, e.g. round robins:
const var RR_GROUPS = ["RR1", "RR2", "RR3", "RR4", "RR5", "RR6", "RR7"];
xfades:
const var XF_GROUPS = ["XF1", "XF2", "XF3", "XF4" ];
Is this right?
In fact, wouldn't it be much nicer if we were not declaring these token arrays every time we loaded a new sample map, and we moved them into the samplemap XML itself?
Or is that what this thing is for?
ID: "XF", LogicType: "Crossfade", IndexInFileTokens: 2, NumMaxGroups: 4,
-
@Christoph-Hart said in The Sample Map of The Future: Escaping the 20th Century Sample Mapping Paradigm:
Everything that isn't masked out will be played, yes - in my example above, all the XFade layers pass the bit mask test and are flagged for playing.
Can we filter it further, play a note, then filter again within the same callback? The idea in the original post is to make use of the eventData concept to allow per-sample modulation.
Or is the thing that we now use the good old note x vel 2d map to choose the individual samples?
-
Is this right?
Yes.
Can we filter it further, play a note, then filter again within the same callback?
You can always apply your own filters to filter out a layer. I'm currently writing the data container, it will all clear up soon.
-
@Christoph-Hart said in The Sample Map of The Future: Escaping the 20th Century Sample Mapping Paradigm:
Is this right?
Yes.
In fact, wouldn't it be much nicer if we were not declaring these token arrays every time we loaded a new sample map, and we moved them into the samplemap XML itself?
-
@Lindon said in The Sample Map of The Future: Escaping the 20th Century Sample Mapping Paradigm:
In fact, wouldn't it be much nicer if we were not declaring these token arrays every time we loaded a new sample map, and we moved them into the samplemap XML itself?
Sure, if the layout changes between samplemaps then this shouldn't be an issue.
-
@Christoph-Hart I'm trying to work out in my head (and let's be frank, there's not much room left in there)... how I might use this to manage True Legato - so that's: start note fades to transition sound, which fades to target note.
If it doesn't obviously support this in a simple way, maybe we can think about how a 21st century sampler would manage that...
-
@Lindon I'm working on exactly that use case right now. I just need to work out how to make it efficient in the backend.
-
I don't see this particularly affecting legato. I would still put sustains in one sampler and legato samples in another, which makes it easier to adjust envelopes and manage the sample maps.
The biggest change for me here is being able to use dynamic xfades and round robins simultaneously in one sampler.
-
@Simon Having legatos in the same sampler does bring some benefits (e.g. automatic gain matching like with the release start, and zero-cross aligning the start for less phasing during the fade).
In the end it's just another filter with 128 options - the legato sample will be mapped to the target note and this filter is set to the source note.
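Since 128 source notes fit in a 7-bit field, the legato case can be sketched in plain JavaScript like this (the field layout and helper names are hypothetical, purely to illustrate the "filter set to the source note" idea):

```javascript
// Hypothetical layout: a 7-bit field in the sample's bitmask stores the source note.
const SOURCE_SHIFT = 0;
const SOURCE_MASK = 0b1111111 << SOURCE_SHIFT; // 128 possible source notes

// Encode a transition sample: mapped to the target note, tagged with its source note
function makeTransitionId(sourceNote) {
  return (sourceNote << SOURCE_SHIFT) & SOURCE_MASK;
}

// When a new note arrives, filter the target note's samples by the
// previously played note:
function transitionFilter(previousNote) {
  return { mask: SOURCE_MASK, value: previousNote << SOURCE_SHIFT };
}

const f = transitionFilter(60); // previous note was C3 (MIDI 60)
const sampleId = makeTransitionId(60);
(sampleId & f.mask) === f.value; // this transition sample is selected
```

So the transition from any source note to the played target note is just one more bitmask compare, no extra scripting logic.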
-
@Christoph-Hart For automatic gain matching, I'll gladly put all my legatos in the same sampler :)