Multimic related questions
-
I’m building an interface of about 25 samplers with Builder, and I’m not done implementing features, so I’d like not to lose that work by manually converting everything to multimic and setting up routing.
- Is there a way to create a sample map object that Sampler.loadSampleMapFromJSON(var jsonSampleMap) will accept and that creates the multimic set if configured properly? A multimic sample has multiple <file> tags; I tried replacing the FileName value in the JSON with an array or an object, thinking that would be the logical way for the importer to check the type. But maybe it uses the MicPositions attribute, in which case I won’t be able to edit that without file manipulation.
<samplemap ID="multi HH Splash" SaveMode="0" FileName="" MicPositions="Close_1;Front_MS;OH_M;OH_XY;Room;Wide;" RRGroupAmount="4.0" CrossfadeGamma="1.0">
  <sample Root="73" LoKey="73" HiKey="73" LoVel="1.0" HiVel="31.0" RRGroup="1" Duplicate="0" Volume="-10">
    <file FileName="{PROJECT_FOLDER}Hihat-HH_Splash-Close_1-1.wav"/>
    <file FileName="{PROJECT_FOLDER}Hihat-HH_Splash-Front_MS-1.wav"/>
    <file FileName="{PROJECT_FOLDER}Hihat-HH_Splash-OH_M-1.wav"/>
    <file FileName="{PROJECT_FOLDER}Hihat-HH_Splash-OH_XY-1.wav"/>
    <file FileName="{PROJECT_FOLDER}Hihat-HH_Splash-Room-1.wav"/>
    <file FileName="{PROJECT_FOLDER}Hihat-HH_Splash-Wide-1.wav"/>
  </sample>
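For the JSON route, the shape I tried was something like this (a plain JS sketch; the array-valued FileName is exactly the part I don’t know is supported):

```javascript
// Hypothetical sample entry with FileName as an array of mic files.
// Whether loadSampleMapFromJSON() accepts an array here is the open
// question; this is NOT confirmed HISE behaviour.
var multiMicSample = {
    Root: 73, LoKey: 73, HiKey: 73,
    LoVel: 1, HiVel: 31,
    RRGroup: 1, Duplicate: 0, Volume: -10,
    FileName: [
        "{PROJECT_FOLDER}Hihat-HH_Splash-Close_1-1.wav",
        "{PROJECT_FOLDER}Hihat-HH_Splash-Front_MS-1.wav",
        "{PROJECT_FOLDER}Hihat-HH_Splash-OH_M-1.wav"
    ]
};
```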
I could create this file manually, but thought I’d ask first before yet another side quest on this journey. ;)
- Is there a way to set the number of channels and link them to their output channels on either containers or samplers? The only reference I can find is RoutingMatrix.setNumChannels(int), but that seems to be for routing matrix modulators. I tried using it on a ChildSynth and got an unknown-function error.
I also tried setting NumChannels via JSON but, like many other ‘key’ parts of modules (Intensity, Bipolar, VelocityTableData, etc.), it isn’t accessible by this method.
Thanks
-
@Fergler said in Multimic related questions:
about 25 samplers
Why so many?
-
I'm not sure I understand the question. There is a built-in tool for merging mics and creating the correct values in the sample map XML.
-
The number of channels will automatically match the number of channels in the sample map.
-
-
@d-healey Is that actually a lot? I have no reference! :D
I originally had about 65 of them until I wrote some math to dynamically construct the round-robin layers regardless of whether there are 4 samples or 40. Samplers need the same number of round robins for every note, so this workaround let me merge them into at least 12 pieces plus the hihat (which is 5 open/close varieties × 2 strike points, plus stomp and splash).
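The padding math boils down to something like this (a plain JS sketch of the same logic that’s in my build script later in the thread; the helper name is just for illustration):

```javascript
// Given a source sample count and the kit-wide maximum RR group count,
// compute how many velocity layers are needed and which source sample
// each grid slot uses. Slots past the end of the source set reuse
// earlier samples so every note gets the same number of round robins.
function buildSampleGrid(sourceSampleCount, maxRRGroups)
{
    var velocityLayers = Math.ceil(sourceSampleCount / maxRRGroups);
    var finalSampleCount = maxRRGroups * velocityLayers;
    var remainder = finalSampleCount % sourceSampleCount;
    var slots = [];

    for (var sampleCount = 1; sampleCount <= finalSampleCount; sampleCount++)
    {
        // Overflow slots at the top of the grid wrap back onto earlier samples
        var sampleNum = (sampleCount + remainder > sourceSampleCount)
            ? sampleCount - remainder
            : sampleCount;
        slots.push(sampleNum);
    }
    return slots;
}
```

With 10 source samples and 4 RR groups this yields a 3 × 4 grid of 12 slots, where the last two slots reuse samples from lower in the stack.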
I didn’t notice a performance issue with 65 but I’m glad for the project management convenience for sure.
To answer your reply to question 1: the tool is awesome, but I can’t access it through scripting when using Builder.create() combined with loading the sample maps generatively, so I was looking for a way to do that. I looked on GitHub but couldn’t find the relevant code for how the sample map import works (I don’t read C++ well; I only know Lua from REAPER scripting), so I can’t verify whether it can load a JSON file that already includes the multimic data, ready to go (resulting in the same thing as if you had exported a multimic sample map and reimported it: automatic channel count, etc.).
I am 75% of the way to reconstructing the XML file manually so I think I have this figured out now but I hope that explains things well enough.
-
@Fergler said in Multimic related questions:
Is that actually a lot?
Yup. I try to have no more than 4, my current project has 6 and I'm not happy about it :)
I mean there's no harm in having more if you know how to manage them but the fewer the better. Each one will use system resources and is more work to manage.
I see you're working on a percussion plugin. I think these generally need more samplers...
Usually the main reason for more samplers is to have different effects chains for each set of samples or because you need multiple sets of samples to trigger at once or in a certain way. Otherwise you can usually achieve whatever you need using groups within a single sampler.
So you want to be able to load additional sample maps into the compiled plugin? I don't believe there is any way to access the mic merge facility via scripting, or that it's available in a compiled project.
-
So you want to be able to load additional sample maps into the compiled plugin? I don't believe there is any way to access the mic merge facility via scripting, or that it's available in a compiled project.
No, just for development (and because the amount/distribution of files could change, and for future recordings). I got the XML export working, and now load that instead of the JSON.
-
@Fergler Then just use the mapping window and built in tools, no need to try and script it yourself.
-
I think some aspect of tone is getting lost in text here. With all due respect (earned by the great tutorials and resources you’ve put out), manually creating sample maps is out of the question for a project this size: any small change in how I want to distribute those samples among velocity and RR layers would need to be redone for every strike. I have already built this drum sampler in DecentSampler but want to expand on it in HISE. As an example, one of the things I did there was bake in RMS values for each hit across all the skins and use them to determine where the samples were distributed over the velocity range, so that they aligned with their actual real-life loudness curve. Tuned right, it prevented some of that ’something’s off’ effect when playing an e-kit, where the samples sound louder than their actual volume in the mid and quiet velocity ranges.
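As a rough sketch of that RMS idea (plain JS, hypothetical helper; assumes per-layer RMS values in dB, sorted quietest first, with at least two layers):

```javascript
// Map each sample layer's measured RMS (dB, quietest first) onto MIDI
// velocity ranges, so that velocity boundaries follow the hit's actual
// loudness curve instead of an even split. Illustrative only.
function velocityRangesFromRMS(rmsValues)
{
    var lo = rmsValues[0];
    var hi = rmsValues[rmsValues.length - 1];
    var ranges = [];
    var prevHi = 0;

    for (var i = 0; i < rmsValues.length; i++)
    {
        // Normalise this layer's RMS to 0..1 and map its upper bound to 1..127;
        // the loudest layer always tops out at 127.
        var t = (rmsValues[i] - lo) / (hi - lo);
        var hiVel = (i == rmsValues.length - 1) ? 127 : Math.round(1 + t * 126);
        ranges.push({ LoVel: prevHi + 1, HiVel: hiVel });
        prevHi = hiVel;
    }
    return ranges;
}
```

A layer that is much quieter than its neighbours then gets a correspondingly narrow velocity range, which is the “real life curve” alignment described above.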
It’s not complicated code; I’m just struggling with not knowing the full capabilities of the backend. And I’m aware of what a large task documentation is, so I’m documenting all my pain points along the way so I can contribute too, because there are definitely areas that need love (Send Effects aren’t in the docs, Builder types and chains aren’t listed in a table, things like that which require lots of searching or intuition/guessing).
The first question is really just a hopeful guess that the JSON import code contains an undocumented feature for additional objects within the FileName element.
I have finished sample creation/import and it looks like this:
const BuildingEnabled = 1;
const dataObj = Engine.loadFromJSON("../ArticulationData.json");
const velBase64 = "36.......HhHvC...vOO4iN+PDQD8CLb8nO...f+....9CDtdzO";
const micSets = ["Close_1", "OH_XY", "OH_M", "Wide", "Front_MS", "Room"];
const micString = micSets.join(";") + ";";

const kitPieces = [
    ["China", "China", 9, 30],
    ["Crash_L", "L Crash", 12, 55],
    ["Crash_R", "R Crash", 13, 52],
    ["Hihat", "Hihat", 4, 46],
    ["Kick", "Kick", 1, 36],
    ["OpenSnare", "Open Snare", 3, 38],
    ["Ride", "Ride", 8, 51],
    ["Snare", "Snare", 2, 38],
    ["Splash", "Splash", 11, 65],
    ["Stack", "Stack", 10, 71],
    ["Tom_1", "Tom 1", 5, 48],
    ["Tom_2", "Tom 2", 6, 45],
    ["Tom_3", "Floor Tom", 7, 43],
];

function createModule(type, name, parent)
{
    switch (type.toLowerCase())
    {
        case "sampler":
            var moduleType = builder.SoundGenerators.StreamingSampler;
            break;
        case "container":
            var moduleType = builder.SoundGenerators.SynthChain;
            break;
        case "sinegen":
            var moduleType = builder.SoundGenerators.SineSynth;
            break;
        default:
            Console.print("Type not found, check spelling?");
    }

    var module = builder.create(moduleType, name, parent, builder.ChainIndexes.Direct);
    return module;
}

function createSampleObj(fName, RemVol, note, loVel, hiVel, RRGroup, Duplicate)
{
    var obj = {
        FileName: fName,
        Root: note,
        LoKey: note,
        HiKey: note,
        LoVel: loVel,
        HiVel: hiVel,
        RRGroup: RRGroup,
        Volume: RemVol,
        Duplicate: Duplicate,
    };
    return obj;
}

var maxRRGroups = 0;
var groupCount = -1;
var sampleMap = [];

if (BuildingEnabled)
{
    var builder = Synth.createBuilder();
    builder.clear();

    var kickSine = createModule("sinegen", "Kick Sine", 0);
    builder.setAttributes(kickSine, { Gain: "1", Balance: "0.0", VoiceLimit: "1.0", KillFadeTime: "150.0", OctaveTranspose: "-1.0", SemiTones: "0.0", UseFreqRatio: "1.0", CoarseFreqRatio: "-1.0", FineFreqRatio: "0.2000000029802322", SaturationAmount: "0.0" });
    builder.clearChildren(kickSine, builder.ChainIndexes.Gain);

    var kickSineADHSR = builder.create(builder.Modulators.AHDSR, "Kick Sine ADHSR", kickSine, builder.ChainIndexes.Gain);
    builder.setAttributes(kickSineADHSR, { AttackCurve: "0.5199999809265137", DecayCurve: "0.0", Attack: "45.0", AttackLevel: "-0.4200000166893005", Hold: "3.0", Decay: "1050.0", Sustain: "-100.0", Release: "4145.0", EcoMode: "1.0" });

    var kickSinePitch = builder.create(builder.Modulators.SimpleEnvelope, "Kick Sine Pitch", kickSine, builder.ChainIndexes.Pitch);
    builder.setAttributes(kickSinePitch, { Retrigger: 1.0, Attack: 577.0, Release: 0.0, LinearMode: 1.0 });
    builder.get(kickSinePitch, builder.InterfaceTypes.Modulator).setIntensity(-6);

    var kickSineVel = builder.create(builder.Modulators.Velocity, "Kick Sine Vel", kickSine, builder.ChainIndexes.Gain);
    builder.setAttributes(kickSineVel, { UseTable: 1, Inverted: 0, DecibelMode: 0, });
    Synth.getTableProcessor("Kick Sine Vel").restoreFromBase64(0, velBase64);

    var kickSineMidi = builder.create(builder.MidiProcessors.ScriptProcessor, "Kick Sine MIDI", kickSine, builder.ChainIndexes.Midi);
    builder.connectToScript(kickSineMidi, "{PROJECT_FOLDER}KickSine.js");

    // create special hihat container
    inline function rrGroupSize(sampleCount)
    {
        local rrGroups = sampleCount > 35 ? 6 :
                         sampleCount > 24 ? 5 :
                         sampleCount > 15 ? 4 :
                         sampleCount > 11 ? 3 :
                         sampleCount > 7 ? 2 : 1;
        return rrGroups;
    }

    var hihatContainer = createModule("container", "Hihat", 0);

    for (group in dataObj)
    {
        groupCount++;
        sampleMap = [];
        var hitCount = 0;
        var lastHit = "";
        var xmlSample = "";
        var xmlSample2 = "";
        var xmlFiles = "";
        var xmlString = "";

        // Get the RR group size & # of hits for the sampler
        for (hit in dataObj[group])
        {
            hitCount += 1;
            var rrGroupsCheck = rrGroupSize(dataObj[group][hit]["Layers"]);
            maxRRGroups = rrGroupsCheck > maxRRGroups ? rrGroupsCheck : maxRRGroups;
            lastHit = hit;
        }

        for (hit in dataObj[group])
        {
            var groupName = (group == "Hihat") ? hit : kitPieces[groupCount][1];
            var parent = (group == "Hihat") ? hihatContainer : 0;

            var xmlHeader = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<samplemap ID=\"" + ((group === "Hihat") ? hit : groupName) + "\" SaveMode=\"0\" RRGroupAmount=\"" + maxRRGroups + "\" MicPositions=\"" + micString + "\" CrossfadeGamma=\"1.0\">\n";

            var sourceSampleCount = dataObj[group][hit]["Layers"];
            var note1 = dataObj[group][hit].Note1;
            var note2 = dataObj[group][hit].Note2;
            var RemVol = dataObj[group][hit].RemVol;

            var groupSampler = createModule("Sampler", groupName, parent);
            builder.setAttributes(groupSampler, { VoiceLimit: 256, VoiceAmount: 256, OneShot: 1, });
            builder.clearChildren(groupSampler, builder.ChainIndexes.Gain);

            function createSends(micSets, groupName, groupSampler)
            {
                // Create the SendEffect for this mic
                for (i = 0; i < micSets.length; i++)
                {
                    var multiChan = builder.create(builder.Effects.SendFX, groupName + micSets[i] + "Send", groupSampler, builder.ChainIndexes.FX);
                    // Set attributes for the SendEffect
                    builder.setAttributes(multiChan, { ChannelOffset: i * 2, Gain: 1 });
                }
                return multiChan;
            }

            createSends(micSets, groupName, groupSampler);
            //Console.print(trace(multiChan));

            var ADHSR = builder.create(builder.Modulators.AHDSR, groupName + " ADHSR", groupSampler, builder.ChainIndexes.Gain);
            builder.setAttributes(ADHSR, { Attack: 0, AttackCurve: 0, Hold: 0, Decay: 300, DecayCurve: 0, Sustain: -0.32, Release: 0, Retrigger: 1, AttackLevel: 0, });

            var Velocity = builder.create(builder.Modulators.Velocity, groupName + " Velocity", groupSampler, builder.ChainIndexes.Gain);
            builder.setAttributes(Velocity, { UseTable: 1, Inverted: 0, DecibelMode: 0, });
            Synth.getTableProcessor(groupName + " Velocity").restoreFromBase64(0, velBase64);

            var Pitch = builder.create(builder.Modulators.SimpleEnvelope, groupName + " Pitch", groupSampler, builder.ChainIndexes.Pitch);

            if (group == "Hihat")
            {
                sampleMap = [];
                xmlString = ""; // additional resets

                for (hit in dataObj.Hihat)
                {
                    var ranges = ["Open", "A", "B", "C", "Firm"];
                    var rangeString = hit.replace("HH_", "").replace("Shoulder_", "").replace("Tip_", "");

                    if (ranges.contains(rangeString))
                    {
                        var midiScript = builder.create(builder.MidiProcessors.ScriptProcessor, hit + " Midi", groupSampler, builder.ChainIndexes.Midi);
                        builder.connectToScript(midiScript, "{PROJECT_FOLDER}Hihat/Hihat" + rangeString + ".js");
                    } // if ranges contains rangeString
                } // for hit in dataObj.Hihat
            } // if group is Hihat

            var rrGroups = rrGroupSize(sourceSampleCount);
            var velocityLayers = Math.ceil(sourceSampleCount / maxRRGroups);
            var finalSampleCount = maxRRGroups * velocityLayers;
            var remainder = finalSampleCount % sourceSampleCount;
            var sampleCount = 0;

            for (velLayer = 0; velLayer < velocityLayers; velLayer++)
            {
                for (currentRR = 1; currentRR <= maxRRGroups; currentRR++)
                {
                    sampleCount++;
                    xmlFiles = "";
                    var loVel = 1 + Math.floor((126 * (velLayer)) / velocityLayers);
                    var hiVel = Math.floor((127 * (velLayer + 1)) / velocityLayers);

                    xmlString += " <sample Root=\"" + note1 + "\" LoKey=\"" + note1 + "\" HiKey=\"" + note1 + "\" LoVel=\"" + loVel + "\" HiVel=\"" + hiVel + "\" RRGroup=\"" + currentRR + "\" Duplicate=\"1\" Volume=\"" + RemVol + "\">\n";

                    for (i = 0; i < micSets.length; i++) // for each mic
                    {
                        var mic = micSets[i];
                        var sampleNum = (sampleCount + remainder > sourceSampleCount) ? sampleCount - remainder : sampleCount;
                        var fName = "{PROJECT_FOLDER}" + group + "-" + hit + "-" + mic + "-" + sampleNum + ".wav";
                        sampleMap.push(createSampleObj(fName, RemVol, note1, loVel, hiVel, currentRR, 0));
                        xmlFiles += " <file FileName=\"" + "{PROJECT_FOLDER}" + group + "-" + hit + "-" + mic + "-" + sampleNum + ".wav\" />\n";

                        if (typeof note2 == "number")
                        {
                            sampleMap.push(createSampleObj(fName, RemVol, note2, loVel, hiVel, currentRR, 0));
                        }
                    } // for each mic

                    xmlString += xmlFiles + "</sample>\n";

                    if (typeof note2 == "number")
                    {
                        xmlString += " <sample Root=\"" + note2 + "\" LoKey=\"" + note2 + "\" HiKey=\"" + note2 + "\" LoVel=\"" + loVel + "\" HiVel=\"" + hiVel + "\" RRGroup=\"1\" Duplicate=\"1\" Volume=\"" + RemVol + "\">\n" + xmlFiles + "</sample>\n";
                    }
                } // for each RR
            } // for each velLayer

            if (hit == lastHit || group == "Hihat")
            {
                xmlString = xmlHeader + xmlString + "</samplemap>";

                if (groupCount == 12)
                {
                    //Console.print(xmlString);
                }

                var xmlSampleMap = FileSystem.getFolder(FileSystem.AudioFiles).getChildFile(groupName + ".xml");
                xmlSampleMap.writeString(xmlString);
                //Console.print(xmlSampleMap);
                Synth.getSampler(groupName).loadSampleMap(xmlSampleMap.toString("FullPath"));
                //Engine.dumpAsJSON(sampleMap, groupName + ".json");
                //Synth.getSampler(groupName).loadSampleMapFromJSON(sampleMap);
            }
        } // for each hit

        //if (stopper > 1) break; // Lets take it slow
    } // group

    builder.flush();
} // BuildingEnabled
else
{
    for (group in dataObj)
    {
        groupCount++;
        for (hit in dataObj[group])
        {
            var groupName = (group == "Hihat") ? hit : kitPieces[groupCount][1];
            //Synth.getSampler(groupName).saveCurrentSampleMap(groupName);
        }
    }
}

include("LookAndFeel.js");
include("UI.js");
-
Basically what I do (for mapping many thousands of samples) is put all the important mapping info in each sample's name and use HISE's auto mapper to put them into the correct note ranges, velocity ranges, and groups. After that I select all the samples and merge them.
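With a naming scheme like the one used earlier in this thread (Group-Hit-Mic-RR.wav), the tokens are easy to pull apart. A plain JS sketch (the helper is illustrative, not a HISE API):

```javascript
// Split a sample file name of the form "Group-Hit-Mic-RR.wav" into its
// mapping tokens. Assumes none of the tokens themselves contain a hyphen.
function parseSampleName(fileName)
{
    var parts = fileName.replace(".wav", "").split("-");
    return {
        group: parts[0], // kit piece, e.g. "Hihat"
        hit: parts[1],   // articulation, e.g. "HH_Splash"
        mic: parts[2],   // mic position, e.g. "Close_1"
        rr: parseInt(parts[3], 10) // round-robin index
    };
}
```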
If I need to batch adjust properties after they've been mapped (pitch/pan/volume/etc.), I use scripting, following the basic ideas from here - https://forum.hise.audio/topic/64/fun-with-regex
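A toy example of the selection idea (plain JS; the actual HISE selection calls are covered in the linked topic):

```javascript
// Use a regex against the file names to grab one subset of sounds, here
// every "Room" mic sample; you'd then batch-apply a property to just
// that selection. File names follow the Group-Hit-Mic-RR.wav scheme.
var files = [
    "Kick-Kick-Close_1-1.wav",
    "Kick-Kick-Room-1.wav",
    "Snare-Snare-Room-2.wav"
];
var roomSamples = files.filter(function(f) { return /-Room-/.test(f); });
```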
-
I don’t have a way around multiple samplers if I need an AHDSR on each, right? I could be wrong, but applying a volume curve to each sample on every tick of a knob turn via the regex approach sounds comparatively expensive.
-
You can fade notes in/out using scripting (you are right that you don't want to be changing sample properties in real time).
What do you want individual envelope control over? Is it one envelope per drum?