HISE Forum

    Dynamic EQ Yet?

    General Questions · 19 Posts · 5 Posters · 622 Views
    • A Former User @d.healey

      @d-healey Probably nothing you don't already know and actively do - find places where people are selling similar things and email them 😂

      Namely affiliate stores like pluginBoutique and review pages like SampleLibraryReview or BedroomProducersBlog.

      Might get some slots sorted out for Black Friday with VI-Control as well if there's any left

      I've had multiple people suggest I try to find influencers or content creators on YouTube/Instagram/TikTok, but I'm clearly searching for the wrong things because I can't find any

      • hisefilo @d.healey

        @d-healey said in Dynamic EQ Yet?:

        Got any tips?

        Got some tips. I mean, just one tip: focus on excellent, innovative product ideas.
        In my (super short) experience, press, affiliates, advocates, influencers, and retailers will come to you

        • A Former User @hisefilo

          @hisefilo said in Dynamic EQ Yet?:

          innovative product ideas.

          I think ML might have some potential for innovation - I managed to turn a tree-smack into a taiko from a few minutes of training using Magenta

          unfortunately it's mono and only 16 kHz, so there's some work left to be done :P

          before: [audio attachment]

          after: [audio attachment]
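          For context, this style of timbre transfer drives the model with just two signals extracted from the input audio: fundamental frequency (f0) and loudness. Here's a toy sketch of extracting both in plain numpy; it's purely illustrative, not Magenta's actual code, and the crude autocorrelation pitch estimate is just a stand-in for the learned pitch tracker DDSP really uses:

```python
# Toy sketch (not Magenta code): the two conditioning features a
# DDSP-style timbre-transfer model is driven by - fundamental
# frequency (f0) and loudness - extracted with plain numpy.
import numpy as np

SR = 16000  # DDSP models train at 16 kHz

def loudness_db(audio, frame=1024, hop=256):
    """Frame-wise RMS loudness in dB."""
    frames = [audio[i:i + frame] for i in range(0, len(audio) - frame, hop)]
    rms = np.array([np.sqrt(np.mean(f ** 2)) for f in frames])
    return 20 * np.log10(np.maximum(rms, 1e-7))

def f0_autocorr(frame, sr=SR, fmin=60, fmax=1000):
    """Crude autocorrelation pitch estimate (stand-in for a real tracker)."""
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

t = np.arange(SR) / SR
tone = 0.5 * np.sin(2 * np.pi * 220.0 * t)  # 1 s test tone, A3
print(f0_autocorr(tone[:2048]))   # close to 220 Hz
print(loudness_db(tone).mean())   # around -9 dB for a 0.5-amplitude sine
```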

          • hisefilo @A Former User

            @iamlamprey said in Dynamic EQ Yet?:

            think ML might have some potential for innovation -

            Absolutely. I think these kinds of things are the best marketing you can invest your time in

            • hisefilo @A Former User

              @iamlamprey said in Dynamic EQ Yet?:

              tree-smack

              I always dreamed of an ML algo that could learn articulations from real players. Also learning styles... like being able to play a trumpet in a Miles style

              • A Former User @hisefilo

                @hisefilo I'm sure it's possible, just need someone infinitely smarter than me to develop it 😂

                my ultimate plugin would be a 5mb vst that the end user opens, types "aggressive guitar" or "soft piano" and gets an ML-derived physical model with unlimited expressivity...

                • hisefilo @A Former User

                  @iamlamprey I think you are smart enough. I know nothing about ML, but I guess you can train a model to classify audio based on spectral data? I mean, a ppp C4 note from a trumpet will always look similar, no matter the trumpet

                  • A Former User @hisefilo

                    @hisefilo I'm only smart enough to use someone else's model; writing my own would result in my head exploding at least 3 times

                    But I guess you can train a model to classify audio based on spectral data? I mean, a ppp C4 note from a trumpet will always look similar, no matter the trumpet

                    I believe a guided model could be taught what a ppp trumpet C4 sounds like (and every other note). With a big enough model of the actual instrument, you could then teach that model the differences between players... in theory, anyway...

                    "now that we know what the instrument sounds like, here's it being played by different musicians"
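                    The "a ppp C4 always looks similar" idea can be sketched without any ML library at all. This toy (all tones and numbers invented for illustration) averages the magnitude spectra of a few synthetic examples per note and classifies new audio by cosine similarity to those averages, i.e. a nearest-centroid classifier:

```python
# Toy illustration of "a ppp C4 always looks similar": classify synthetic
# notes by cosine similarity of their magnitude spectra to per-class
# average spectra (nearest centroid - no ML library needed).
import numpy as np

SR = 16000
rng = np.random.default_rng(0)

def note(freq, rolloff):
    """Synthetic harmonic tone; `rolloff` shapes harmonic decay (the 'player')."""
    t = np.arange(SR // 4) / SR
    x = sum(np.sin(2 * np.pi * k * freq * t) / k ** rolloff for k in range(1, 8))
    return x + 0.01 * rng.standard_normal(len(t))

def spectrum(x):
    s = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    return s / np.linalg.norm(s)

# "Training": average a few example spectra per note class
classes = {"C4": 261.63, "G4": 392.00}
centroids = {name: np.mean([spectrum(note(f, r)) for r in (1.5, 2.0, 2.5)], axis=0)
             for name, f in classes.items()}

def classify(x):
    s = spectrum(x)
    return max(centroids, key=lambda name: s @ centroids[name])

print(classify(note(261.63, 1.8)))  # C4
print(classify(note(392.00, 2.2)))  # G4
```

                    A real system would of course learn from recordings of many trumpets rather than synthetic tones, but the shape of the problem is the same.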

                    • lalalandsynth @d.healey

                      @d-healey Yes, optimize your website and work hard on the SEO.
                      That's the cheapest and most effective marketing you can get.
                      And yet us little guys seem to ignore it a lot of the time.

                      This is what I'm doing with my synth website these days; it's a lot of boring work, but worth doing.

                      https://lalalandaudio.com/

                      https://lalalandsynth.com/

                      https://www.facebook.com/lalalandsynth

                      • A Former User @A Former User

                        @iamlamprey said in Dynamic EQ Yet?:

                        I think ML might have some potential for innovation - I managed to turn a tree-smack into a taiko from a few minutes of training using Magenta

                        Did you use NSynth? Machine learning is a really interesting subject. I’ve been diving into it as well in recent weeks…

                        • A Former User @A Former User

                          @UrsBollhalder I used DDSP from Magenta:

                          GitHub - magenta/ddsp: DDSP: Differentiable Digital Signal Processing (github.com/magenta/ddsp)

                          Starting with training a model using the autoencoder (Google Colab notebook, colab.research.google.com)

                          Then bringing that into the timbre transfer demo (Google Colab notebook, colab.research.google.com)

                          The resynthesis tools in DDSP were exactly what I was looking for in terms of realistic synthesis of sounds - it uses a combination of additive synthesis, subtractive noise, and deconvolution, and it sounds pretty good
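                          That harmonic-plus-noise recipe can be sketched in a few lines of numpy. This is a hand-rolled illustration of the idea with fixed, made-up parameters; in actual DDSP the harmonic amplitudes and noise filter are predicted per frame by the network:

```python
# Hand-rolled harmonic-plus-noise sketch of the synthesis model DDSP is
# built around: additive sine harmonics plus filtered ("subtractive") noise.
# Fixed parameters here; real DDSP learns them per frame from a network.
import numpy as np

SR = 16000
N = 8000  # 0.5 s of audio
t = np.arange(N) / SR

def harmonics(f0, amps):
    """Additive synthesis: a bank of sine harmonics of f0."""
    return sum(a * np.sin(2 * np.pi * (k + 1) * f0 * t) for k, a in enumerate(amps))

def filtered_noise(cutoff_bin):
    """Subtractive noise: white noise low-passed by zeroing FFT bins."""
    noise = np.random.default_rng(1).standard_normal(N)
    spec = np.fft.rfft(noise)
    spec[cutoff_bin:] = 0
    return np.fft.irfft(spec, n=N)

audio = harmonics(220.0, [0.5, 0.25, 0.125]) + 0.05 * filtered_noise(800)
peak_hz = np.argmax(np.abs(np.fft.rfft(audio))) * SR / N
print(audio.shape, peak_hz)  # (8000,) with the spectral peak at 220.0 Hz
```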

                          The main issues I came across were the sample-rate limitation (16 kHz) and the fact that it only trains on mono signals. I'm not well-versed in Python, so I couldn't figure out how to fix those two things...

                          • A Former User @A Former User

                            @iamlamprey Yes. I've been going through those repos as well! 😂
                            How much time did it take you to compute those algos? I am not sure why they use the reduced sample rate etc. Probably to make it more accessible for web audio use! Upping the quality will probably just increase the time it'll take to compute things! Amazing stuff!!!

                            • A Former User @A Former User

                              @UrsBollhalder It was pretty fast. I used about 10 mins of samples and left training running overnight, so about 12-14 hours total for a relatively accurate resynthesis.

                              I think the 16 kHz sample rate is for the actual training; once you go up to 44.1 kHz or something higher like 96 kHz, training would take a lot longer...

                              Probably the same thing for mono vs stereo (half the channels = half the time)
