
DunedinDragon

Members · Posts: 3,524 · Days Won: 100
Everything posted by DunedinDragon

  1. The only true way to achieve the amp-in-the-room feel is to do the same thing that's done with an orchestra or ensemble: capture the performance with three different mic setups, near, wide and far, and blend them appropriately to recreate the room ambience. Fortunately there are convolution plugins like Spitfire Audio Essentials Reverb that can simulate the effect accurately, at least as it would happen in a world-class studio such as AIR Studios in London. Seems a bit obsessive to me for just an electric guitar.
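The near/wide/far blend described above can be sketched numerically: convolve the dry signal with an impulse response for each mic position and sum the results with blend weights. This is a toy illustration with made-up impulse responses and weights, not Spitfire's actual algorithm.

```python
# Toy sketch of a three-mic ambience blend. The impulse responses (IRs)
# and weights below are invented for illustration, not real room captures.

def convolve(signal, ir):
    """Naive FIR convolution: out[n] = sum_k signal[k] * ir[n - k]."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

def blend_room(dry, irs, weights):
    """Convolve the dry signal with each mic IR, then mix with weights."""
    convolved = [convolve(dry, ir) for ir in irs]
    length = max(len(c) for c in convolved)
    mixed = [0.0] * length
    for w, c in zip(weights, convolved):
        for n, v in enumerate(c):
            mixed[n] += w * v
    return mixed

dry  = [1.0, 0.5, 0.25]              # toy dry guitar signal
near = [1.0]                         # near mic: essentially direct sound
wide = [0.0, 0.6, 0.2]               # wide pair: early reflections
far  = [0.0, 0.0, 0.3, 0.2, 0.1]     # far mic: late room tail
room_mix = blend_room(dry, [near, wide, far], [0.7, 0.2, 0.1])
```

The blend weights play the role of the mixing the post describes: more "far" weight means a wetter, more distant room sound.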
  2. Would you support charging a fee for a native Linux-based HX Edit to offset the development costs of this FREE software? Or would you just prefer to spend other people's money for your personal benefit?
  3. That may be the case in non-professional, non-commercial deliveries, and I did refer to that in the second part of my statement. That's why I spent a good portion of my career teaching organizations like those how to organize teams and follow the necessary cycle and critical guidelines of iterative development and delivery, of which actual code work is about one third of the process. Organizations that don't follow those process and team elements generally have about an 80% chance of failure or rejection by end users. Sorry you seem to have been limited to working in one of those...
  4. I can only say I will be eternally grateful you were never a developer on any of my projects, given all your statements here. You're allowed your opinion, but certifying it with your claim of being a software developer, given your obvious lack of technical knowledge regarding real-time DSP programming and the process used in professional, team-based, versioned commercial delivery of products, only serves to embarrass you further...
  5. The MIDI required to change to a different preset and/or snapshot, as well as to enable different stomp buttons and such, is pretty well documented. A great source is the Helix Help web site, which has a section dedicated to the various MIDI actions available and the ways in which you use them. I will say, as someone who operates my Helix solely in conjunction with backing tracks, you probably want to separate the MIDI interaction from the playing of the backing track because of timing constraints. It takes a moment or two for the Helix to change presets and be ready to play, whereas starting a backing track is pretty much immediate. Your backing track player may provide a delay between the commands sent and starting the track, and if so that would probably work fine. In my case I use a dedicated MIDI footswitch controller where one switch sets up both the Helix and selects the backing track, and another footswitch starts the backing track. That way I know the Helix (and all other stage automation components) are ready before we start the song, and it's worked flawlessly for many years now.
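As a sketch of what those documented messages look like on the wire, here is the raw byte construction in Python. Per the Helix documentation, preset selection uses Bank Select (CC#0/CC#32) plus a Program Change, and snapshots use CC#69 with values 0-7; the channel, setlist and preset numbers below are examples only.

```python
# Build the raw MIDI bytes for a Helix preset + snapshot change.
# Channel/setlist/preset values are illustrative, not from any real rig.

def cc(channel, controller, value):
    """Control Change: status 0xB0 | channel, then controller and value."""
    return bytes([0xB0 | (channel & 0x0F), controller & 0x7F, value & 0x7F])

def program_change(channel, program):
    """Program Change: status 0xC0 | channel, then program number."""
    return bytes([0xC0 | (channel & 0x0F), program & 0x7F])

def helix_select(channel, setlist, preset, snapshot):
    """Bank MSB/LSB (setlist), preset via PC, then snapshot via CC#69."""
    return (cc(channel, 0, 0) +             # Bank Select MSB (0 on Helix)
            cc(channel, 32, setlist) +      # Bank Select LSB = setlist
            program_change(channel, preset) +
            cc(channel, 69, snapshot))      # CC#69 value 0-7 = snapshot 1-8

msg = helix_select(channel=0, setlist=2, preset=10, snapshot=1)
```

A dedicated controller like the one described in the post would send this whole sequence from a single footswitch press.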
  6. Absolutely, compared to other instrument plugins, which are MIDI based. Consider drum plugins: the entire world of drum rhythms is available to you, in addition to different types and sizes of drums for the kit you're using. The size, placement and mic positions on the drums are similar to what Helix Native does. Plug-in drum rhythms that simplify creating the beat for the track are what something like Native doesn't have. In fact, I can change the BPM without affecting any plugin tracks. That's where things like Native require much more attention to get the track recorded. I can make a finished acoustic guitar track in no more than 20 minutes with something like Native Instruments Session Guitarist libraries. I could always route that into Helix Native in the DAW, but it would be nice if it were all one thing. As far as basses go, I used to record those through my Helix with my Precision Bass, but anymore I just select the bass from any number of different bass plugins, or even upright bass or orchestral bowed bass, and play them on the keyboard. It's MUCH faster. As for effects, pretty much all DAWs come with plenty of options, and there's more out there in plugin land. I just purchased a convolution reverb plugin that lets you apply various mic'ing and environmental setups captured in some of the most famous locations in the world, like AIR Studios at Lyndhurst Hall in London, to any type of instrument.
  7. I think what limits the appeal of Helix Native, as well as many of its competitors, is that it's so guitar-centric, which limits its usefulness in today's plugin world. Even though I'm a guitar player, I predominantly use a keyboard as my input tool when recording. I don't think I'm unusual in that respect, given the number of people who have MIDI keyboards as part of their recording setup. When you compare the workflow in Native to the workflow in more keyboard-oriented guitar plugins, it's easily three times the effort to lay down a finished track in audio rather than MIDI. Plus, in those types of plugins you have access to a wide variety of strumming patterns and other prepared riffs. That significantly limits the appeal for people who predominantly use keyboards in their workflow.
  8. As you can tell from the above responses, since day one of the Helix there have always been ways to approximate the 'amp in the room' effect. The reason a lot of us don't really worry about it is that it never matters to our audiences, whether it's a live performance or a recorded one. We can get close enough with the tools in the Helix not to worry about it, and gain the simplified benefits it brings in both performance and recording. If it inspires your performance, then do it. But there is no magic formula to reproduce it, because the 'amp in the room' effect changes dramatically with the room, the position your amp is in and your position relative to the amp. That's just physics.
  9. Not everything needs to be a snapshot, and this sounds like a classic case. Just use a simple footswitch setup that lets you engage the boost momentarily.
  10. Personally I think most people overestimate the value of what generative AI can do when compared to actual human intellectual capabilities. Ultimately it depends on how well it's been trained in order to consider "out of the box" solutions to problems. Someone like Ben Adrian is unique in that regard so a system trained by Ben would likely exhibit more of those traits than one trained by a very strict "rules based" individual. I think we're quite a few years away from understanding how creativity works in humans before we'll be able to even approach considering how to design it into AI computers.
  11. First and foremost, the vast majority of my EQ comes from selecting and accurately configuring my cabinet, mic and mic position in the new Helix cabs. The only EQ I ever apply is a final EQ at the end of the signal chain if I need to take a little off the highest portion of the frequency spectrum, and that's usually just a slight high cut. EQ'ing too much, especially when done in isolation from the rest of the instruments in the band, buries the guitar and forces you to overuse volume to be heard in the mix. If you want a real eye-opener on how to appropriately EQ something, get yourself a copy of iZotope's Neutron 4 Elements and let it automatically adjust the EQ on all the individual instruments being used in your song. You'll be shocked at how each instrument can be heard within the arrangement even at a lower volume than some of the other instruments, creating a much more professional-sounding mix.
  12. I haven't been on a stage in years with a bass amp. If you have a Helix you already have several choices for a bass amp and speakers. All you need to do is plug it into a mixing board. That should save you all the room you need in your van. See how easy that was?
  13. This has nothing to do with the Helix and everything to do with the natural physics of the string vibrations produced with drop C tuning. Helix simply processes the inputs of what it receives. The recording sounds fine to me, but if you want it tighter the Helix can compensate for the tone change with a selective low cut. Personally I think you're obsessing over something not 10 people out of 100 would even notice.
  14. How exactly were you measuring your 0 dB output? Typically, to measure the analog output level you would have to measure it at the input of a mixing board channel once it's sent out of the 1/4" or XLR output, and even then it would depend on how the trim/gain knob is set on the board and whether the signal is being sent at line or mic level. Measuring it within the signal chain is merely measuring the digital representation of the output, because it only becomes analog once it goes through D/A conversion and is sent out of an analog output, whether that's going to a mixing board or to your Rolls personal monitor. If your Rolls is like most other powered speakers, the 12 o'clock position is unity gain; turning it up is like turning up the trim/gain knob on a mixing board.
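The digital/analog distinction above is why the in-unit meters read in dBFS (decibels relative to digital full scale) rather than any analog unit; the two only relate through the converter's calibration. The dBFS math itself is just:

```python
# dBFS: level of a linear sample amplitude relative to digital full scale.
import math

def dbfs(linear_amplitude):
    """Convert a linear amplitude (1.0 = digital full scale) to dBFS."""
    return 20.0 * math.log10(linear_amplitude)

full = dbfs(1.0)   # full scale reads 0 dBFS
half = dbfs(0.5)   # half amplitude is about -6 dBFS
```

What analog voltage 0 dBFS corresponds to depends entirely on the D/A stage and the downstream gain knobs, which is the post's point.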
  15. Yes...calmly go to his mixing board, identify the Trim/Gain knob at the top of your channel and point out which direction he should turn it to bring the signal level down. Tell him in the industry we all refer to that as "gain staging"......
  16. DSP isn't really the answer here, because DSP is all about performing the mathematical transformations used in modeling; all other functions are performed by normal built-in computational functions. I experienced the same speedup as @rd2rk in MIDI functions when I went from trying to send them from the Helix to sending them from a dedicated MIDI foot controller, in my case a Morningstar MC8. It makes sense, in that the modeling computations will always take priority on a Helix because that's its main job and must be performed in real time. Sending MIDI is an additional duty, unlike on a dedicated MIDI controller.
  17. It's been almost a decade since I used an HD after I switched to the Helix, but I don't remember having a problem like you described on it. Yes, there will be natural differences between amp models, but it only becomes a problem if you create signal chains that are unreasonably and unnecessarily loud, which causes inconsistencies between presets. The Helix provides an inline, real-time readout of your signal chain's output level when you select the output block of a preset. When I create a preset I tend to keep that level at around 50 to 60% to ensure ample headroom on my XLR output; generally my levels are only slightly more than the reading I get playing the preset with all my blocks turned off, in other words the natural output level of the Helix.

    Ultimately the physical output level is determined by the master volume knob on the Helix, where the signal gets converted from the digital signal used inside your signal chain to the analog signal that goes out to the rest of the world, whether to a physical amp or a mixing board. It's no different than if you had two physical amps and the louder one were set so high in volume that the quieter one couldn't match it. In my application I always go direct to the mixing board from the XLR output on the Helix, use the setting that disables the Helix volume knob, and send all signal levels at full volume. By controlling the digital levels in my presets I can easily gain stage everything coming into my mixing board with a single setting of the gain knob on the mixer. This sounds more complicated than it actually is, and there are literally hundreds of YouTube videos that demonstrate how to properly gain stage the Helix output.
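The preset-leveling idea above reduces to simple arithmetic: measure each preset's peak as a fraction of full scale, then compute the dB trim for the output block that brings it to a common target. The preset names and peak values below are made up for illustration.

```python
# Sketch of leveling presets to a common target peak (~50-60% of full
# scale, as described above). Preset peaks here are hypothetical.
import math

TARGET = 0.55  # target peak as a fraction of digital full scale

def trim_db(measured_peak, target=TARGET):
    """dB adjustment for the output block to bring the peak to target."""
    return 20.0 * math.log10(target / measured_peak)

presets = {"Clean": 0.40, "Crunch": 0.80, "Lead": 0.55}
trims = {name: round(trim_db(peak), 1) for name, peak in presets.items()}
# A quiet preset gets a positive trim, a hot one a negative trim,
# and one already at the target needs none.
```

Once every preset peaks at the same digital level, one setting of the mixer's gain knob stages them all, which is the single-knob workflow the post describes.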
  18. Well, you kind of need to make a choice: do you want the amp models you use modeled authentically or conveniently? Plug the same guitar into any two different amps at the same settings and there are going to be differences in tone and in volume. It's the same in a modeler if it's digitally representing the way an actual amp works. What differs on the Helix from the HD is that the models have several extra, efficient ways to address these differences. The easiest is changing the level up or down on the output block of the signal chain, which doesn't affect the tone, just the output level. The amps themselves can also be leveled without affecting the tone by adjusting the amp's channel volume.
  19. You're comparing two very different things, kind of like comparing a CPU to a GPU. DSP chips are designed for real-time mathematical transformations and nothing else. Apple silicon is an all-purpose Reduced Instruction Set Computing (RISC) chip with no such specialization. Believe me, if there were an advantage to it, someone in the modeling world would have latched onto it to gain an edge. But in a dedicated hardware unit it would be way out of its league.
  20. Ultimately this all comes down to the time it takes to unload the blocks in one preset and load the blocks in the next. It affects all modelers that use higher-definition DSP models. It doesn't affect the Kemper because it's technically a sampler/modeler and doesn't operate under the same rules. But everyone uses the same set of tricks if they're using higher-definition models. The only time this affects most people is in the middle of a song that needs a dramatic change, and that was the reason for snapshots. The delay is typically well under a second, so it doesn't usually affect people between songs when there's a natural break in the music. I know I never had a problem when I was using my HD system, but that was several DSP processor generations ago.
  21. Sorry to say, but IMO if you need that much EQ there's something more fundamentally wrong in your signal chain, such as the amp you've chosen, the guitar and settings you've chosen, the settings on the amp, or the cabinet and mic'ing solution you've chosen. EQs by their nature are a somewhat unnatural solution for fixing a fundamental signal chain flaw, and that's why they squeeze the natural sound out of the signal. Over the last 9 years of using my Helix in thousands of recordings and live performances, I rarely need anything more than a final finishing EQ with a couple of small tweaks. I can typically get closer to what I want just by moving the mic placement, or worst case trying a different mic.
  22. Since subs generally only produce sound below 125 Hz, if you're playing guitar there's precious little benefit for you. Bass, keyboards and drums tend to get the most benefit down at those frequencies. From the audience's perspective, they probably wouldn't even notice you were using a sub.
  23. Like you, I also use a number of Native Instruments and NI-compatible libraries in my DAW. Personally I found it MUCH easier to simply define a MIDI out track in my DAW (Ableton) which sends single MIDI automation requests to a Morningstar MC8 MIDI foot controller configured with the correct MIDI sequence I want sent to my Helix. For example, at the appropriate time in sync with the Ableton project, the MIDI out track might send a single outbound MIDI CC to the Morningstar to execute the MIDI sequence on Button 1, which then sends both the PC and CC required by the Helix to change to a specific preset. I simplify this by having pre-formatted MIDI stems for each MC8 footswitch I want executed, and just drop them into the MIDI out track at the appropriate point. So from my DAW I only see the small MIDI block and can position it wherever I wish. I have different banks of controls in the MC8 for each song project, and that's what allows me to automate the Helix in time with the music. I've been doing it this way for years of performances and it works flawlessly.
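The routing described above can be sketched as a small lookup: the DAW track only ever emits one tiny CC trigger per song section, and the MC8 expands it into the full Helix sequence. The section names, CC numbers and button assignments below are hypothetical, not Morningstar's actual factory mapping.

```python
# Hedged sketch of the DAW -> MC8 trigger scheme: one CC per song
# section "presses" an MC8 button; the MC8 holds the real Helix messages.
# All mappings here are invented for illustration.
SONG_SECTIONS = {
    "intro":  {"cc": 1, "value": 127},   # MC8 button 1: clean preset
    "chorus": {"cc": 2, "value": 127},   # MC8 button 2: drive snapshot
    "solo":   {"cc": 3, "value": 127},   # MC8 button 3: lead preset
}

def trigger_bytes(section, channel=0):
    """Raw MIDI CC bytes the DAW's MIDI out track emits for a section."""
    msg = SONG_SECTIONS[section]
    return bytes([0xB0 | (channel & 0x0F), msg["cc"], msg["value"]])

chorus_msg = trigger_bytes("chorus")
```

Keeping the Helix-specific sequences on the controller is what makes the DAW side just small, repositionable MIDI blocks, as the post notes.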
  24. That's probably reasonably close to what the first generation of something like this might look like. But the only way for it to have any real value would be to personalize the AI's training to how the user's system is organized (type of guitar, signal output setup such as amps or direct to PA, etc.) as well as each user's preferences for the type of sound desired, because those are so unique to each situation; they're typically based on an emotional response to the result, which AI is still struggling to figure out how to do. Who knows what the pricing model for such a thing might be.
  25. Use this link as your step-by-step guide for learning how to set up the Stomp: Helix Help. Make sure you scroll down to the section that deals specifically with the Stomp/Stomp XL.