Everything posted by DunedinDragon
-
I can only say I will be eternally grateful you were never a developer on any of my projects, given all your statements here. You're allowed your opinion, but certifying it with your claim of being a software developer, given your obvious lack of technical knowledge regarding real-time DSP programming and the process used in professional, team-based, versioned commercial product delivery, only serves to embarrass you further.....
-
The MIDI required to change to a different preset and/or snapshot, as well as to enable different stomp buttons and such, is pretty well documented. A great source for such things would be the Helix Help web site, as it has a section dedicated to the various MIDI actions available and the ways in which you use them. I will say, as someone who operates my Helix solely in conjunction with backing tracks, you probably want to separate the MIDI interaction from the playing of the backing track because of timing constraints. It takes a moment or two for the Helix to change presets and be ready to play, whereas starting a backing track is essentially immediate. Your backing track player may provide a delay between the commands sent and starting the track, and if so that would probably work fine. In my case I use a dedicated MIDI footswitch controller where one switch sets up both the Helix and selects the backing track, and another footswitch starts the backing track. That way I know the Helix (and all other stage automation components) are ready before we start the song, and it's worked flawlessly for many years now.
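If you drive this from software instead of a foot controller, here's a minimal sketch of the idea in Python using the mido library. The port name, setlist/preset numbers and the one-second pause are all hypothetical placeholders, not tested values:

```python
import time
import mido  # pip install mido python-rtmidi

# Hypothetical port name -- list yours with mido.get_output_names().
helix = mido.open_output('Helix MIDI 1')
CH = 0  # MIDI channel 1 (mido counts channels from 0)

def start_backing_track():
    # Placeholder: trigger your track player however it expects
    # (a clip-launch note, MMC, a hotkey, etc.).
    pass

# Recall a preset: a Bank Select (CC#32) plus a Program Change.
# The bank and program numbers here are illustrative only.
helix.send(mido.Message('control_change', channel=CH, control=32, value=0))
helix.send(mido.Message('program_change', channel=CH, program=12))

# Give the Helix a moment to load the preset before the track fires.
time.sleep(1.0)
start_backing_track()
```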
-
Absolutely, compared to other instrument plugins which are MIDI based. Consider drum plugins. The entire world of drum rhythms is available to you, in addition to different types and sizes of drums for the kit you're using. The size, placement and mic positions on the drums are similar to what Helix Native does. The built-in drum rhythms that simplify the creation of the beat for the track are what something like Native doesn't have. In fact I can change the BPM without affecting any plugin tracks. That's where things like Native require much more attention to get the track recorded. I can make a finished acoustic guitar track in no more than 20 minutes with something like Native Instruments' Session Guitarist libraries. I could always route that into Helix Native in the DAW, but it would be nice if it were all one thing. As far as basses go, I used to record those through my Helix with my Precision Bass, but anymore I just select the bass from any number of different bass plugins, or even upright bass or orchestral bowed bass, and do them on the keyboard. It's MUCH faster. As far as effects, pretty much all DAWs come with plenty of options and there's more out there in plugin land. I just purchased a convolution reverb plugin that allows you to take various mic'ing and environmental setups captured in some of the most famous locations in the world, like AIR Studios at Lyndhurst Hall in London, and apply them to any type of instrument.
-
I think what limits the appeal of Helix Native, as well as many of its competitors, is that it's so guitar centric, which limits its usefulness in today's plugin world. Even though I'm a guitar player, I predominantly use a keyboard as my input tool when recording. I don't think I'm unusual in that respect, given the number of people who have MIDI keyboards as part of their recording setup. When you compare the workflow in Native to the workflow in more keyboard-oriented guitar plugins, there's easily three times the effort involved in laying down a finished track in audio rather than MIDI. Plus, in those types of plugins you have access to a wide variety of strumming patterns and other prepared riffs. That significantly limits the appeal for people who predominantly use keyboards in their workflow.
-
As you can tell from the above responses, since day one of the Helix there have always been ways to approximate the 'amp in the room' effect. The reason a lot of us don't really worry or care about it is that it never matters to our audiences, whether it's a live performance or a recorded one. We can get close enough with the tools in the Helix not to worry about it, and we gain the simplified benefits it brings in both performance and recording. If it inspires your performance then do it. But there is no magic formula to reproduce it, because the 'amp in the room' effect changes dramatically with the room, the position your amp is in and your position relative to the amp. That's just physics.
-
Not everything needs to be a snapshot, and this sounds like a classic example. Just use a simple footswitch assignment that lets you engage the boost momentarily.
-
Personally I think most people overestimate the value of what generative AI can do when compared to actual human intellectual capabilities. Ultimately it depends on how well it's been trained to consider "out of the box" solutions to problems. Someone like Ben Adrian is unique in that regard, so a system trained by Ben would likely exhibit more of those traits than one trained by a very strict "rules based" individual. I think we're quite a few years away from understanding how creativity works in humans, let alone designing it into AI systems.
-
First and foremost, the vast majority of my EQ comes from selecting and accurately configuring my cabinet, mic and mic positions with the new Helix cabs. The only EQ I ever apply is a final EQ at the end of the signal chain, and only if I need to take a little off the highest portion of the frequency spectrum; that's usually just a slight high cut. EQ'ing too much, especially when done in isolation from the rest of the instruments in the band, buries the guitar and forces you to overuse volume to be heard in the mix. If you want a real eye opener on how to appropriately EQ something, get yourself a copy of iZotope's Neutron 4 Elements and allow it to professionally and automatically adjust the EQ on all the individual instruments in your song. You'll be shocked at how each instrument can be heard within the arrangement, even at a lower volume than some of the other instruments, creating a much more professional sounding mix.
-
I haven't been on a stage in years with a bass amp. If you have a Helix you already have several choices for a bass amp and speakers. All you need to do is plug it into a mixing board. That should save you all the room you need in your van. See how easy that was?
-
This has nothing to do with the Helix and everything to do with the natural physics of the string vibrations produced by drop C tuning. The Helix simply processes the input it receives. The recording sounds fine to me, but if you want it tighter the Helix can compensate for the tonal change with a selective low cut. Personally I think you're obsessing over something not 10 people out of 100 would even notice.
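For context on where such a low cut would sit (my arithmetic, not from the original post): the fundamental of the low string in drop C is C2, so a high-pass placed just below it trims rumble without touching the note itself. Using the standard equal-temperament formula with C2 as MIDI note 36:

$$f(n) = 440 \cdot 2^{(n-69)/12}, \qquad f(\mathrm{C2}) = 440 \cdot 2^{(36-69)/12} \approx 65.4\ \mathrm{Hz}$$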
-
How exactly were you measuring your 0dB output? Typically, to measure the analog output level you would have to measure it at the input of a mixing board channel once it's sent out of the 1/4" or XLR output, and even then it would depend on how the trim/gain knob is set on the mixing board and whether the signal is being sent at line or mic level. Measuring it within the signal chain merely measures the digital representation of the output, because it only becomes analog once it goes through the D/A conversion and is sent out of an analog output, whether that's going to a mixing board or to your Rolls personal monitor. If your Rolls is like most other powered speakers, the 12 o'clock position is unity gain. Turning that up is like turning up the trim/gain knob on a mixing board.
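For reference (my numbers, not from the original post): analog levels quoted in dBu are defined against a 0.775 V RMS reference, so nominal "line level" works out to roughly a volt and a quarter:

$$L_{\mathrm{dBu}} = 20\log_{10}\!\left(\frac{V_{\mathrm{RMS}}}{0.775\ \mathrm{V}}\right), \qquad +4\ \mathrm{dBu} \approx 1.23\ \mathrm{V\ RMS}$$

Mic-level signals sit tens of dB below that, which is why the mixer's trim/gain setting matters so much before you can call anything "0dB".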
-
Yes...calmly go to his mixing board, identify the Trim/Gain knob at the top of your channel and point out which direction he should turn it to bring the signal level down. Tell him in the industry we all refer to that as "gain staging"......
-
DSP isn't really the answer here, because DSP is all about performing the mathematical transformations used in the modeling process. All other functions are performed by normal built-in computational functions. I experienced the same speedup as @rd2rk in MIDI functions when I went from trying to send them from the Helix to sending them from a dedicated MIDI foot controller, in my case a Morningstar MC8. It makes sense in that the modeling computations will always take priority on a Helix because that's its main job and must be performed in real time. Sending MIDI is an additional duty, unlike on a dedicated MIDI controller.
-
It's been almost a decade since I used an HD after I switched to the Helix, but I don't remember having a problem like you described on it. Yes, there will be natural differences between amp models, but it only becomes a problem if you create signal chains that are unreasonably and unnecessarily loud, which causes inconsistencies between presets.

The Helix provides an inline, real-time representation of the signal chain's output level; simply select the output block of a preset. When I create a preset on the Helix I tend to keep that level at around 50 to 60% to ensure ample headroom on my XLR output. Generally my levels are only slightly more than the reading I get playing the preset with all my blocks turned off, in other words the natural output level of the Helix.

Ultimately the physical output level is determined by the master volume knob on the Helix, where the signal gets converted from the digital signal used inside your signal chain to the analog signal that goes out to the rest of the world, either to a physical amp or a mixing board. It's literally no different than if you had two physical amps and the louder one were set so high in volume that the quieter one couldn't match it.

In my application I always go direct to the mixing board from the XLR output on the Helix. I use the setting on the Helix that disables the volume knob, so all signal levels are sent at full volume. By controlling the digital levels in my presets I can easily gain stage them coming into the mixing board with a single setting of the gain knob on the mixer. This sounds more complicated than it actually is, and there are literally hundreds of YouTube videos that demonstrate how to properly gain stage the Helix output.
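To put a rough number on that 50-60% figure (my arithmetic, and it assumes the meter reads linearly in amplitude, which the post doesn't specify): a half-scale signal sits about 6 dB below full scale, so the practice leaves roughly 4-6 dB of headroom before clipping the output:

$$20\log_{10}(0.5) \approx -6.0\ \mathrm{dBFS}, \qquad 20\log_{10}(0.6) \approx -4.4\ \mathrm{dBFS}$$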
-
Well, you kind of need to make a choice: do you want the amp models you use modeled authentically or conveniently? Plug into any two different amps with the same guitar at the same settings and there are going to be differences in tone and in volume. It's the same in a modeler if it's digitally representing the way an actual amp works. What differs on the Helix from the HD is that the models have several extra, efficient ways to address these differences. The easiest is changing the level up or down on the output block of the signal chain, which doesn't affect the tone, just the output level. The amps themselves can also be leveled without affecting the tone by adjusting the amp's channel volume.
-
You're comparing two very different things, kind of like comparing a CPU to a GPU. DSP chips are designed for real-time mathematical transformations and nothing else. Apple silicon is an all-purpose Reduced Instruction Set Computing (RISC) chip without that specialization. Believe me, if there were an advantage to it, someone in the modeling world would have latched onto it by now. But in a dedicated hardware unit it would be way out of its league.
-
Ultimately this all comes down to the time it takes to unload the blocks in one preset and load the blocks of the next. It affects all modelers that use higher-definition DSP models. It doesn't affect the Kemper because it's technically a sampler/profiler and doesn't operate under the same rules. But everyone uses the same set of tricks if they're using higher-definition models. The only time this affects most people is in the middle of a song that needs a dramatic change, and that was the reason for snapshots. The delay is typically well under a second, so it doesn't usually affect people between songs when there's a natural break in the music. I know I never had a problem when I was using my HD system, but that was several DSP processor generations ago.
-
Sorry to say, but IMO if you need that much EQ there's something more fundamentally wrong in your signal chain, such as the amp you've chosen, the guitar and settings you've chosen, the settings on the amp, or the cabinet and mic'ing solution you've chosen, something along those lines. EQs are by their nature a somewhat unnatural fix for a fundamental signal chain flaw, and that's why they squeeze the natural sound out of the signal. Over the last 9 years of using my Helix in thousands of recordings and live performances I've rarely needed anything more than a final finishing EQ with a couple of small tweaks. I can typically get closer to what I want just by moving the mic placement, or worst case trying a different mic.
-
Like you, I also use a number of Native Instruments and NI-compatible libraries in my DAW. Personally I found it MUCH easier to simply define a MIDI Out track in my DAW (Ableton) which sends single MIDI automation requests to a Morningstar MC8 MIDI controller foot pedal, which is configured with the correct MIDI sequence I want sent to my Helix. For example, at the appropriate time in sync with the Ableton project, the MIDI Out track might send a single outbound MIDI CC to the Morningstar to execute the MIDI sequence on Button 1, which then sends both the PC and CC required by the Helix to change to a specific preset. I simplify this by keeping pre-formatted MIDI stems for each MC8 button I want executed and just dropping them into the MIDI Out track at the appropriate point. So from my DAW I only see the small MIDI block and can position it wherever I wish. I have different banks of controls in the MC8 for each song project, and that's what allows me to automate the Helix in time with the music. I've been doing it this way for years of performances and it works flawlessly.
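For anyone who wants to prototype that relay idea on a computer before programming the pedal, here's a minimal sketch in Python using the mido library. The port names, button CC numbers and Helix message values are all hypothetical stand-ins, and the dictionary is just emulating what an MC8 button sequence does internally:

```python
import mido  # pip install mido python-rtmidi

# Hypothetical port names -- substitute the ones on your system.
from_daw = mido.open_input('DAW MIDI Out')
to_helix = mido.open_output('Helix MIDI 1')
CH = 0  # MIDI channel 1 (mido counts from 0)

# What each "button" sends to the Helix when triggered: a Program
# Change to recall the preset, plus a CC#69 for the snapshot.
# Numbers here are illustrative, not MC8 factory defaults.
BUTTON_SEQUENCES = {
    1: [mido.Message('program_change', channel=CH, program=12),
        mido.Message('control_change', channel=CH, control=69, value=0)],
    2: [mido.Message('control_change', channel=CH, control=69, value=1)],
}

# Relay loop: one CC from the DAW's MIDI Out track fires the sequence.
for msg in from_daw:
    if msg.type == 'control_change' and msg.control in BUTTON_SEQUENCES:
        for out_msg in BUTTON_SEQUENCES[msg.control]:
            to_helix.send(out_msg)
```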
-
That's probably reasonably close to what the first generation of something like this might look like. But the only way for it to have real value to the user would be some personalization in the AI's training: how each user's system is organized (type of guitar, output setup such as amps or direct to PA, etc.) as well as personal preferences for the type of sound desired. Those are unique to each situation because they're typically based on an emotional response to the result, which AI is still struggling to figure out how to do. Who knows what the pricing model for such a thing might be.
-
Help! Need help connecting HX Stomp to Morningstar MC6 Pro
DunedinDragon replied to nickmorales17's topic in Helix
Use this link as your step-by-step guide for learning how to set up the Stomp: Helix Help. Make sure you scroll down to the section that deals specifically with the Stomp/Stomp XL. -
Output routing question for click track using Helix and Ableton Live 11
DunedinDragon replied to hoppmyers1's topic in Helix
A good resource for you might be Will Doggett. He's been doing YouTube videos for years on this subject, and he's really the person I spent a lot of time following when I began doing this stuff. Here's a link to a lot of his stuff: Will Doggett Link. There's also a specific link from Will Doggett for doing click tracks. -
Output routing question for click track using Helix and Ableton Live 11
DunedinDragon replied to hoppmyers1's topic in Helix
I use the same components as you (Helix and Ableton) in pretty much the same situation, though not nearly as complex or convoluted, and I've been doing it now for 4 years. My approach is pretty straightforward. In my case we have myself and two female vocalists. I play guitar as well as sing, so I use my Helix primarily for what it's designed to do: configure the sound of the guitar through presets and snapshots for each song, along with the changes that need to happen during the course of the song. This is all coordinated by a MIDI control track in Ableton which signals those changes through a Morningstar MC8 MIDI foot controller. I have no additional gear beyond the mixing board, which combines the backing track, the Helix output and the vocals for the performance.

There's really no need for a click track, because the drum track coming from Ableton is more than sufficient just using count-ins as you would with a real drummer. The real star of the show is my Native Instruments library of virtual instruments, which I use to construct the backing tracks in my studio and run on a laptop during the performance, all controlled and organized through my Morningstar MC8 controller, where I select the song and start it. Everything else happens automatically without any interaction from me, so I can just concentrate on the performance. This is basically modeled after the way Disney does its live shows in their theme parks and on their cruise ships.

In my case I construct all the instrument tracks in my studio using Native Instruments' various libraries for things like drums, bass, keys, additional guitars, banjo, mandolin, string sections, horn sections, etc. These are all high-quality instruments built from samples of the actual instruments, so they're indistinguishable from having actual performers playing them on stage. But I can design them to sound the way I want and play what I want them to play. In addition to the instrument tracks, I have one MIDI Out track in Ableton that signals the Morningstar MC8 when something needs to be changed during the course of the performance, and signals Ableton itself to stop at the end of the song.

Once I've built these tracks in Ableton's arrangement view, I convert all the musical output from my MIDI tracks to audio tracks, which are then loaded onto my laptop, run live in Ableton's session view and sent to the mixing board. So in essence those virtual musicians are handled in exactly the same way as if they were live performers. Here's an example of one of the backing tracks for the song "The Truth" by Megan Woods, which we just started doing a couple of weeks ago. The Truth -
My general approach to using my Morningstar MC8 for controlling presets, snapshots and stomps is to reserve two of the top buttons (G and H) for scrolling through presets by sending the specific PC and CC for the preset that's displayed at the lower left of the Helix screen. From the Morningstar's perspective that's two MIDI transactions (PC + CC).

Typically I just use CC messages to simulate the pressing of a footswitch, which are CC 49-58 (FS 1-11). That works for snapshots or footswitches, or you can use the dedicated snapshot CC# of 69 with a value of 0-7. There are a couple of tricks you can use with the Morningstar, such as defining a button as a toggle for footswitches that are otherwise only on while held. I sometimes send two footswitch commands on one MC8 button to toggle between two different effects (one on, one off simultaneously).

There's enough control in the MC controllers that I really don't press any foot controls on my Helix other than to tune or to use the volume pedal. In my case I automate all of my controls to the Helix, synchronized with the playback of my backing tracks in Ableton, which I control via the rest of the buttons on my MC8. I actually select the Ableton track using the G and H top buttons, and that coordinates the track with the appropriate Helix preset. I then press the Start button (Morningstar A), which starts the tracks playing, and a MIDI Out track notifies the MC8 which button to trigger at the appropriate time to make changes to the Helix. I can still execute those changes manually on the MC8, but I usually just leave it up to the MIDI control track in Ableton.
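For anyone wiring this up from scratch, here's a minimal sketch of those messages in Python with the mido library. The port name is a hypothetical placeholder, the on/off values follow the common 0/127 CC convention, and the CC numbers are the ones described above:

```python
import mido  # pip install mido python-rtmidi

helix = mido.open_output('Helix MIDI 1')  # hypothetical port name
CH = 0  # MIDI channel 1 (mido counts from 0)

# Snapshot recall: CC#69, value 0-7 selects snapshots 1-8.
helix.send(mido.Message('control_change', channel=CH, control=69, value=2))  # snapshot 3

# Footswitch emulation via CC 49-58. Two back-to-back CCs toggle
# two effects at once, one on and one off, as described above.
helix.send(mido.Message('control_change', channel=CH, control=50, value=127))  # FS on
helix.send(mido.Message('control_change', channel=CH, control=51, value=0))    # FS off
```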