Everything posted by DunedinDragon
-
This has nothing to do with the Helix and everything to do with the natural physics of the string vibrations produced by drop C tuning. The Helix simply processes the signal it receives. The recording sounds fine to me, but if you want it tighter the Helix can compensate for the tone change with a selective low cut. Personally I think you're obsessing over something not 10 people out of 100 would even notice.
-
How exactly were you measuring your 0 dB output? To measure the analog output level, you would typically have to measure it at the input of a mixing board channel after it leaves the 1/4" or XLR output, and even then it would depend on how the trim/gain knob on the board is set and whether the signal is being sent at line or mic level. Measuring within the signal chain only shows the digital representation of the output; it becomes analog only after the D/A conversion, when it leaves an analog output, whether that's going to a mixing board or to your Rolls personal monitor. If your Rolls is like most other powered speakers, the 12 o'clock position is unity gain. Turning it up past that is like turning up the trim/gain knob on a mixing board.
-
Yes...calmly go to his mixing board, identify the trim/gain knob at the top of your channel strip, and point out which direction he should turn it to bring the signal level down. Tell him that in the industry we all refer to that as "gain staging"......
-
DSP isn't really the answer here, because DSP is all about performing the mathematical transformations used in the modeling process. All other functions are performed by normal built-in computational functions. I experienced the same speedup as @rd2rk in MIDI functions when I went from trying to send them from the Helix to sending them from a dedicated MIDI foot controller, in my case a Morningstar MC8. It makes sense in that the modeling computations will always take priority on a Helix, because that's its main job and must be performed in real time. Sending MIDI is an additional duty, unlike on a dedicated MIDI controller.
-
It's been almost a decade since I used an HD, after I switched to the Helix, but I don't remember having a problem like you described on it. Yes, there will be natural differences between amp models, but it only becomes a problem if you create signal chains that are unreasonably and unnecessarily loud, which causes inconsistencies between presets. The Helix provides an inline, real-time readout of your signal chain's output level: just select the output block of a preset. When I create a preset on the Helix I tend to keep that level at around 50 to 60% to ensure ample headroom on my XLR output. Generally my levels are only slightly higher than the reading I get playing the preset with all my blocks turned off, in other words the natural output level of the Helix.

Ultimately the physical output level is determined by the master volume knob on the Helix, where the signal gets converted from the digital signal used inside your signal chain to the analog signal that goes out to the rest of the world, whether to a physical amp or a mixing board. It's literally no different than if you had two physical amps and the louder one were set so high in volume that the quieter one couldn't match it.

In my application I always go direct to the mixing board from the XLR output on the Helix. I use the setting on the Helix that disables the volume knob, so all signal levels are sent at full volume. By controlling the digital levels in my presets I can easily gain stage everything coming into my mixing board with a single setting of the gain knob on the mixer. This sounds more complicated than it actually is, and there are literally hundreds of YouTube videos that demonstrate how to properly gain stage the Helix output.
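For anyone curious what that 50-60% guideline buys you, here's a rough back-of-the-envelope sketch in Python. It assumes the meter reading can be treated as a linear amplitude fraction of full scale, which is an approximation on my part, not something Line 6 documents:

```python
import math

def headroom_db(level_fraction):
    """Headroom in dB below digital full scale for a linear amplitude fraction."""
    return -20.0 * math.log10(level_fraction)

# Running the output block around 50-60% of full scale leaves very roughly
# 4.4 to 6 dB of headroom before the digital signal would clip.
for frac in (0.5, 0.6):
    print(f"{frac:.0%} -> {headroom_db(frac):.1f} dB headroom")
```

That few dB of margin is what keeps a hot strum or a boosted block from slamming the output into clipping before the analog stage ever sees it.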
-
Well, you kind of need to make a choice: do you want the amp models you use modeled authentically or conveniently? Plug into any two different amps with the same guitar at the same settings and there are going to be differences in tone and in volume. It's the same in a modeler if it's digitally representing the way an actual amp works. What differs on the Helix from the HD is that the models have several extra, efficient ways to address these differences. The easiest is changing the level up or down on the output block of the signal chain, which doesn't affect the tone, just the output level. The amps themselves can also be leveled without affecting the tone by adjusting the amp's channel volume.
-
You're comparing two very different things, kind of like comparing a CPU to a GPU. DSP chips are designed for real-time mathematical transformations and nothing else. Apple silicon is a general-purpose Reduced Instruction Set Computing (RISC) chip with no such specialization. Believe me, if there were an advantage to it, someone in the modeling world would have latched onto it to gain an edge. But in a dedicated hardware unit it would be way out of its league.
-
Ultimately this all comes down to the time it takes to unload the blocks in one preset and load the next set of blocks in the next preset. It affects all modelers that use higher-definition DSP models. It doesn't affect the Kemper because it's technically a sampler/modeler and doesn't operate under the same rules, but everyone else uses the same set of tricks if they're using higher-definition models. The only time this affects most people is in the middle of a song that needs a dramatic change, and that was the reason for snapshots. The delay is typically well under a second, so it doesn't usually affect people between songs when there's a natural break in the music. I know I never had a problem when I was using my HD system, but that was several DSP processor generations ago.
-
Sorry to say, but IMO if you need that much EQ there's something more fundamentally wrong in your signal chain, such as the amp you've chosen, the guitar and its settings, the settings on the amp, or the cabinet and mic'ing solution you've chosen... something along those lines. EQs are by nature a somewhat unnatural fix for a fundamental signal chain flaw, and that's why they squeeze the natural sound out of the signal. Over the last 9 years of using my Helix in thousands of recordings and live performances, I've rarely needed anything more than a final finishing EQ with a couple of small tweaks. I can typically get closer to what I want just by moving the mic placement, or worst case trying a different mic.
-
Like you, I also use a number of Native Instruments and NI-compatible libraries in my DAW. Personally I found it MUCH easier to simply define a MIDI Out track in my DAW (Ableton) which sends single MIDI automation messages to a Morningstar MC8 MIDI controller foot pedal configured with the correct MIDI sequence I want sent to my Helix. For example, at the appropriate time in sync with the Ableton project, the MIDI Out track might send a single outbound MIDI CC to the Morningstar to execute the MIDI sequence on Button 1, which then sends both the PC and CC the Helix requires to change to a specific preset. I simplify this by having pre-formatted MIDI stems for each MC8 button I want executed and just dropping them into the MIDI Out track at the appropriate point. So from my DAW I only see the small MIDI block and can position it wherever I wish. I have different banks of controls in the MC8 for each song project, and that's what allows me to automate the Helix in time with the music. I've been doing it this way for years of performances and it works flawlessly.
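To make the relay idea concrete, here's a toy Python model of it: the DAW fires one CC at the MC8, and each MC8 button expands that into the full message sequence the Helix needs. All of the button, preset and snapshot numbers below are made-up placeholders for illustration, not actual mappings:

```python
# Each MC8 button holds a pre-programmed sequence of MIDI messages,
# represented here as simple tuples. The numbers are hypothetical.
MC8_BUTTONS = {
    1: [("program_change", 5),        # switch to a specific Helix preset
        ("control_change", 69, 0)],   # CC 69 value 0 = snapshot 1
    2: [("control_change", 69, 1)],   # snapshot 2 of the current preset
}

def daw_triggers_button(button):
    """What the MC8 forwards to the Helix when the DAW's single CC fires this button."""
    return MC8_BUTTONS.get(button, [])
```

The point of the indirection is that the Ableton MIDI stem stays trivial (one CC at the right bar), while the song-specific Helix messages live in the MC8's bank for that song.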
-
That's probably reasonably close to what the first generation of something like this might look like. But for it to have any real value to the user, it would require some personalization in the training of the AI: how the user's system is organized (type of guitar, signal output setup such as amps, direct to PA, etc.) as well as personal preferences for the type of sound desired. Those preferences are unique to each user because they're typically based on an emotional response to the result, which AI is still struggling to figure out. Who knows what the pricing model for such a thing might be.
-
Help! Need help connecting HX Stomp to Morningstar MC6 Pro
DunedinDragon replied to nickmorales17's topic in Helix
Use this link as your step-by-step guide for learning how to set up the Stomp: Helix Help. Make sure you scroll down to the section that deals specifically with the Stomp/Stomp XL -
Output routing question for click track using Helix and Ableton Live 11
DunedinDragon replied to hoppmyers1's topic in Helix
A good resource for you might be Will Doggett. He's been doing YouTube videos for years on this subject and was really the person I spent a lot of time following when I began doing this stuff. Here's a link to a lot of his content: Will Doggett Link. And a specific link from Will Doggett for doing click tracks. -
Output routing question for click track using Helix and Ableton Live 11
DunedinDragon replied to hoppmyers1's topic in Helix
I use the same components as you do (Helix and Ableton) in pretty much the same situation, though not nearly as complex or convoluted, and I've been doing it now for 4 years. My approach is pretty straightforward. In my case we have myself and two female vocalists. I play guitar as well as sing, so I use my Helix primarily for what it's designed to do: configure the sound of the guitar through presets and snapshots for each song, along with the changes that need to happen during the course of the song. This is all coordinated by a MIDI control track in Ableton, which signals those changes through a Morningstar MC8 MIDI foot controller. I have no additional gear beyond the mixing board, which combines the backing track, the Helix output and the vocals for the performance. There's really no need for a click track because the drum track coming from Ableton is more than sufficient, just using count-ins as you would with a real drummer.

The real star of the show is my Native Instruments library of virtual instruments, which I use to construct the backing tracks in my studio and run on a laptop during the performance, all controlled and organized through my Morningstar MC8 controller, where I select the song and start it. Everything else happens automatically without any interaction from me, so I can just concentrate on the performance. This is basically modeled after the way Disney does its live shows in their theme parks and on their cruise ships. In my case I construct all the instrument tracks in my studio using Native Instruments' various libraries for things like drums, bass, keys, additional guitars, banjo, mandolin, string sections, horn sections, etc. These are all high-quality instruments built from samples of the actual instruments, so they're indistinguishable from having actual performers playing them on stage. But I can design them to sound the way I want them to sound and play what I want them to play.
In addition to the instrument tracks, I have one MIDI Out track in Ableton that signals the Morningstar MC8 when something needs to change during the course of the performance and signals itself to stop at the end of the song. Once I've built these tracks in Ableton's arrangement view, I convert all the musical output from my MIDI tracks to audio tracks, which are then loaded onto my laptop, run live in Ableton's session view and sent to the mixing board. So in essence those virtual musicians are handled exactly as if they were live performers. Here's an example of one of the backing tracks for the song "The Truth" by Megan Woods, which we just started doing a couple of weeks ago. The Truth -
My general approach to using my Morningstar MC8 for controlling presets, snapshots and stomps is to reserve two of the top buttons (G and H) for scrolling through presets by sending the specific PC and CC for each preset, which the Helix displays at the lower left side of the screen. From the Morningstar's perspective that's two MIDI transactions (PC + CC). Typically I just use CC messages to simulate pressing a footswitch, which are CC 40-58 (FS 1-11). That works for snapshots or footswitches, or you can use the dedicated snapshot CC# 69 with a value of 0-7. There are a couple of tricks you can use with the Morningstar, such as defining a button as a toggle for footswitches that are otherwise only on while held. I sometimes send two footswitch commands on one MC8 button to toggle between two different effects (one on, one off simultaneously). There's enough control in the MC controllers that I really don't press any foot controls on my Helix other than to tune or to use the volume pedal.

In my case I automate all of my Helix controls, synchronized with the playback of my backing tracks in Ableton, via the rest of the buttons on my MC8. I select the Ableton track using the G and H top buttons, which coordinates the track with the appropriate Helix preset. I then press the Start button (Morningstar A), which starts the tracks playing, and a MIDI out track notifies the MC8 which button to trigger at the appropriate time to make changes to the Helix. I can still execute those changes manually on the MC8, but I usually just leave it up to the MIDI control track in Ableton.
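If anyone wants to see what those transactions look like on the wire, here's a minimal Python sketch that builds the raw bytes for a Program Change and for the snapshot CC mentioned above (CC 69 with value 0-7). The channel and preset numbers are placeholders; use whatever MIDI channel your Helix listens on:

```python
# Raw MIDI channel-voice messages. Channel is zero-based (0 = MIDI channel 1).

def program_change(program, channel=0):
    """Program Change: 0xCn status byte plus a 0-127 program number (switches presets)."""
    return bytes([0xC0 | channel, program & 0x7F])

def control_change(cc, value, channel=0):
    """Control Change: 0xBn status byte plus controller number and value."""
    return bytes([0xB0 | channel, cc & 0x7F, value & 0x7F])

# Hypothetical example: jump to preset 13 (0-based program 12), then snapshot 3.
preset_msg = program_change(12)       # -> b'\xc0\x0c'
snapshot_msg = control_change(69, 2)  # CC 69, value 2 = snapshot 3
```

A controller like the MC8 is doing exactly this under the hood when one button press fires a PC followed by a CC.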
-
The best thing about AI at this point is that simply using those two letters in your marketing campaign can potentially boost sales regardless of any difference it makes to the product's actual capabilities. There is clearly vast potential in AI, all of which has to run on massive, very expensive computers that won't fit in your house, much less in a modeling box. It's also highly suspect when any company touts AI without qualifying what type of AI, because they're all different. The only type that would really make sense in the modeling world is generative AI, which relies on the computer being taught how to use different components correctly within a signal chain to arrive at a specified sample sound. Of course the accuracy of the sample sound would be suspect without knowing what kind of guitar setup was used to produce it, and the sample itself could vary wildly depending on the settings of the guitar. All of which can only happen by submitting each sample to a remote AI computer to run its program and determine the outcome. My best guess is that what Spark may be referring to is using AI to design the individual modeled components from the real world that you can use in your modeler. But just imagine the level of bragging rights you'd have if you could tell people your modeler created the sound through AI, even if it wasn't done that way. Isn't marketing a great invention???!!!
-
Helix Input and Output levels research (almost solved)
DunedinDragon replied to zolko60's topic in Helix
Because that's just the standard that tends to work well for me in the digital realm of the Helix signal chain without causing any problems, and it stays consistent. The analog output level coming out of the Helix will be controlled at the mixing board, and that will be whatever works best. Of course, that's a very old post and I no longer go direct to a stage monitor. I just send my normalized analog output from the Helix to the mixing board, where the levels are gain staged and then routed to the appropriate aux mix for my stage monitor and for the front of house.
-
-
Quite honestly, that makes me suspicious of this sound guy's skills, unless he's doing a submix from the main auxes into separate wireless IEM transmitters. Hopefully he got you guys set up correctly, but this would only be for his convenience in sending a stereo signal. Quite frankly, I don't think he needs to encumber you with something he needs to complete his own duties. If anyone should have spare aux-out XLR cable splitters in his bag, it should be him, not you. From your perspective it should all be transparent: you do what you've always done, and he can deal with getting the sound out appropriately.
-
I don't see any possible advantage to sending two signals rather than one in your case. It only complicates the live mix at the soundboard for no practical purpose. No one in the audience is going to know or care whether you use 1 or 2 channels.
-
First, your two audio samples are both the same file, which is the AC15, but it's pretty much what I would expect from an AC15 with the settings you have on that amp. To my ear it's not terrible, and there is some clarity to the tones until you dig in hard or play bigger chords, but it's a bit muffled for my taste, lacking some high end (sparkle). That may be due more to using an SM57 rather than a condenser or ribbon mic, which is what I tend to use. Looking at your amp settings, you have a decent drive level but the Master level is pretty high, which forces amp-stage distortion. I personally rarely mess with the Master Vol parameter once I get the effect I want with the Drive; I just adjust the Ch Vol to get the signal level (volume) I want without affecting the tone the way Master Vol does. For what it's worth, I'm listening to the samples on my Yamaha HS7 studio monitors.
-
Recommend a pedal that would suit my setup?
DunedinDragon replied to joleneedmonson's topic in Helix
The Helix is a lot of things but it's not a replacement for a mixing board. Especially when you have two singers on separate mics that need to be gain staged and mixed individually along with a guitar. -
Thoughts/recommendations: Alway-on pedal in front of Helix for "feel"
DunedinDragon replied to benriddell's topic in Helix
Well, feel by definition is an emotional response. I "feel" I get everything I need just from the Helix alone. -
No. That just turns down the signal level going out. It doesn't bypass it.
-
I rarely use my Helix Floor footswitches, as I can get everything I need from my Morningstar MC8. I've had this unit since 2015, and I can't say it's been as problematic as yours, but some buttons have always needed attention to keep functioning well, and it's just as easy and more reliable to do everything over MIDI.