Everything posted by craiganderton
-
I have two favorite "alternate wah" approaches:
1. Parallel a parametric EQ with an out-of-phase path. Everything cancels except for the peak created by the parametric, which is the type of curve most wahs have. You can vary the parametric's Q and frequency range to emulate the response of just about any wah pedal, then add distortion to taste.
2. Sweep three hi-Q bandpass filters simultaneously. The filters are in parallel with an out-of-phase path, for the same reasons given above. They're offset so that if one is at (for example) 500 Hz, the others are at 1.5 kHz and 3 kHz. They maintain the same frequency relationship to each other as you move the pedal. (This is the basis for the pseudo-talk box preset in The Big Book of Helix Tips and Tricks - if you have the eBook, it's on page 198, and the "Talk Box.hlx" preset is in the Free Files folder.)
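For the DSP-curious, here's a minimal Python/scipy sketch of the cancellation idea (just an illustration, not a Helix preset): build a standard RBJ-cookbook peaking EQ, subtract a unity-gain dry path, and only the peak survives. The 800 Hz center, Q of 4, and 12 dB boost are arbitrary starting points:

```python
import numpy as np
from scipy.signal import freqz

def peaking_eq(f0, q, gain_db, fs):
    """RBJ audio-EQ-cookbook peaking filter coefficients."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

fs = 48000
b, a = peaking_eq(f0=800, q=4, gain_db=12, fs=fs)
w, h_eq = freqz(b, a, worN=4096, fs=fs)

# Out-of-phase dry path: subtract unity gain. Away from the peak the
# EQ path is ~unity, so everything cancels; only the wah-style
# bandpass hump around f0 remains.
h_wah = h_eq - 1.0
print(f"Response peaks near {w[np.argmax(np.abs(h_wah))]:.0f} Hz")
```

Sweeping f0 (the pedal position) moves the hump, just like rocking a wah.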
-
Check out this article I wrote about recording vocals at home. It covers a lot of topics other than EQ, but scroll down to section (5), which covers EQ specifically. This article is shorter, and focuses on how to make a vocal channel strip with EQ and compression. The examples are based on the PreSonus Fat Channel, but the settings translate to the Helix processors...3 dB of boost is 3 dB of boost, no matter what company made the filter :) FWIW I think the Helix processors are underrated for applications other than guitar. You can get some great effects for vocals, keyboards, and drums. Helix Native gets a lot of use with my computer's DAW for audio other than guitars. For voice, probably the biggest limitation compared to dedicated vocal processors is that the latter specialize in creating convincing harmonies.
-
FWIW I use several microphones with my audio interface, and the SM58 had to be turned up the most. I got a Cloudlifter CL-1 for a ribbon mic, but tried it out with the SM58 and it makes a big difference with that as well. The CL-1 is a phantom-powered FET preamp with 25 dB of gain, and it also reduces loading with dynamic mics. However, if you can get enough gain out of the Helix, the CL-1 is probably overkill, because it costs $150.
-
One way I save CPU power is with EQ-based cabs. Also, for some sounds, you can get away with using a Preamp block instead of an Amp block when you make a custom cab with EQ.
-
+1 on the comments about using the channel volume and being aware of the Fletcher-Munson curve. However, there's still the matter of coming up with a way to balance the levels to a consistent standard. Although you can only set levels properly at a gig while playing at live performance levels with a band, how long it takes to set that level properly matters. I want all my presets to have the same perceived volume level for two reasons:
1. In the studio, when switching among presets, you don't fall into "the louder one sounds better" syndrome, and you don't have to spend time tweaking levels to do comparisons.
2. For live performance, you have a baseline level. With a standardized level, hopefully each preset will require only a modest adjustment to be a little louder or softer, as needed.
As to how to balance the levels, here's an excerpt from The Big Book of Helix Tips and Tricks. It's one of the more advanced topics, but I hope it helps:

How to Level Presets
People have different attitudes about how (or even whether) to level presets. There are four complications when trying to set consistent levels:
1. A sound's perceived level can be different from its measured level. A brighter sound may measure as softer, but be perceived as louder because it has energy where our ears are most sensitive.
2. You want some sounds to have a louder perceived level than others.
3. With live performance, different presets will have different perceived levels depending on room acoustics, the size of the audience, and the music you're playing.
4. Presets that sound good in a home studio over monitors may not work well for live performance.
When using presets onstage, the only way to guarantee setting the right levels is to adjust them while playing live, in context. Regardless, having a consistent, baseline level speeds up the tweaking process. Here's an analogy. When adjusting a pickup's pole pieces, I screw them all in halfway. Then if a string needs to be louder or softer, I can adjust the pole pieces as needed. If they'd all been screwed out, I couldn't make them louder. If they'd been screwed all the way in, I couldn't make them softer. It's easier to tweak preset levels if they're already close to what you want. The following is intended for those who are familiar with recording, editing, or mastering audio.

A Partial Solution, Borrowed from Mastering Engineers
A measurement protocol called LUFS (Loudness Units relative to Full Scale) measures perceived loudness, not absolute loudness. The origin story (every superhero has an origin story, right?) is that the European Broadcast Union (EBU) had enough of mastering engineers making CDs as loud as possible in their quest to win "the loudness wars." LUFS measurements allow streaming services like YouTube, Spotify, Apple Music, and others to adjust the volume of various songs to the same perceived level. So, you don't have to change the volume for every song in a playlist - that Belgian hardcore techno cut from 1998 sounds like it's the same level as Billie Eilish. The system isn't perfect, but it's better than dealing with constant level variations. LUFS meters measure perceived levels. Some DAWs include LUFS meters. Third-party plug-in meters include Waves WLM and the free Youlean Loudness Meter. The goal with leveling presets is for their outputs to have the same LUFS reading. This is not a panacea! You will almost certainly need to tweak output levels for specific performance situations and musical material. Having a standard output level simply makes tweaking easier, because you've established a standard. Presets can be either louder or softer than the standard.

Setting the Output Level
If you don't care about whether the preset has the same perceived level as a dry guitar (you don't need to), adjust the output to whatever sounds right. However, comparing the processed sound to the bypassed sound is a useful baseline. Creating consistent levels is easiest to do with a computer and Helix Native. Next best is playing guitar as consistently as possible through Helix, into a computer with a DAW or plug-in host that can load an LUFS meter. Here's how to do it:
1. Set the Helix Native input and output levels to 0.0, and don't touch them. You'll make any needed input or output level adjustments in the preset itself.
2. Record a 15-30 second clip of bypassed guitar playing chords, without any major pauses, and another 15-30 second clip of single notes, also without major pauses. For bass presets, record some bass lines.
3. Insert an LUFS meter after Helix Native. If you expect to use a preset with chords, loop the chord clip at least twice with Helix bypassed, and check the LUFS reading. After enabling Helix, play the same loop at least twice, and adjust levels within the preset to hit the same target LUFS reading.
4. Use your ears to do any final output level edits, based on the musical context.
Tip: Most LUFS meters measure the instantaneous LUFS level as well as an average level over time. If you play the loop through a few times, the average level will settle to a final value, whether bypassed or through an effect. This is the reading you want to use, not the instantaneous one.
Note that this isn't an exact science. The object is for your presets to have a standard, baseline level. That way, when you get to the gig, massive tweaking probably won't be needed.
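If you'd rather script the LUFS comparison than watch a plug-in meter, here's a minimal sketch using the free pyloudnorm library (pip install pyloudnorm soundfile); the file names are hypothetical stand-ins for your own bypassed and processed clips:

```python
import soundfile as sf
import pyloudnorm as pyln

bypassed, rate = sf.read("bypassed_chords.wav")  # step 2's dry clip
processed, _ = sf.read("preset_chords.wav")      # same clip through the preset

meter = pyln.Meter(rate)  # ITU-R BS.1770 integrated loudness meter
lufs_dry = meter.integrated_loudness(bypassed)
lufs_wet = meter.integrated_loudness(processed)

# The difference is the trim to apply to the preset's output level so
# its perceived (not absolute) loudness matches the baseline.
print(f"Bypassed: {lufs_dry:.1f} LUFS   Preset: {lufs_wet:.1f} LUFS")
print(f"Adjust preset output by {lufs_dry - lufs_wet:+.1f} dB")
```

The integrated (averaged-over-time) figure it reports corresponds to the settled meter reading mentioned in the Tip above.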
-
To me, acoustic and electric guitars might as well be different instruments. I play differently on them, I write different types of songs on them, and the feel & vibe are different. Emulating the sound for a gig is one thing...but in the studio, in the way I think about them, they're as different as pianos and synths. One person's opinion, of course. But even electric guitars feel like they have different personalities. What I play when I pick up a Variax or PRS or Tele or whatever varies. It's not about the sound so much as the feel. For example, the neck scale makes a difference in playing and tone. I love 'em all :)
-
If you're talking about sounds obtained in the studio, perhaps they're not using real-time pitch shifting. For example, Waves has a pitch-shifting plug-in that sounds really good, but with 150 ms or so of latency, it's intended only for offline processing while mixing. Some DAWs have a real-time pitch option for "proofing" a song, but then you need to do an offline, non-real-time bounce to get the best possible fidelity.
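Offline shifting is easy to experiment with in code, which also shows why it can sound better: with no real-time deadline, the algorithm can analyze as much of the file as it wants. A minimal sketch with the librosa library (file names hypothetical):

```python
import librosa
import soundfile as sf

# Load a take, shift it up a perfect fourth offline, and render it.
# No latency budget applies, so the analysis can look ahead freely.
y, sr = librosa.load("guitar_take.wav", sr=None)
shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=5)
sf.write("guitar_up4.wav", shifted, sr)
```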
-
One more thing as you develop patches for live use - mono only, and remember Fletcher-Munson. When playing loud, bass and treble will be perceived as louder. This means mud and harshness, so when listening at reasonable levels in the studio, be shy about the bass and treble. They’ll get turned up anyway.
-
Not quite sure I understand the question, but if you disconnect the power source (like a barrier strip), then you don't need to turn off the HX Stomp switch. I plug the HX Stomp AC adapter into a barrier strip, and leave the HX Stomp power switch on. Then, turning off the barrier strip turns off power to the transformer and to the unit itself. It's considered good practice not to leave AC adapters plugged in when they're not powering anything. Hope this answers your question...
-
The input level tip is a good one. Also try different cabs and mics. Some of the mics have a much more pronounced bass response, some are thinner. Moving the mic closer to the virtual speaker increases bass as well, in most cases. I often roll off the highs somewhat before feeding an amp. Distorting high frequencies creates even higher frequencies, which may account for some of the fizz and thinness. Bassbene is 100% right about the modelling world in general not being plug-and-play. It's more like trying to get a good guitar sound in the studio, as opposed to just plugging into an amp and rockin'. But the sounds you want are in there somewhere! You just have to find them.
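You can see the "distortion creates even higher frequencies" effect with a quick numpy experiment: hard-clip a 5 kHz sine, and new components appear at 15 kHz and 25 kHz (the sample rate and drive level here are arbitrary):

```python
import numpy as np

fs, f0 = 96000, 5000
t = np.arange(fs) / fs                  # exactly 1 second of audio
sine = np.sin(2 * np.pi * f0 * t)
clipped = np.clip(3 * sine, -1, 1)      # crude symmetrical distortion

spectrum = 2 * np.abs(np.fft.rfft(clipped)) / len(clipped)
freqs = np.fft.rfftfreq(len(clipped), 1 / fs)
for f in (5000, 15000, 25000):          # fundamental plus odd harmonics
    level = 20 * np.log10(spectrum[freqs == f][0])
    print(f"{f:5d} Hz: {level:6.1f} dBFS")
```

Rolling off the highs before the amp block means less energy up there to multiply into fizz.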
-
Why has guitar innovation lagged behind everything else?
craiganderton replied to ichasedx's topic in Helix
Actually, even with 5-pin DIN, MIDI's speed isn't the issue - it's the time required to analyze a string's vibrations and convert them to MIDI data. This is why the low E sounds so much more delayed than playing high up on the neck. Higher-speed transports, like USB, can't solve that particular bottleneck. As to why the Seaboard feels expressive, there are several reasons:
- The sensors go beyond on-off switches, and respond to pressure and movement.
- They don't have to convert a frequency to MIDI data (which is what bogs down MIDI guitar), only controller motion.
- It's USB-only, which is essential when you're trying to stuff huge amounts of polyphonic data into a computer.
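The arithmetic behind that bottleneck is easy to check: a pitch tracker needs at least one full cycle of the string (usually more) before it can even guess the pitch. A quick Python sketch, using standard-tuning open-string frequencies:

```python
# Rough per-string detection floor: one cycle of the open string.
# For context, a 3-byte message on 5-pin DIN MIDI takes about 1 ms,
# so transmission isn't the slow part - the analysis is.
open_strings = {"low E": 82.41, "A": 110.00, "D": 146.83,
                "G": 196.00, "B": 246.94, "high E": 329.63}
for name, hz in open_strings.items():
    print(f"{name:6s} {hz:7.2f} Hz -> one cycle = {1000 / hz:5.2f} ms")
# Low E needs ~12 ms just for one cycle; notes high up the neck
# need only a few ms, which is why they feel so much faster.
```
-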
It's definitely easier to shove MIDI data down a cable compared to audio. The signal levels are much higher, and crosstalk isn't as much of an issue because dynamics info is embedded in the data, rather than being an amplitude change, as with audio. Now, if MIDI really could reproduce all the nuances of playing guitar, and there was a guitar synth at the other end that could translate those nuances to sound, then it would be easier to process the synth sound than the raw audio. Ultimately, I do think the end result will be a controller that plays like a guitar, feels mostly like a guitar, and can translate nuances to data, but is not a retrofitted version of a conventional guitar. From a physics standpoint, conventional guitars just weren't designed for this kind of application. The Jammy-E guitar's technology is very promising, and it was a finalist in the 2021 MIDI Innovation Awards. Sadly, the company making it is (was) located in Ukraine, and I don't have to tell you the rest of that story :( However, the founder has been in contact with The MIDI Association, whose members are trying to help. They hope to find a way to keep the technology alive. -
Re hex pickups, being able to process individual strings is really cool...in theory. At one point Brian Hardgroove from Public Enemy and I had a band (EV2) that was just him on drums and me on the Gibson HD.6X Pro hex guitar. I had octave dividers on the lower strings so I could get bass along with chords, and standard magnetic pickups for doing leads. So it sounded like a lot more than two people. (Eddie Kramer called us "The Black and White Stripes," which I thought was kind of funny - there's a one-minute EV2 clip on YouTube, although it was from a fan's camcorder so you can't really hear the guitar's bass.) The guitar didn't sell well, although some subsequent Gibson high-tech guitars that sold reasonably well, like the Dark Fire, had hex pickups. However, there are several problems inherent in hex pickups:
1. Crosstalk. It's very difficult to get decent isolation between strings and maintain audio fidelity. So if you're distorting a string, you're still getting intermodulation distortion from adjacent strings. The high and low E have less crosstalk, so they sound "different" than the four middle strings.
2. Dealing with all those outputs. You can't run six audio lines out of a guitar without going insane. Gibson used a modified Ethernet-type connection (similar to the Variax), so the cable itself was svelte. The strings were multiplexed digitally. But this brings us to the next problem...
3. Where do the outputs go? The HD.6X Pro cable terminated in a breakout box with 6 audio outputs. With EV2, these went into an audio interface with 6 audio inputs. All the processing had to be done in the computer, and I used six instances of AmpliTube. But changing presets and such for six channels was a real hassle, and with the laptops of those days, six amp sims just about brought the computer to its knees. I depended on running the magnetic pickups through a multieffects to create different sounds.
Gibson's later polyphonic guitars multiplexed the audio down a standard stereo 1/4" cable that terminated in a FireWire interface. That eliminated the breakout box and all the audio connections, but...the interface was FireWire (which is pretty much dead at this point), and had a fixed sample rate of 48 kHz. So if you were into using 44.1 kHz, you had to do a sample rate conversion. Then there was the issue of not being able to aggregate it with other interfaces on Windows, and the last driver IIRC was for Vista. The hassles involved in poly guitar are what led me to using multiband processing, which avoided a lot of the problems and still sounds pretty cool (there's a rough sketch of the idea below). I would assume someone with schematics and a sense of adventure could create a box that would create six separate outputs from a Variax, but again, now that you have the six outputs, you have to process them...so you'd need a Helix with six independent parallel paths, or a way to get those outputs into a computer.
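For anyone curious about the multiband approach, here's a minimal scipy sketch: split a mono guitar signal into bands with crossover filters, then process each band separately (e.g., octave-divide the lows, distort the mids). The crossover points are arbitrary assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def split_bands(x, fs, lo=200.0, hi=1200.0):
    """Split a signal into low/mid/high bands with Butterworth crossovers."""
    low = sosfilt(butter(4, lo, "lowpass", fs=fs, output="sos"), x)
    mid = sosfilt(butter(4, [lo, hi], "bandpass", fs=fs, output="sos"), x)
    high = sosfilt(butter(4, hi, "highpass", fs=fs, output="sos"), x)
    return low, mid, high

fs = 48000
x = np.random.randn(fs)  # stand-in for a guitar signal
low, mid, high = split_bands(x, fs)
# Each band can now get its own processing before summing back together.
```
-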
A good analogy is cars. These days, you turn the key, press the accelerator, it starts, and then where it goes is up to you. But you no longer have to get out your feeler gauge, adjust the points, set the valve clearance, rotate the carburetor to optimize the timing, check the oil, and then decide where you want to go. -
Once I was consulting for Sony, and they asked me to explain how to transfer a finished file from Acid to their MP3 player using Windows, so they could publish it online. They said "you can do it in 3 steps, it will take only a few minutes to write it up." I submitted a document with 20+ steps. They were really upset. "We said to keep it short!!" So I said, "Tell me what steps I can take out, and I'll revise it. No charge." Of course, every step was required. As soon as you get a computer involved - yes, Mac or Windows - matters get complicated. I agree all this stuff isn't rocket science, and with a little time spent learning how the MIDI protocol works, it becomes clear. But matters could be simplified considerably. This is something where AI may be invaluable. -
MIDI's framework is almost 40 years old. Fortunately, it was designed for a long life, which is why people haven't had to replace their MIDI 1.0 gear, and their synthesizer from over three decades ago can talk to a 2022-model DAW. In the world of computer standards, that's nothing short of miraculous. The early speed limitations no longer exist for devices that use MIDI over USB (or other computer protocols) instead of the original 5-pin DIN connectors. And you can even get DIN <> USB adapters.

The MIDI 2.0 spec has been approved, but it will take a while before gear rolls out that implements its features. Apple includes MIDI 2.0 support in their current operating system, which is great, because people can code things that can talk to it. I'm sure Microsoft isn't too far behind.

Much of MIDI 2.0 is about making gear easier to use. MIDI 1.0 was a monologue - instruments could either talk or receive. MIDI 2.0 is a dialogue, so gear can query other gear about its capabilities, and configure itself accordingly. That is a HUGE difference. For example, if you plug a MIDI 2.0 guitar into a MIDI 2.0 tone module, the tone module would recognize a MIDI device, inquire about its capabilities, find out it was a guitar (maybe even one from a specific manufacturer with specific features), and configure itself so that each string went to its own voice, pitch bend was assigned to individual strings, legato mode was initiated, etc. - sort of "MIDI guitar's greatest hits." Of course, there would be options if you wanted to dig deeper. A control surface could control a mixer once it knew it was controlling a mixer, or control a virtual instrument once it knew it was controlling a virtual instrument. Or, a computer could ask a synthesizer what its parameters are, and display them onscreen. This would eliminate the need for editor/librarian software.

When will this happen? I don't know. There are a lot of moving parts that all need to talk to each other. I've seen prototypes of some gear with MIDI 2.0 functionality. Probably what will happen first is MIDI 2.0 features being retrofitted to MIDI 1.0 gear as firmware updates.

Also remember that MIDI 2.0 is completely backward-compatible with MIDI 1.0. Before someone chimes in with "yeah, sure, I've heard that before," here's why. MIDI is a language, and MIDI 2.0 simply increases its vocabulary. The first thing a MIDI 2.0 piece of gear does is ask connected gear whether it's MIDI 2.0 or not. If yes, it speaks MIDI 2.0. If not, it falls back to speaking MIDI 1.0.

Anyway, it's also possible the "guitar of the future" might not be a guitar per se, but due to advances in sensors and materials, would feel and respond like a guitar, and use the same muscle memory so you wouldn't have to learn new techniques if you didn't want to. But it could track better, never go out of tune, and who knows...it might be able to do something like respond to how hard you press on a "string" after fretting it.
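To make the fallback handshake concrete, here's a toy Python sketch. This is not the real MIDI-CI API - just an illustration of the "ask first, then speak the common language" logic:

```python
class MidiDevice:
    """Toy model of a device that may or may not speak MIDI 2.0."""
    def __init__(self, name, speaks_midi2):
        self.name = name
        self.speaks_midi2 = speaks_midi2

    def negotiate(self, other):
        # A 2.0 device asks what the other end speaks before talking.
        if other.speaks_midi2:
            return "MIDI 2.0"  # richer dialogue: capability inquiry, etc.
        return "MIDI 1.0"      # fall back to the original vocabulary

guitar = MidiDevice("MIDI 2.0 guitar", speaks_midi2=True)
old_module = MidiDevice("1989 tone module", speaks_midi2=False)
print(guitar.negotiate(old_module))  # -> MIDI 1.0
```
-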
Helix HX has a ton of distortion effects. You can dial one of those in to be more amp-like, then use the parametric EQ to shape a cabinet-like response (roll off highs, upper mid boost, etc.). The sound won't be exactly like any particular amp, but for rehearsals and even for performance, it will do the job. I even have some presets like that in my Helix floor for some alternative amp sounds.
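Here's a rough scipy sketch of what an EQ-shaped, cabinet-like response looks like: a steep high roll-off, a low cut, and room on top for an upper-mid parametric boost. The corner frequencies are assumptions to tweak by ear, not a model of any particular cab:

```python
import numpy as np
from scipy.signal import butter, sosfreqz

fs = 48000
highcut = butter(4, 5000, "lowpass", fs=fs, output="sos")  # speaker-like roll-off
lowcut = butter(2, 90, "highpass", fs=fs, output="sos")    # tighten the lows
sos = np.vstack([highcut, lowcut])                         # cascade both filters

w, h = sosfreqz(sos, worN=4096, fs=fs)
for f in (100, 1000, 3000, 6000, 10000):
    idx = np.argmin(np.abs(w - f))
    print(f"{f:5d} Hz: {20 * np.log10(abs(h[idx])):6.1f} dB")
# Add a parametric bump around 1.5-2 kHz for the upper-mid boost.
```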
-
FYI - Article that Explains Bias, Bias X, Sag, Hum, and Ripple
craiganderton posted a topic in Helix
The article is called Understanding Helix Amp Parameters, and it's posted on inSync. The article also explains ways to hear how changing parameters changes the sound - for example, how to use the 4 OSC Generator to hear how changing Bias affects the distortion characteristics. Another part tells how to hear sag in isolation so you can compare the sag of different amps easily. I hope you find the info useful! -
If you have a registered Helix (any flavor), Helix Native is currently on sale for $69.99 from Line 6 or Sweetwater.
-
First you need to have Cakewalk recognize your interface. Choose Edit > Preferences. Under Audio, Playback and Recording, choose ASIO as the driver mode. Then go to Audio, Devices and check the inputs and outputs you want to use. If you're not sure, just check anything that's not grayed out. Audio, Driver Settings has several useful options, like setting the Sample Rate and latency amount. You should also see a button that says ASIO Panel. Clicking this will call up your interface's own application for adjusting its settings. Now that Cakewalk hopefully knows how to talk to your interface, create a track. From the browser, drag Helix Native into the track's FX bin (or click on the FX bin's + sign and navigate to it in your list of plug-ins). Then, set the track input to the input you're using on your audio interface for your guitar. Finally, enable the track's Input Echo button, which turns on input monitoring so you hear your guitar through Helix Native, instead of just hearing the dry guitar at your interface input. Hope this helps...!
-
What are the blocks that follow the Double Take? If you remove them, does the problem go away?
-
First of all, I'm glad you find the book useful! Second, if you want to contribute any of your tips for a future revision, drop me an email at the support email given in the book. Of course, you'll get credit for anything that's used. I have to say that the suggestions people have sent me are very useful. I incorporated a lot of them in v1.1 and am keeping track of ones for v1.2. I really like the collaborative feel of doing a book this way.
-
Hmmm...now you've inspired me to want to try the same thing with my Rickenbacker's Rick-O-Sound outputs. Cool!
-
Another advantage of PDFs is that you can annotate them with the equivalent of "sticky" notes. For example, if you search on "Marshall," try them all out, and particularly like a couple of them, you can save them as favorites and also take notes on the manual, like "best for leads with Strat" or "best for rhythm guitar with PRS" or whatever. And while not as important with Helix's compact interface, with something like a DAW screenshot, you can expand the image to the original resolution and see all the tiny text clearly.