DunedinDragon last won the day on January 22


About DunedinDragon

Profile Information

  • Gender
    Not Telling
  • Location
    Dunedin, FL
  • Interests
    Gear: Helix, Yamaha DXR12, Les Paul Standard, American Strat with Lace Sensor pickups, Gretsch Silver Falcon, Epiphone Sheraton II Pro

Community Answers

  1. I think we'd need to know how you're trying to apply this: whether it's through a PA, or you're envisioning going direct from your Helix. Typically a sub is connected to the outputs of the PA and provides low-frequency emphasis across all channels for frequencies below a certain point, such as 125 Hz. It then serves as a crossover to the main PA speakers for all frequencies above that. Unless you're playing bass, you probably produce very little content that low. Going direct to the sub from the Helix wouldn't be any different, other than you'd want to send it a line-level signal and have the sub do the same thing a PA mixing board would: forward all frequencies above the crossover point to whatever other device you're using for the full range. Again, unless it's a bass guitar you won't get much of anything from a typical guitar setup. Sound engineers often put a high-pass filter set to 125 or 150 Hz on every channel other than bass, keyboards, kick drum, and maybe toms, so that guitars, snares, or vocals trying to accentuate those low frequencies don't muddy up the sub. I did notice you mentioned plugins, so if you're sending from a DAW that includes other instruments such as a synth, you'd want to follow the same conventions as a mixing board: send to the sub, then have the sub feed the main speakers.
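To make the high-pass idea concrete, here's a minimal sketch of a first-order high-pass filter in Python. It's purely illustrative: real mixer and crossover filters are steeper (typically 12-24 dB/octave), and the 125 Hz cutoff and 48 kHz sample rate are just example values.

```python
import math

def one_pole_highpass(samples, cutoff_hz, sample_rate=48000):
    """First-order high-pass filter, e.g. to keep sub-bass content
    (below ~125 Hz) out of a non-bass channel.  Illustrative only:
    a real crossover uses much steeper filter slopes."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out, prev_in, prev_out = [], 0.0, 0.0
    for x in samples:
        # Standard discrete RC high-pass recurrence.
        y = alpha * (prev_out + x - prev_in)
        out.append(y)
        prev_in, prev_out = x, y
    return out
```

Feeding it a constant (DC) signal decays toward zero, while content well above the cutoff passes nearly unchanged, which is exactly the behavior a sub's crossover relies on in reverse.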
  2. I'm thinking he might be referring to the Hi/Low EQ block, which limits the high cut parameter to no lower than 1 kHz and the low cut to no higher than 1 kHz...for obvious reasons. Personally I can't even begin to imagine where one would need a high cut lower than 1 kHz. But who knows? Maybe someone has a use for a guitar tone that sounds like it's been wrapped in a soggy blanket? At any rate, if you think this would be useful to a lot of people, you can submit it on Helix IdeaScale and see if you can garner enough votes to compel L6 to do such a thing.
  3. THIS is an actual AC30 Fawn which you are modeling. Notice the speaker cabinet is an integral part of the unit. Why would there be a question on this?
  4. This is kind of an odd statement, since impedance is very different from signal level, which is really what you need to have working optimally. The key is to have your amp model's channel volume or output block producing the same, or close to the same, signal level with the other blocks in your signal chain on or off. In essence you want to avoid having your signal level creep higher as you add blocks. You also want to keep the level in a moderate range to allow for some headroom; using the built-in signal level meter I usually shoot for around 80%. As others have stated, you can't compare two different styles of output and expect them to be exactly the same, because they're not engineered to be the same. Another key thing to be aware of is which cab model and mic model you're using with the amp, because those really matter when it comes to the ultimate tone you get. The default setting for the Vox amp may not be the best for the tone you want. My primary advice to someone in your position is to go onto YouTube and look up Jason Sadites' postings. He's the best at helping new people get the most out of their units and understand how to maximize their presets.
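On the headroom point, the relationship between a linear peak level and dB below clipping is easy to compute. This sketch assumes the meter reads linear amplitude with 1.0 (100%) as clipping, which is an assumption on my part; the Helix meter's exact scaling isn't specified here.

```python
import math

def headroom_db(peak_amplitude):
    """Headroom in dB below full scale (0 dBFS), where a linear peak
    of 1.0 means clipping.  Assumes a linear-amplitude meter."""
    return -20 * math.log10(peak_amplitude)

# Peaking around 80% of full scale leaves roughly 1.9 dB of headroom;
# peaking at 50% leaves about 6 dB.
```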
  5. Honestly I couldn't tell all that much difference... at least not enough that I'd spend more than a few minutes adjusting things. The first one was the most overmodulated, the second was the most consistent and smooth, and the third was pretty close to the second but not quite as polished. But then, I'm not really a dedicated cork sniffer when it comes to tone in isolation; I do most of my final adjustments in the context of the other instruments I'll be playing with. For reference, I was listening through my Yamaha HS7 studio monitors, and I took some dB readings on all three, which bore out my initial perceptions. The first was VERY uneven, ranging from maybe 86 to 98 dB or so; the second was confined within 85 to 89 dB; and the final one covered a similar range until the end, when it went into the 90-94 dB range.
  6. The way I do this is to control everything from the DAW (Ableton Live) by having a MIDI Out track in each song which sends the appropriate MIDI sequences to the Helix, synchronized with the playing of the track in Ableton. In my case the Helix is just a passive MIDI device controlled entirely by the live track. To simplify things I use a Morningstar MC8 as the coordinator of all MIDI actions. Each bank in the MC8 equates to a song, so when I select the Next Song footswitch on the MC8 it sends the appropriate MIDI sequence to set up the Helix preset, and when I hit the Play footswitch on the MC8 it launches the appropriate track in my DAW. From that point on the DAW sends all further automation actions to the MC8 as a simple single footswitch action through the course of the song. That allows me to have multiple complex MIDI actions within the MC8 controlled by one single MIDI Out action from the DAW.
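For anyone wiring up something similar, the messages involved are just short byte sequences. Here's a minimal sketch of building raw MIDI Program Change and Control Change messages in Python; the channel, program, and controller numbers below are placeholders, so check the Helix MIDI reference for its actual assignments.

```python
def program_change(channel, program):
    """Program Change: status byte 0xC0 | channel (0-15), then program (0-127)."""
    return bytes([0xC0 | (channel & 0x0F), program & 0x7F])

def control_change(channel, controller, value):
    """Control Change: status byte 0xB0 | channel, then controller and value (0-127)."""
    return bytes([0xB0 | (channel & 0x0F), controller & 0x7F, value & 0x7F])

# e.g. a preset recall followed by a CC, both on MIDI channel 1 (index 0):
sequence = program_change(0, 5) + control_change(0, 64, 127)
```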
  7. I guess it all depends on what the next Helix actually is and what it does, but I personally have no plans at all regardless. I think the natural inflection point in the processors used for modeling applications was reached back when Helix came out, primarily as a competitor to Fractal's offerings. Since then not a whole lot has changed in the DSP market, but there are a LOT more competitors, which supports my theory that the technology's inflection point has probably been reached. As has been noted previously, what modelers and their processors do can't be compared to the progress made in general-purpose CPUs used in desktops and laptops, so you really can't think of them the same way. I could always be surprised by some real innovation in that part of the industry, but even if that happened, everyone would spend many months or years of product development getting their systems launched. Again, that's another big difference from general-purpose CPUs, which are actually a composite of multiple processors, including graphics and I/O processors. The best market comparison is probably how long it took Apple to develop and launch computers using their new M1 architecture after they first began working on it, about 10 years earlier.
  8. Frankly I find these DSP spec comparisons a little silly given the purpose of DSP, which is simply to do digital signal manipulation in a real-time processing setting. The purpose is to transform a digital signal to represent the results of a given (usually analog) circuit. DSP limitations aren't typically about the quality or precision of the transformation (although they could be in some cases) but rather the amount of latency in that transformation, given it's meant to be used in a real-time signal processing environment. If the transformations are reasonably accurate, faster hardware won't make them MORE accurate in any distinguishable way, but it could reasonably result in less latency, allowing for more transformations. In practice none of this affects me, since I never run low on DSP; I've used it efficiently for over 8 years. In my opinion, adding faster or more DSP only serves the needs of people who don't use it efficiently.
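The latency contribution of buffered real-time processing is easy to quantify. A minimal sketch, assuming a 48 kHz sample rate (a common figure, though the actual rate depends on the device):

```python
def buffer_latency_ms(buffer_frames, sample_rate=48000):
    """One-way latency (ms) added by a processing buffer: the time it
    takes to fill buffer_frames samples at the given sample rate."""
    return 1000.0 * buffer_frames / sample_rate

# A 128-frame buffer at 48 kHz adds about 2.67 ms; halving the buffer
# size halves that, at the cost of less processing time per buffer.
```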
  9. It kind of brings into question how you installed it when you say you ran the installer. Typically you simply download and run the installer for the appropriate HX Edit version and everything else is automatic. What does Windows show as your active sound device when you plug into the Helix and the Helix is turned on? It should show 'Speakers (Line 6 Helix)' or something like that... at least it does on Windows 10. If it shows anything else, your Windows may not be automatically switching over to the Helix driver. You may have something configured differently in Device Manager, so take a look there and make sure it's showing all the device drivers for your configuration. I'm on Windows 10 and I haven't had any problems with it switching from the PC's internal sound device to Helix ASIO when I turn on my Helix, although I do need to manually select Helix ASIO in my DAW (Ableton Live) if the Helix wasn't turned on when I started the DAW. I haven't had to manually load the Helix ASIO driver in any of the updates, as that's taken care of by the installer if you follow the procedure on the software release info page.
  10. I seriously doubt they would be open to such a thing, if for no other reason than their potential liability should a piece of third-party software damage the functionality of a Helix unit through a bug. Additionally, it's entirely possible those control protocols can change, and historically have changed, with any firmware release. Since they have no control over testing your software for compatibility with a design change, there's a LOT of risk of a third-party app breaking with a new firmware update. I know there have been a number of attempts in the past to dissect the FBV protocol, none of which met with much success. I suspect the protocol used on the Helix is probably similar, in that it's some form of byte-stream function and response, but there's a LOT more complexity in the control interactions with the Helix than anything FBV ever had to do.
  11. If all you want is sound from Guitar Pro you should be able to do that through a USB connection. Is that what you're trying to do?
  12. The bass player in our group is perfectly happy going direct to the mixing board with just a Yamaha DXR12 floor monitor with the high-pass filter turned off. That, along with our QSC subwoofer, gives him a great sound.
  13. Are you talking about a continuous controller such as a mod wheel, or a continuous controller as defined on an Arduino? In either case, those aren't really situations the Helix supports as part of its implementation, as far as I'm familiar. The Helix is more of a single, simple command-style implementation.
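For context, a standard MIDI "continuous controller" is only continuous in name: each CC message carries a 7-bit data value, so even a smoothly swept pedal is quantized to 128 steps. A small illustrative sketch:

```python
def pedal_to_cc(position):
    """Map a continuous pedal position (0.0-1.0) to the 7-bit value
    (0-127) that a MIDI CC message actually carries, clamping
    out-of-range input."""
    return max(0, min(127, round(position * 127)))
```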
  14. As you can see from the emoji responses, you've got us all confused. Helix doesn't require any specific OS guitar driver. There is an OS-specific driver for connecting a Helix to a computer via the USB port, so that the HX Edit program works and the USB connection can function as an audio interface. Is that what you're referring to?
  15. I think you're going to find the MIDI implementation in the Helix is pretty limited in its flexibility for doing such things. That's what prompted me to move to a legit dedicated MIDI controller, in my case a Morningstar MC8, which controls everything including the Helix. This lets you incorporate multiple MIDI messages, whether triggered by the footswitches on the MIDI controller, from the Helix, or from any other equipment. For example, when I change banks on the MC8 (which equates roughly to a preset change on the Helix), it automatically sends any number of CC or PC messages to the various devices I want to coordinate with that preset. Very simple and very easy. If I need to copy a sequence of commands from another bank, it's just a simple copy/paste operation.