Everything posted by DunedinDragon

  1. I'm thinking he might be referring to the Hi/Low EQ block, which limits the high cut parameter to no lower than 1 kHz and the low cut to no higher than 1 kHz...for obvious reasons. Personally I can't even begin to imagine where one would need a high cut lower than 1 kHz. But who knows? Maybe someone has a use for a guitar tone that sounds like it's been wrapped in a soggy blanket? At any rate, if you think this would be useful to a lot of people you can submit it on Helix IdeaScale and see if you can garner enough votes to compel L6 to do such a thing.
  2. THIS is an actual AC30 Fawn which you are modeling. Notice the speaker cabinet is an integral part of the unit. Why would there be a question on this?
  3. This is kind of an odd statement since impedance is very different from signal level, which is really what you need to have working optimally. The key there is to have your amp model channel volume or output block producing the same, or close to the same, signal level with the other blocks in your signal chain on or off. In essence you want to avoid having your signal chain level creep higher as you add other blocks. You also want to keep your signal chain level at a moderate range to allow for some headroom. Using the built-in signal level meter I usually shoot for around 80%. As others have stated, you can't really compare two different styles of output and expect them to be exactly the same, because they're not even engineered to be the same. Another very key thing to be aware of is what cab model and mic model you're using in your signal chain with the amp, because those things really matter when it comes to the ultimate tone you get. The default setting for the Vox amp may not be the best for the tone you want. My primary advice to someone in your position is to go onto YouTube and look up Jason Sadites' postings. He's the best at helping new people get the most out of their units and helping you understand the best ways to maximize your presets.
  4. Honestly I couldn't tell all that much difference...at least not enough that I'd spend more than a few minutes adjusting things. The first one was the most overmodulated, the second was the most consistent and smooth, and the third was pretty close to the second but not quite as polished/finished. But then, I'm not really a dedicated cork sniffer when it comes to tone in isolation. I do most of my final adjustments in the context of the other instruments I'll be playing with. Just for reference, I was listening through my Yamaha HS7 studio monitors, and I decided to take some dB readings on all three, which bore out my initial perceptions. The first was VERY uneven, ranging from maybe 86 to 98 dB or so, the second was very confined within the range of 85 to 89 dB, and the final one was in a pretty similar range until the end, when it went into the 90-94 dB range.
  5. The way I do this is I control everything from the DAW (Ableton Live) by having a MIDI Out track in each song which sends the appropriate MIDI sequences to the Helix, synchronized with the playing of the track in Ableton. In my case the Helix is just a passive MIDI device controlled entirely by the live track. To simplify things I use a Morningstar MC8, which is the coordinator of all MIDI actions. Each bank in the MC8 equates to a song, so when I select the Next Song footswitch on the MC8 it sends the appropriate MIDI sequence to set up the Helix preset, and when I hit the Play footswitch on the MC8 it launches the appropriate track in my DAW. From that point on the DAW sends all further automation actions to the MC8 as a simple single footswitch action through the course of the song. That allows me to have multiple complex MIDI actions within the MC8 controlled by one single MIDI Out action from the DAW. (There's a rough sketch of this arrangement after this list.)
  6. I guess it all depends on what the next Helix actually is and what it does, but I personally have no plans to upgrade at this point, regardless of what it does. I personally think the natural inflection point in the processors used for modeling applications was reached back when Helix came out, primarily as a competitor to Fractal's offerings. Since then not a whole lot has changed in the DSP market but there are a LOT more competitors, which would suggest my theory about the natural inflection point of the technology is probably correct. As has been noted previously, what modelers do and the processors they use can't be compared to the progress made in general-purpose CPUs used in desktops and laptops, so you really can't think of them in the same way. I could always be surprised by some real innovation in that particular part of the industry, but even were that to be the case, everyone would be scrambling for many months/years of product development to get their systems launched. Again, that's another big difference between DSP and general-purpose CPUs, which are actually a composite of multiple processors including graphics and I/O processors. The best market representation would probably be how long it took Apple to actually develop and launch computers using their new M1 architecture after they first began working on it about 10 years ago.
  7. Frankly I find these DSP spec comparisons a little bit silly given the purpose of DSP, which is simply to do digital signal manipulations in a real-time processing setting. The purpose is to transform a digital signal so it represents the results of a given (usually analog) circuit. DSP limitations aren't typically about the quality or precision of the transformation (although they could be in some cases) but rather the amount of latency in that transformation, given that it's meant to be used in a real-time signal processing environment. If the transformations are reasonably accurate, faster transformations won't likely make them MORE accurate in any distinguishable way, but they could reasonably result in less latency, allowing for more transformations. In practice none of this affects me since I never run low on DSP, because I use the DSP efficiently and have for over 8 years. In my opinion and practice, adding faster or more DSP only serves the needs of people who don't use the DSP efficiently.
  8. It kind of brings into question how you installed it when you say you ran the installer. Typically you simply download and run the installer for the appropriate HXEdit version and everything else is automatic. What does Windows show as your active sound device driver when you plug into the Helix and the Helix is turned on? It should show '(Speakers) Line 6 Helix' or something like that...at least it does on Windows 10. If it shows anything else it may be that your Windows isn't automatically switching over to the Helix driver. You may have something configured differently in your device manager, so take a look there and make sure all the various device drivers for your configuration are showing as available. I'm on Windows 10 and I haven't had any problems with it switching from the PC's internal sound device to Helix ASIO when I turn on my Helix, although I do need to manually select Helix ASIO in my DAW (Ableton Live) if the Helix wasn't turned on when I started the DAW. I haven't had to do any manual loading of the Helix ASIO driver in any of the updates, as that's taken care of by the installer if you follow the correct procedure on the software release info page.
  9. I seriously doubt they would be open to such a thing, if for no other reason than their potential liability should a piece of third-party software damage the functionality of a Helix unit through a bug in that software. Additionally, it's entirely possible those control protocols can change, and they have historically changed, with any release of firmware. Since they have no control over testing your software for compatibility with a design change, there's a LOT of risk of a third-party app breaking with a new firmware update. I know there have been a number of attempts in the past to dissect the FBV protocol, none of which have ended up with much success. And I suspect the protocol being used on the Helix is probably similar in that it's some form of byte-stream function and response, but there's a LOT more complexity in the control interactions with the Helix than anything FBV ever had to do.
  10. If all you want is sound from Guitar Pro you should be able to do that through a USB connection. Is that what you're trying to do?
  11. The bass player in our group is perfectly happy going direct to the mixing board with just a Yamaha DXR12 floor monitor with the high pass filter turned off. That, along with our QSC subwoofer, gives him a great sound.
  12. Are you talking about a continuous controller such as a mod wheel, or a continuous controller as defined on an Arduino? In either case those aren't really situations the Helix provides for as part of its implementation, as far as I'm familiar with. The Helix is more of a single, simple command style implementation.
  13. As you can see from the emoji responses you've got us all confused. Helix doesn't require any specific OS guitar driver. There is an OS-specific driver for connecting a Helix to a computer via the USB port to use the HXEdit program and for the USB to function as an audio interface. Is that what you're referring to?
  14. I think you're going to find the MIDI implementation in the Helix is pretty limited in its flexibility for doing such things. That's what prompted me to move to a legit dedicated MIDI controller which, in my case, is a Morningstar MC8 that controls everything including the Helix. This allows you to incorporate multiple MIDI messages, whether triggered by the footswitches on the MIDI controller, from the Helix, or from any other equipment. For example, when I change banks on the MC8 (which equates roughly to a preset change on the Helix), it automatically sends any number of CC or PC messages to various devices that I want to coordinate with that preset (see the message-sequence sketch after this list). Very simple and very easy. If I need to copy a sequence of commands from another bank it's just a simple copy/paste operation.
  15. One of the things you'll encounter if you look at the higher priced powered speakers is that they tend to have selectable profiles. This goes along with everything codamedia is referring to, which is the frequency response profile of a speaker, not just the frequency range. For example, a speaker set to a spoken speech profile will have different frequencies accentuated and others diminished, giving spoken speech more clarity and definition, but it wouldn't be optimal for live music. Without these types of profiles you have no idea what that speaker was designed and optimized to do. The most prominent example that comes to mind was the original Alto speaker that was quite popular in the early days of the Helix due to its lower price. There were constant complaints by users about how it had too much low end, particularly when used as a stage monitor. But the Alto was optimized for use with recorded music and therefore had more prominence in lower frequencies to "sweeten" the bass response. This became a problem when it was used as a floor monitor, because doing so creates an effect referred to as bass coupling, with the floor further accentuating bass frequencies. Other, more expensive speakers had profiles you could set on the speaker to designate whether it was being used as a main speaker or as a floor monitor, which overcame these problems.
  16. You're probably right. I think I looked at the J8, not the J8A. Still a pretty rudimentary budget speaker, though, even if it is made in Italy.
  17. The problem is that not all PA systems are created equal, and there are some older and very limited PA systems out there. And the Helix is totally dependent upon the quality of the output device it's going through. In your case the problem starts with using passive PA speakers. The industry at large has moved on to using powered PA speakers in smaller PAs because it offers the opportunity to tune the speakers more accurately as it pertains to high and low frequencies. Passive PA speakers are notorious for having inconsistencies in their passive crossovers that can have a significant impact on electric guitar frequencies. These problems are typically not present in most decent headphone setups. I personally own and use a wide range of PA speakers, all of which are powered speakers with their own built-in amps and DSP circuits that control and maintain consistency in frequency response. I currently use Yamaha DXR12 and QSC K10.2 speakers for fronts and stage monitors.
  18. Yeah, I see the main difference in that you have it all in a single project, whereas mine are in what Ableton calls Scenes, so it's one scene per song, but the project can have multiple scenes as in a set of songs. Each scene stops playing at the end, so that's free time to engage with the audience or tune or whatever. I then select the next song and send the MIDI command to start that scene, all done from a Morningstar MC8. Each scene in my case has all of the tracks associated with that song, including backing tracks and MIDI control tracks, so they all have to sync to the same tempo and time signature. It's not a problem changing tempos and such; I just have to be careful about where I place the MIDI commands so they sync with the music tracks. The MIDI tracks send an appropriate MIDI CC trigger to the MC8 during the course of playing the tracks, which can then execute any number of various stage control commands.
  19. I run all the MIDI stage automation for my performances in a similar way using Ableton. The most common way I get errant MIDI messages is when there is a problem with the BPM or time signature (3/4 versus 4/4 timing) in the track being played, so the MIDI messages get sent at the wrong time, or when the track being run in Ableton is the wrong track or the wrong time signature for the song being played. In my case it's pretty easy to spot since I'm also sending audio backing tracks: the audio tracks will still play at the right tempo even though the project timing is wrong, but the MIDI will get sent at the wrong time. I could also see similar issues if the timing of the band's performance doesn't match the timing in Reaper precisely. I'm able to catch all of these types of things when I do a practice run-through of the performance at home. The question in my mind is how do you coordinate (selection and timing) the Reaper tracks with the song being played by the band? I could easily see the wrong Reaper track running intermittently as the cause of this type of problem.
  20. If that were the case, audiences all over the world would be in revolt against anyone using a Helix, since most Helix units go directly through a PA system. That's kind of what they're designed to do. Going direct through a modern powered speaker is pretty much exactly the same as going direct through a PA. Of course not all powered speakers are equal, as you tend to get what you pay for. As for myself, I haven't gone through anything other than a modern powered speaker (mostly Yamaha DXR12 or QSC K10.2) in 8 years and have been perfectly happy.
  21. Quite honestly you really shouldn't have any problem doing what you're trying to do, even with the Helix as an interface. I do pretty much exactly what you're trying to do every single week using Ableton Live 11, in literally hundreds of recordings with backing tracks and live guitar, and sometimes with live piano and vocals, using the Helix audio interface on Windows 10. The difference is that while recording I monitor the source signal coming from the Helix, not the recorded image that's being captured in the DAW, so there is no noticeable latency during recording. Once it's recorded and I play it back, it's all perfectly synced in the DAW. You can feel free to chase after audio interfaces or latency if you'd like, but there are still going to be noticeable differences if you continue to monitor your recorded image when you're recording rather than your source signal coming from the Helix.
  22. THIS is the key indicator that you most likely have a signal level gain staging issue within your patches (as determined by the channel volume on your amp model), which is adding noise to your output. What you're not telling us is where your gain is staged on the mixer and on any powered speaker outputs. I have my XLR output that goes to the mixer disengaged from the Helix volume knob and set to mic level output (which sends a full-volume signal to the mixing board). I control my output levels within my patches primarily through my amp model channel volume (typically set between 4 and 5) and additionally through the output block on my patch, such that I get a reasonably clean signal level with the gain on my mixer channel set to about 1/4. Even then some of the amp models are just noisy and produce a fair amount of hiss based on the modeling of that amp, but most are totally silent. I no longer go direct from the Helix to my speaker, but when I did I had the 1/4" output set to line level and controlled by the Helix volume knob, with my powered speaker set to unity level, or at noon on the speaker's gain knob. What helps in this regard is that I have my Helix hooked up through my mixer when I'm dialing in my patches. That allows me to monitor the signal level on the mixer as I'm dialing in my patches and set the appropriate levels on my amp channel volume or output block to achieve a correctly gain staged signal level as measured on the mixing board.
  23. The two most important factors in latency are your buffer size and sample rate. The lower your buffer size and the higher your sample rate, the less latency you'll get in Ableton, but both will increase your CPU usage (there's a quick buffer-latency calculation after this list). I keep my sample rate (set in Ableton preferences) at 44100 and my buffer size fairly high to avoid clicks/pops. Since I don't monitor the output signal when recording, I never notice any kind of latency. To me the battle is more about CPU usage than latency, since that's where you're more likely to run into problems, particularly if you're using a lot of plugins in Ableton. Some of your issues may be coming from your CPU; for reference, I'm working with an i7-6700 CPU running at 3.40 GHz on Windows 10.
  24. Our bass player is perfectly happy using a DXR12 as a stage monitor. I can't think of one good reason to lug around a DXR15. Of course we also have a KS112 sub, but even a DXR15 won't fill that role.
  25. A lot of what you're asking about deals more with what kind of things you're doing and what you think you'll want to be doing in the next few years. For example, having access to a large group of amp models is important for some people who tend to play a range of different styles and music, whereas it's not so important to someone who tends to have a select style they generally always play, which sounds more like what you're doing. Effects are of course a different animal, since you very likely would need a range of effects even if you're not likely to need different amp models, so something like an HXFX unit might be right up your alley. When it comes to MIDI control, yes, Helix does provide an answer in addition to its modeling and effects, but I'll be the first one to say it's not as flexible and powerful, nor as inexpensive, as a dedicated MIDI unit. I initially started using my Helix Floor as a MIDI controller several years ago but got so frustrated with the limitations that I got a Morningstar MC8, which solved all the problems I ever had with MIDI automation, and now I just use the Helix for what it does best, which is amp and effects modeling. For example, it would be quite easy to have a single footswitch on my MC8 that could automate any number of changes to the Catalyst configuration as well as an HXFX unit. I would think that's where you'd likely get more out of it.
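A rough sketch of the DAW-driven MIDI arrangement mentioned in post 5. This is only an illustration, assuming Python with the mido library and a made-up output port name; the actual rig uses Ableton MIDI clips and a Morningstar MC8 rather than a script, and the CC number shown is purely an example value (check the Helix MIDI reference for the real assignments):

```python
import mido

# Hypothetical port name -- the real name depends on your OS and MIDI interface.
helix = mido.open_output("Line 6 Helix")

# A Program Change selects a preset on the receiving device.
# (mido channels are 0-indexed, so channel=0 is MIDI channel 1.)
helix.send(mido.Message('program_change', channel=0, program=12))

# A Control Change can then trigger a snapshot/footswitch-style action.
# CC 69 and value 1 are used here only as example numbers.
helix.send(mido.Message('control_change', channel=0, control=69, value=1))
```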
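Post 14 describes a single footswitch or bank change on the MC8 fanning out several PC/CC messages to different devices. A minimal sketch of that idea, again assuming Python/mido with placeholder device names, channels, and message numbers:

```python
import mido

# One "bank" per song: a single trigger sends every message in the list.
# Device names, channels, and program/CC numbers below are placeholders.
SONG_BANK = [
    ("Helix",    mido.Message('program_change', channel=0, program=5)),
    ("Helix",    mido.Message('control_change', channel=0, control=69, value=1)),
    ("Catalyst", mido.Message('program_change', channel=1, program=2)),
]

def fire(bank, ports):
    """Send each message in the bank to its named output port."""
    for device, msg in bank:
        ports[device].send(msg)

ports = {name: mido.open_output(name) for name in ("Helix", "Catalyst")}
fire(SONG_BANK, ports)
```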
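The buffer size and sample rate trade-off mentioned in post 23 comes down to simple arithmetic: one buffer of audio takes buffer_size / sample_rate seconds to fill. A quick calculation (actual round-trip latency is higher once driver and converter overhead are added):

```python
# Latency contributed by a single audio buffer, in milliseconds.
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    return buffer_samples / sample_rate_hz * 1000.0

for buf in (64, 128, 256, 512, 1024):
    print(f"{buf:>5} samples @ 44100 Hz ~ {buffer_latency_ms(buf, 44100):.1f} ms")
# e.g. 256 samples at 44.1 kHz is about 5.8 ms per buffer; monitoring through
# the DAW typically adds at least an input and an output buffer on top of that.
```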