Peckanina

Helix Native


I'm very excited about this.

 

The option to have a tight integration and portability between stage and studio was the missing piece of the puzzle.

 

To be able to effectively edit a tone post recording, independently of the hardware (or even while on the road), trying different tonal and creative fx solutions in context of a mix/arrangement AND being able to bring those patches seamlessly on stage is for me priceless.

 

The magnitude of this integration is potentially out of scale IMHO, and I believe will set the Helix in a unique spot on the market.


The magnitude of this integration is potentially out of scale IMHO, and I believe will set the Helix in a unique spot on the market.

 

 

Unique for now, but I believe Fractal has announced they are also working on something like this. No idea when, though...


Unique for now, but I believe Fractal has announced they are also working on something like this. No idea when, though...

 

Oh, I'm pretty sure all the big players will at least consider this kind of integration sooner or later. Kemper was very close to that with the Virus TI but never took the step of making a true native plugin for it.

 

I'm super interested to see if Helix will prove him wrong.


The SHARC 21469s, depending on clock speed, do 2.4 to 2.7 GFLOPS. An i7 is significantly faster; from a quick look, a 6700K is around 113 GFLOPS, so around 28 GFLOPS per core.

Oh, thanks, but I think it's not just those numbers. I mean, those numbers are what I'm hoping will matter most, but the Helix is a dedicated signal processor: dedicated, specialized and optimized.

I'm hoping that pure computing power, which is (as you just showed) much bigger on the i7, will make up for the differences, and more than make up.
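For a rough sense of scale, the per-core arithmetic above can be checked in a couple of lines (the GFLOPS figures are the ones quoted in this thread, not official specs):

```python
# Order-of-magnitude throughput comparison using the figures quoted above.
sharc_21469_gflops = 2.7          # upper bound quoted for the SHARC 21469
i7_6700k_gflops = 113.0           # whole chip, 4 cores
i7_per_core = i7_6700k_gflops / 4

print(f"i7 per core: {i7_per_core} GFLOPS")
print(f"~{i7_per_core / sharc_21469_gflops:.0f}x one SHARC")
```

Raw throughput is only part of the story, of course; a DSP's deterministic, dedicated scheduling is exactly what the rest of this post is pointing at.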

 

 

Simply because I have used the latest versions of Guitar Rig and Amplitube through both an iMac and a PC with plenty of processing power, using a nice powerful audio interface, and neither one (to my ears) would make a pimple on Helix's lollipop sound-wise. THAT is why I will be a skeptic until I try it myself.  :D

That may be a matter of modeling "quality" or accuracy, but there are more physical factors, like the input section of Helix.

I have a MOTU 828 MkII FireWire that works like a charm... except when I want to record guitar or bass straight to the computer, without any guitar gear in the signal chain. Then the preamps, I don't know why, just don't sound right to me. If I use the same plugin with the Helix on its own drivers (or the HD500, or the XT Live that came before it), it sounds better to me.

With the MOTU I can get lower latencies than with any USB interface (including Helix), and more stable performance, so I run the S/PDIF output on the Helix into the corresponding input on the MOTU. Best performance and best sound.

Bottom line: the input section of your audio interface has a huge impact on the sound you're gonna get out of any processing done in a computer, and the latency affects the feel.


Regarding the CPU load of Native: based on Reaper's CPU meter, an instance of Native seems to be in line with other VST instrument plug-ins I have (SSD4, for example). The trickiest part I've found is getting my interface settings optimized for input-monitoring latency. How many instances you can run simultaneously will depend on a lot of things, but I've been loading tracks up with Native and other effects (I tend to use an instance of Slate's VMR Virtual Mix Rack plug-in on nearly all my tracks, for instance).

 

Anyway, I think Native is going to be a game-changer for many people...



Regarding the CPU load of Native: based on Reaper's CPU meter, an instance of Native seems to be in line with other VST instrument plug-ins I have (SSD4, for example). The trickiest part I've found is getting my interface settings optimized for input-monitoring latency. How many instances you can run simultaneously will depend on a lot of things, but I've been loading tracks up with Native and other effects (I tend to use an instance of Slate's VMR Virtual Mix Rack plug-in on nearly all my tracks, for instance).

 

Anyway, I think Native is going to be a game-changer for many people...

It's latency I'm most worried about ruining the feel of playing compared to the Helix hardware. Reaper currently says 4.8 ms latency, but that's without a modeller running, so I don't know how that will feel. I've only ever recorded with hardware modelling or a real mic in front of a real amp, so I have no idea how well this plug-in approach works in comparison.


We don't know how CPU-intensive Helix Native will be yet, as optimization is nearly always the last job to do.


It's latency I'm most worried about ruining the feel of playing compared to the Helix hardware. Reaper currently says 4.8 ms latency, but that's without a modeller running, so I don't know how that will feel. I've only ever recorded with hardware modelling or a real mic in front of a real amp, so I have no idea how well this plug-in approach works in comparison.

 

Well, the latency with software monitoring will depend on the particular interface you're using, the driver, the DAW and your particular system. I'm able to get pretty decent latency (decent being defined as I don't notice it), but it takes messing around with my driver settings.
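As a rough guide, software-monitoring latency scales with the interface buffer size. A minimal sketch (the driver-overhead figure is an assumption; real interfaces vary):

```python
# Estimate round-trip monitoring latency from ASIO/Core Audio buffer size.
# Round trip ~ input buffer + output buffer + converter/driver overhead.
def monitoring_latency_ms(buffer_samples, sample_rate=48000, overhead_ms=1.5):
    one_way_ms = buffer_samples / sample_rate * 1000.0
    return 2 * one_way_ms + overhead_ms

for buf in (64, 128, 256):
    print(f"{buf} samples -> {monitoring_latency_ms(buf):.1f} ms")
```

This is why "messing around with driver settings" matters: halving the buffer roughly halves the in-the-box portion of the delay, at the cost of CPU headroom.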

 

I think that I would still most likely record guitar parts using the hardware Helix and just record dry tracks. Hardware monitoring is just so much less fussy... Once the dry tracks are in the DAW, all those ASIO settings become less important.



I think that I would still most likely record guitar parts using the hardware Helix and just record dry tracks. Hardware monitoring is just so much less fussy... Once the dry tracks are in the DAW, all those ASIO settings become less important.

 

 

I suspect this is the way I will use Native. Not sure yet.


What would be considered good latency that the average human wouldn't notice?

 

It varies from person to person, but the consensus seems to be that latency becomes noticeable between 7 and 10 ms (some say they can notice as low as 5 ms, but that seems hard to believe).

 

Consider this, there are cheap wireless units out there that have latencies approaching 8 ms...

 

I was using my G10 (which has a latency near 3ms) into my interface and then through my DAW, and I got it to the point where it felt OK.
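Those stages stack, which is worth keeping in mind. A hypothetical latency budget (the interface round-trip figure is an assumption; the G10 figure is the approximate spec mentioned above):

```python
# Sum each stage of a hypothetical signal path and compare the total
# against the 7-10 ms "noticeable" rule of thumb quoted above.
g10_ms = 2.9            # approximate Line 6 G10 wireless latency
interface_ms = 4.2      # assumed DAW round trip at a small buffer
total_ms = g10_ms + interface_ms

verdict = "borderline" if total_ms >= 7 else "unnoticeable for most"
print(f"{total_ms:.1f} ms total -> {verdict}")
```

So even a "fine" wireless plus a "fine" interface can land right at the edge of noticeable, which matches the "got it to the point where it felt OK" experience.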



It varies from person to person, but the consensus seems to be that latency becomes noticeable between 7 and 10 ms (some say they can notice as low as 5 ms, but that seems hard to believe).

 

Consider this, there are cheap wireless units out there that have latencies approaching 8 ms...

 

I was using my G10 (which has a latency near 3ms) into my interface and then through my DAW, and I got it to the point where it felt OK.

An acoustic piano takes on average about 8 ms for the sound to reach your ear after you hit the key. Not all that perceivable.

 

Though I do hardware monitoring for a reason, as my through-put latency is usually too high to be usable, up in the 12-20 ms area.

We don't know how CPU-intensive Helix Native will be yet, as optimization is nearly always the last job to do.

 

 

Meaning to me that if it is more CPU-Intensive this may affect latency when monitoring the record track (the ability to listen to what you are recording in real time while playing back the other tracks).

 

That may be a matter of modeling "quality" or accuracy, but there are more physical factors, like the input section of Helix.

 

The Hi-Z input on my UAD Apollo Quad is made for guitar/bass input impedance matching. My skepticism comes from experience with software modelers. I have just never liked the tones from any kind of software-only modeler (Amplitube, Guitar Rig) through Mac or PC hardware, no matter what interfaces I've used. They sound fake and unrealistic compared to, say, an Axe-FX II, a Kemper or my Helix (yes, I've owned all three). OTOH, the same monitors through the same interface I used for software-only sound gorgeous when I use them with Helix. As always, YMMV. I do hope that with Native, for the first time, I can tell no difference. I wish L6 the best of luck, and I am willing to try once more and see.  :)


 

 

The Hi-Z input on my UAD Apollo Quad is made for guitar/bass input impedance matching. My skepticism comes from experience with software modelers. I have just never liked the tones from any kind of software-only modeler (Amplitube, Guitar Rig) through Mac or PC hardware, no matter what interfaces I've used. They sound fake and unrealistic compared to, say, an Axe-FX II, a Kemper or my Helix (yes, I've owned all three). OTOH, the same monitors through the same interface I used for software-only sound gorgeous when I use them with Helix. As always, YMMV. I do hope that with Native, for the first time, I can tell no difference. I wish L6 the best of luck, and I am willing to try once more and see.  :)

I felt that way for a while too; that was my experience until I came across EZmix 2 (a preset box with two adjustable parameters). Some of the guitar tones in the expansion presets can yield very nice results, as many of them are already dialed in for an "in a full mix" context. Better than anything I got out of the PODs or POD Farm.

 

This, along with the CLA Guitars plugin from Waves, has removed any doubt that software can get there. It is only software inside the Helix right now, after all... If you have a quality input and A/D/A, your only real worry should be latency, if you don't intend to wet-monitor through hardware.

IMO these are better-sounding plugins than Amplitube or Guitar Rig.


Unique for now, but I believe Fractal has announced they are also working on something like this. No idea when, though...

Fractal is not releasing a plugin. I called them directly and spoke to a rep. He said they took it off the table.


This, along with the CLA Guitars plugin from Waves, has removed any doubt that software can get there. It is only software inside the Helix right now, after all... If you have a quality input and A/D/A, your only real worry should be latency, if you don't intend to wet-monitor through hardware.

IMO these are better-sounding plugins than Amplitube or Guitar Rig.

 

 

I haven't tried those yet, and since I'm happy with Helix I probably won't. You mention quality A/D/A. That involves expensive chips, and (speculation) I just do not think a regular computer has these (versus what the Helix or Axe-FX has), which may be one of the problems.


Meaning to me that if it is more CPU-Intensive this may affect latency when monitoring the record track (the ability to listen to what you are recording in real time while playing back the other tracks).

 

 

Nope. Actually, I am pretty sure it won't.


I haven't tried those yet, and since I'm happy with Helix I probably won't. You mention quality A/D/A. That involves expensive chips, and (speculation) I just do not think a regular computer has these (versus what the Helix or Axe-FX has), which may be one of the problems.

I must correct you my friend.

 

Onboard audio is what you are talking about (even if you don't know it).

Onboard audio (a chip on the motherboard) has come a long, long way. People used to buy sound cards because onboard audio was crap. Now you can have stellar onboard audio if you don't cheap out on your PC; look at gaming. Better yet if you build it yourself (which is also completely accessible these days).

 

That said, if you are recording guitar you will be using an external sound card. That is what your Apollo is. That will be the unit that has the A/D/A in it. Everything happening in the PC is still digital: the interface takes analog to digital, USB to PC/Mac, into your DAW, out over USB to your Apollo, through its digital-to-analog converter and out to your speakers. This is true whether you are using software for FX or not.

 

The interface, or sound card, is what houses the A/D/A process. As long as you have a decent one of those, you will get decent results. Decent interfaces have also become quite affordable and accessible lately; you do not need an Apollo to get great results. You could probably get great results out of a cheaper interface like the Focusrite Scarlett series, albeit it may not sound clinically identical to the Helix.

 

Your Apollo has all the A/D/A you will need. It is a quality interface. 

 

I think the difference you were hearing between software and hardware was that the coding/engines/algorithms of the software you were using just weren't up to snuff compared to the Helix/Kemper/Axe-FX, more so than any A/D/A difference.

 

Also, to clarify a post above: if a plugin is more CPU-intensive, a lot of the time it is designed that way so it will have less latency. This is a route that has opened up over the last five years with the power of CPUs going through the roof. Some plugin designers go the other way (mainly for mixing/mastering plugins), using less CPU (so you can have more instances open) at the cost of more latency. That route was opened up by the advancement of automatic plugin delay compensation.
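The delay-compensation mechanism mentioned here is simple in principle: each plugin reports its latency to the host, and the host delays every other track to match. A toy sketch (track names and sample counts are purely illustrative):

```python
# Toy automatic plugin delay compensation: delay every track by the
# difference between its plugin latency and the worst-case latency,
# so all tracks arrive aligned at the mix bus.
reported_latency = {"guitar": 512, "bass": 64, "drums": 0}  # in samples
worst = max(reported_latency.values())
compensation = {trk: worst - lat for trk, lat in reported_latency.items()}
print(compensation)
```

The catch, and why this doesn't help live monitoring: compensation aligns playback, but the input you're playing through still incurs the full plugin latency.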


An acoustic piano takes on average about 8 ms for the sound to reach your ear after you hit the key. Not all that perceivable.

This is a phenomenon that's often cited in latency discussions but some details are missing.

 

While it's absolutely true that audio time of flight can cause delays greater than the 7-10 ms at which latency becomes perceivable, there's a reason why one is not irritating while the other is. In the case of audio time of flight through the air, your brain gets cues (e.g. reflections) that help it place the source and compensate for the delay. With "wire latency" there are no such cues. That's why you can use a wireless pack and roam 30-50 ft from your amp, with time of flight far beyond what would be tolerable latency in a recording environment.



This is a phenomenon that's often cited in latency discussions but some details are missing.

 

While it's absolutely true that audio time of flight can cause delays greater than the 7-10 ms at which latency becomes perceivable, there's a reason why one is not irritating while the other is. In the case of audio time of flight through the air, your brain gets cues (e.g. reflections) that help it place the source and compensate for the delay. With "wire latency" there are no such cues. That's why you can use a wireless pack and roam 30-50 ft from your amp, with time of flight far beyond what would be tolerable latency in a recording environment.

Thank you for wording this in a way that makes the concept I had only started to form in my head sound more reasonable, lol (it wasn't nearly as clear as how you put it). I had started thinking of it as cues that arrive ahead of the sound itself, almost like a bullet travelling through the air: removing human limitations for a second, the split of the air ahead of the bullet would reach the target before the bullet itself does. Taking that concept to a usable, even measurable level is kind of what you are talking about: a pre-feedback to the user (not signal feedback) that makes the piano and things like that much less of an issue.

 

Ah, that makes sense, and I appreciate this post. Anything to further understanding on the matter, and I think you have brought up a damn good point.


This is a phenomenon that's often cited in latency discussions but some details are missing.

 

While it's absolutely true that audio time of flight can cause delays greater than the 7-10 ms at which latency becomes perceivable, there's a reason why one is not irritating while the other is. In the case of audio time of flight through the air, your brain gets cues (e.g. reflections) that help it place the source and compensate for the delay. With "wire latency" there are no such cues. That's why you can use a wireless pack and roam 30-50 ft from your amp, with time of flight far beyond what would be tolerable latency in a recording environment.

 

What's interesting is the definition of 'compensate for the delay' in this context, or why is the brain compensating based on certain cues? There's been research done that suggests the brain unconsciously makes decisions before it's consciously aware of the decision (is free will an illusion? lol). Could this idea have something to do with the underlying mechanics of audio time of flight through air - for which greater latencies seem acceptable - versus a wired, virtual construct of sound - for which tolerance seems very low? Why is a compensation factor apparently absent for virtual sound? Is it simply a lack of cues or input information? Would the brain eventually create the needed compensation?



What's interesting is the definition of 'compensate for the delay' in this context, or why is the brain compensating based on certain cues? There's been research done that suggests the brain unconsciously makes decisions before it's consciously aware of the decision (is free will an illusion? lol). Could this idea have something to do with the underlying mechanics of audio time of flight through air - for which greater latencies seem acceptable - versus a wired, virtual construct of sound - for which tolerance seems very low? Why is a compensation factor apparently absent for virtual sound? Is it simply a lack of cues or input information? Would the brain eventually create the needed compensation?

Your entire post brings a good perspective, but I wanted to comment on this for a moment.

 

I think the possible eventuality is yes. Speaking from an evolutionary point of view, we have compensated for "blue light telling the brain it's a slightly less dangerous time/area" by working night shifts and working around lots of blue light... even though it is still a large contributor to insomnia.

 

As far as the rest of your post, I do not know if the lack of "cues" is the sole variable, but I do think it is one of them. 

Free will is an illusion if you define it as an absolute and not as somewhat dynamic. You flinch from fire unless you purposely train yourself over time. And you cannot look in every direction at once no matter how hard you want to, due to human/physical limitations. Fate can also be viewed as a snapshot of a trajectory resulting from the multiple, growing outcomes of previous events.

 

I think in some form everything "has something to do" with everything; it's just how far "down the rabbit hole" (so to speak) one wants to go.

Look at scientific advancement for a moment. If one field gets a breakthrough in its research, tools, etc., then many other fields often get breakthroughs of their own as a result, even though they don't seem to "have anything to do" with that initial field and its breakthrough. An elementary example, but it should convey the point. lol

 

Great posts.


I think in some form everything "has something to do" with everything; it's just how far "down the rabbit hole" (so to speak) one wants to go.

Look at scientific advancement for a moment. If one field gets a breakthrough in its research, tools, etc., then many other fields often get breakthroughs of their own as a result, even though they don't seem to "have anything to do" with that initial field and its breakthrough. An elementary example, but it should convey the point. lol

 

Great posts.

 

A good example of this might be to ask where would current technology be if we never landed on the moon. Or another might be if the inner-workings of atoms and particles were never discovered. In the second example, certainly no one would be discussing Helix Native.


Wow, what a fascinating discussion this turned into. Talk about a "rabbit hole", I could spend days discussing whether free will exists but how could I be sure that conversation was not predetermined? ;)

I had never heard that a piano has additional latency but I guess it makes sense. I am assuming this is due to the fact that when you strike a piano key you are not directly plucking a string. The piano key has to first activate a hammer which has to move through space and only then strikes the string. I wonder if experienced piano players get used to subconsciously playing slightly ahead of the beat to compensate for this.

I think in addition to the physical cues that aleclee mentioned, knowing the mechanism underlying additional latency can make it easier to tolerate it. So in effect perceived physical cues and/or cognitive ones enable us to better compensate for environmental variables. In the case of a piano, awareness of the requirement of the hammer needing to strike the string, or in the case of a wireless far from the stage, knowing the physics involved with the speed of sound. How many people really understand the underlying reasons for why their driver or OS is causing additional latency and more to the point, where is the benefit? In the case of the piano or the wireless there are tangible rewards in exchange for the latency, e.g. the glorious tone of a good piano or the freedom of wandering around on stage or in the audience untethered. A transactional model can be applied in these cases where latency can be more tolerable if there is a benefit perceived, in effect it is a trade-off.

One note - I have noticed that as the wireless signal level decays, particularly on some older wireless systems, the notes can cutoff at the amp/monitor more quickly than wired setups. This lack of sustain can make the latency more noticeable in wireless systems as it causes gaps of silence between notes/chords.

Ultimately I can't help but think it might be a nice convenience to have a Helix USB audio interface (other than the Helix or LT itself) with the same input/output and A/D conversion specs as the Helix, to ensure the sounds designed with Native translate as seamlessly as possible to the Helix and vice versa. Until then, owners of both a Helix and Native can just use the Helix as their audio interface for the highest level of portability between presets designed on the computer and those designed on the Helix.


I've been told 2017 spring's end seems to have a huge latency... maybe a couple of months...




 

I've been told 2017 spring's end seems to have a huge latency... maybe a couple of months...

Ahahah


Ahahah

 

Are you laughing at your own joke? :huh:



Nope. Actually, I am pretty sure it won't.

 

And I hope you are right!   :D

 

They have indicated, on more than one occasion, that the software sounds the same as the hardware. I have no reason to doubt that this is the case.

 

 

Keeping this for future reference for my Crow sandwich with mustard (or "theirs")...  ;)


That's because the code in Guitar Rig and Amplitube isn't as good as the Helix code; it has nothing to do with the processor it's running on.

 

Then why do Fractal Audio and Line 6 use SHARC processors, if just any old thing will do?


Our brain itself uses some latency to compensate for stimuli coming from body parts farther away than others (so you perceive your foot at the same time as your nose). The taller you are, the more latency you live with. That's also said to be a cause of déjà vu: you've already seen what you are seeing, or heard what you are hearing, just a few milliseconds ago... some bug in your latency-compensation code. :)


Just some quick math based on the speed of sound in air, with a wired connection where most folks would say there is zero latency:

 

standing 11 feet from your guitar amp and cabinet = 10ms delay

 

standing 26 feet from your cabinet = 23 ms delay
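The arithmetic behind those figures, using roughly 1125 ft/s for the speed of sound at room temperature:

```python
# Acoustic delay from standing some distance away from your cab.
SPEED_OF_SOUND_FT_PER_S = 1125.0   # ~343 m/s at room temperature

def air_delay_ms(distance_ft):
    return distance_ft / SPEED_OF_SOUND_FT_PER_S * 1000.0

print(f"11 ft: {air_delay_ms(11):.0f} ms")
print(f"26 ft: {air_delay_ms(26):.0f} ms")
```

In other words, sound covers a little over a foot per millisecond, so the distances above line up with the 10 ms and 23 ms figures quoted.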


There's been research done that suggests the brain unconsciously makes decisions before it's consciously aware of the decision (is free will an illusion? lol).

 

Based on the staggering amount of poor decisions I've made, free will is not an illusion.


So from the above, sound in air travels at approximately 1 ms per foot. If your amp is right next to the drummer, and both are 11 feet from you, then you and the drummer will sound in sync with each other (where you stand), provided both of you can keep a beat.    ;)


Based on the staggering amount of poor decisions I've made, free will is not an illusion.

 

Did you post this on purpose? Or was it predestined...


Based on the staggering amount of poor decisions I've made, free will is not an illusion.

 

On the contrary, this gives you a great excuse for your poor decisions... :)


I'm confused, what do illusions have to do with contraries?


Did you post this on purpose? Or was it predestined...

I'm gonna say the post I quoted was predestined, but my post was on purpose 😁


On the contrary, this gives you a great excuse for your poor decisions... :)

Way too long ago I took a philosophy course, and the main thing I learned is that you can philosophise your way out of anything. I, however, opt to take full credit for my stupidity. How else ya gonna learn? Unfortunately for me, I was a poor student. I'm doing better now. 😉

