



Everything posted by jeremyn

  1. It seems the update process is more sensitive to cable issues than other types of USB traffic. Maybe the update protocol doesn't have as robust error checking/retry logic as regular USB class comms. Long cables are not just more susceptible to EMI and other interference; they can generate their own due to current/ground 'loops' in the cable shield. USB is balanced, but not all transceiver circuitry is created equal, and most USB devices are powered with no avenue for ground loops, whereas the Helix has its own ground, and so does the Mac. So it is possible there is some high frequency leakage through the grounds causing some level of noise to degrade the signal. Not enough to notice during normal use, but enough to upset the more fragile update protocol of the Helix. Some cables may also be marginal due to a non-ideal characteristic impedance causing signal distortion/reflections of the digital signal - not bad enough for total failure, but enough to cause problems once noise is added to the signal. Just as computers and devices aren't all ideal, not all cables are perfect either, especially when it comes to high bandwidth 480 Mbit/s digital signalling with high frequency spectral content into the GHz range. Luckily the simple solution of a short cable works here. I'll remember that the next time I do an update. The normal cable I use between the Helix and Mac clearly didn't make the cut either.
  2. Same happened to me. I assume it was some sort of USB error that caused it. I ended up rebooting my Mac. The next update (after rebooting) seemed to work, but then had the same message on rebooting. Tried again (probably about the 5th time at this stage), and it finally completed and booted successfully. Then did the factory reset just to be sure, and restored my backed up patches and IRs. Was on the edge of my seat for most of it. But, it's fine now. Never had a problem with previous updates or computer connection issues. But, who knows, maybe I bumped the cable or something was running in the background on the Mac when I did the first update that 'failed'. I wasn't using any other Helix, audio or MIDI software at the time (nothing running in the background either), but had a bunch of Safari browser windows open, so who knows. I'm on a MacBook Pro from around 2013 with the latest version of OSX (10.13.1). But, I don't think this issue is OS related. Came good in the end. So it's not a permanent failure mode. (edit: I also changed to a shorter cable 'just in case' after the first failure, but it still failed a couple more times with the new cable until I rebooted the computer.)
  3. One 'two channel' thing I've heard people get away with in a typical live venue is big, obvious, fast-sweeping ping pong delay effects. Even if you're sitting at one side, you still hear some variation of the effect, and it still sounds ok depending on what the rest of the band are doing at the time. Things to avoid are panning instruments/vocals one way or the other where the mix will sound 'out of whack' for anyone not sitting in the central part of the sound field. The same applies to modulation effects (like chorus/phase), stereo reverb/separation, and frequency dependent channel allocation. Basically, stereo doesn't do much for anyone but the people in the middle of the sound field. Yeah, standing in the centre with a swirling chorus/phaser and stereo reverb can sound magic, but it won't be representative of what most of the audience is hearing.
  4. If you put speakers on the floor you'll get extra bass. Also, studio monitors are designed to have a flat frequency response, unlike 'hifi'/computer/car speakers that have hyped bass and treble. You can compensate for this by using a typical 'loudness' EQ curve on the monitors if you prefer that sound at home. But, whatever you do, don't try to use that much bass on your guitar for a live performance or you'll get lost in the mix and sound like 'mud'.
  5. The Helix AUX input is 10k Ohms, not 10M. Here's a post from DI explaining why there might be some confusion:
  6. jeremyn

    Tone matching!

    Add the captured IR after your cab in the chain. You're basically 'copying' the EQ profile of the plugin from the computer and inserting it into Helix chain instead. Then you should have the same output from the Helix as you had from your DAW.
  7. jeremyn

    Tone matching!

    You could always use an IR capture tool to produce an IR of the isolated ozone plugin (running the matched settings). And then load the IR into Helix.
  8. This is my experience too. The preamps and converters these days are so linear with noise floors below the input signal (assuming you gain stage correctly). Latency is the main difference, and that is usually a driver related thing. If the Helix is using the dual input ADC method, then it is probably the same method that Yamaha used in their Magicstomp and other products. This was a common method in the '80s and '90s before the days of commodity high bit depth sigma delta converters, especially for instrumentation systems and industrial controls. The principle is to connect two (or more) ADCs to the same input, with the 'secondary' ADC taking the input from a low noise preamp set to a fixed gain (eg. +12dB, -12dB if looking at large signals, or even have the primary at +6dB and the secondary at -6dB). The firmware in the microcontroller/DSP then chooses the sample from the converter with the highest value that isn't clipping (+ other tweaks based on hysteresis around the crossover point(s)). This maximises dynamic range without artefacts when sudden transients appear. Analog gain trimming and/or digital calibration is used to avoid non-linearity at the crossover point. The only drawback is that it uses an extra ADC converter. But, since most ADC converters provide a pair of channels (or groups of 4/8/etc), using an extra converter for your most critical input(s) is not usually a big problem. That's why Helix has 'seven' analog inputs. 4 returns, 1 mic, 1 aux, and 1 high dynamic range guitar input using two ADC inputs. This means pretty much any signal can be inserted into the Helix's guitar input without worrying about having to set the 'best' gain for maximum SNR without clipping. It lets Helix do all the gain increases/reduction in the digital domain. So IMO, the Helix is an excellent candidate as a primary guitar input. A second low latency interface might be useful if you want to record a live band where you want low latency for soft synths / samplers.
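The dual-input ADC scheme described above can be sketched in a few lines. This is an illustrative toy (not Line 6's or Yamaha's actual firmware): two converters sample the same input, one through a hypothetical +12dB (4x) gain path, and the firmware picks whichever sample gives the best resolution without clipping, with hysteresis around the crossover point to avoid rapid toggling.

```python
# Illustrative sketch of dual-ADC gain staging. The gain, clip level
# and hysteresis values are made-up assumptions for the example.

HIGH_GAIN = 4.0      # hypothetical +12 dB preamp on the 'secondary' ADC
CLIP = 0.95          # normalised full-scale threshold
HYSTERESIS = 0.05    # dead band around the crossover point

def select_samples(low_gain_samples, high_gain_samples):
    """Merge two ADC streams into one high-dynamic-range stream."""
    out = []
    using_high = True
    for low, high in zip(low_gain_samples, high_gain_samples):
        if using_high and abs(high) >= CLIP:
            using_high = False                  # high-gain path is clipping
        elif not using_high and abs(high) < CLIP - HYSTERESIS:
            using_high = True                   # safe to switch back
        # scale the high-gain sample down so both paths line up in level
        out.append(high / HIGH_GAIN if using_high else low)
    return out
```

For quiet passages the scaled high-gain samples are used (better SNR); the moment the high-gain converter would clip, the firmware falls back to the low-gain path, so sudden transients pass through without artefacts.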
  9. I'm on FW version 2.21 and used Template (04A) which is the Guitar + Mic preset. With that preset the output becomes audible once the preamp gain is increased and the volume pedal is toe down. I use a Rode S1 which needs 48V phantom power (doesn't work at all with 24P and gets flakey with some sources that don't provide sufficient current). It works fine with the Helix. Also plug your mic in while the unit or phantom is off. Hot plugging with phantom active can damage equipment depending on the capacitors and series limiting in the mic/gear. I have no information on the Helix or Beta 87A so can't say either way for those. Better to play it safe if unsure and only plug in with the powering device off. So if your mic and cable are good, and your headphones are working so you can hear your guitar, then it might be a hardware fault in your Helix.
  10. The gain is quite low by default, so you need to play with the gain on the Studio Pre and/or Compressor. Once you dial it up, you'll be able to hear it coming through your headphones.
  11. No I don't. I'm not a lawyer either. Sadly, the 'seemingly' is often because the extension is blatantly obvious. Working around those sorts of patents is pretty simple, but sometimes the 'seemingly insignificant' innovation is actually so obvious that it is already being used in general practice. Prior art isn't always easy to find if you're not looking in the right place. The patent office looks at other patents for prior art, and if they're lucky the patent examiner has more than a cursory background in the field they are examining and strikes something down as obvious. In many cases, the prior art is in a 50 year old text book, and is a technique in common use. Still, fighting against a patent like this costs huge amounts of money. The current bar for granting a patent is a 'scintilla of inventiveness'. Seriously, the tiniest little thing that may be blatantly obvious to anyone skilled in the field pertaining to the patent and you're good to go if the examiner isn't also skilled in the particular art he is examining. I've been involved in cases presenting evidence to have a patent re-examined and/or dismissed. That's just my field where someone was paying (a lot of money) to be able to do something that was already in common practice - just gotta spend the money in court to prove it. Copyright is different in that it is supposed to be impossible to accidentally come up with the same copyrighted material. With a patent, it happens in way too many cases. Not because someone read the patent, but because the patent doesn't offer anything substantial enough to be unique in its particular field. Yes, there are some excellent patents that offer huge boosts to innovation. But, the vast majority do not, IMO.
  12. Let me ignore the copyright issue at the moment, and look at the technical aspects of how one would determine if one or multiple IRs were 'copies'. Like music and video, it's pretty much impossible to effectively 'protect' an IR with DRM schemes and the like if the device can be used with the IR alone (and no other effects). This is because an IR can always be recaptured by taking an impulse response of the device that is using it. Cryptography can get you so far, but when you need to decrypt it so the end user can use it, the non-encrypted plaintext must be made available to them, and at that point is exposed to being extracted and the DRM stripped. Once recaptured (or directly extracted/copied), it is then possible to compare two IRs to see how closely they match. That doesn't prove provenance, but it does show that they probably came from the same source. When IR captures are made there are small random variations between responses. One short IR matching may be a low probability coincidence due to being captured with the same mic/cabinet/room combination. A second or third match in the copied pack would make this doubtful. However, again, it still doesn't prove provenance. Although the original person would likely have multiple other responses from the same session that could be used to show that the IR came from that original piece of hardware, and the 'copier' may not even have enough knowledge to demonstrate how they made them. Using spectral analysis it is possible to reduce a waveform (including IRs) to a subset of components that can be statistically compared even if they have been slightly modified. These methods can be used to compare recordings that have gone through further analog processing with minimal distortion/noise added. And would easily show a truncated/windowed version matches a longer impulse.
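As a toy illustration of these comparisons (pure Python with made-up four-sample 'IRs'; real tools would work on thousands of samples with FFT-based methods), a normalised correlation score flags near-identical responses, and, per the linearity point below, subtracting a known constituent from a summed IR leaves the other constituent behind:

```python
# Sketch only: the IR values below are invented for illustration.

def correlate(a, b):
    """Normalised correlation between two equal-length impulse responses."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def extract_residual(combined, known):
    """Subtract a known constituent IR from a combined (summed) IR."""
    return [c - k for c, k in zip(combined, known)]

ir_a = [1.0, 0.5, 0.25, 0.0]
ir_b = [0.0, 0.8, -0.3, 0.1]
combined = [a + b for a, b in zip(ir_a, ir_b)]   # linear sum of the two

residual = extract_residual(combined, ir_a)
score = correlate(residual, ir_b)   # a score near 1.0 means a near-certain match
```

Because convolution and summing are linear operations, the subtraction recovers the second IR essentially exactly; real-world recaptures would add small noise, which is why a statistical score is used rather than sample-for-sample equality.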
When comparing combined IRs, it would be quite simple to check if a resultant IR was made with two given input source files, as the transformation is completely linear. If an IR is made by adding two together, the third IR could be extracted by simple subtraction (or deconvolution) if one of the first IRs is known. Then comparisons could be made as above. The processing steps become more complicated if filtering and other transformations are made, or if you don't have any idea what the original constituent IRs were. But let's say you were CompanyX and suspected CompanyY of having 'done the sneaky' and combined a couple of your IRs. Then technically it is possible to make a piece of software that tested CompanyY's library of IRs for every possible combination of CompanyX's published collection. Now, let me return to the copyright issues. Copyright is designed to protect creative works, so a court would have to make a determination whether or not an IR is a creative work, or some portion of it constituted a creative process. The next hurdle is to show that the 'copier' must have copied your work (either because they told someone, or because it is, by a preponderance of the evidence, statistically highly improbable that a copy (or set of copies) was generated independently). IMO copyright has far exceeded its original mandate, where creators are being protected from someone writing a complete story 'based on someone's characters' (this should be about Trademarks, not copyright), or tiny snippets are being 'protected' because of 'likenesses'. No, copyright is about someone copying someone else's work with minimal additional effort and distributing it themselves. It's about encouraging creative works to be created and published where that would otherwise not have happened. That was in an era where it was hard to publish works, and the protections were really about stopping PublisherA from selling works from PublisherB.
The laws never really did much for the creators of those works, beyond what would have happened without said laws being in place. Copyright terms have been extended to unreasonable lengths at the behest of big media companies that don't want to have to compete with their own older works. That's why it is sometimes referred to as the 'Disney copyright extension act'. Also, the music industry capitalised on the fact that they could let people 'share' music with each other as a form of free advertising and then sell the cassette/LP/CD to people that really wanted the original. Back in the day of cassette tapes, degeneration occurred between copies, so it was always desirable to buy the LP or CD for that boost in quality. These days you can digitally make a perfect copy of a track by sharing the wav or mp3. Courts have protected as 'fair use' the concept of two friends sharing copyrighted media. Where they draw the line is on distribution, where someone uploads a file to a publicly accessible location (torrents/fileshare/etc) where it can be downloaded by anyone. There are three branches of 'IP': copyright, patents (with two subsets, one functional and one design based), and trademarks. They each have different requirements and protection. Copyright protects creative works, which were usually substantial enough that it was obvious if something was a copy or not; they also originally required registration for a 14 year term, with an optional 14 year extension should the copyright holder desire it. These days it is 70 years past the death of the creator with no registration required. Patents are generally reviewed by the USPTO prior to granting, but numerous ridiculously non-inventive/obvious patents are granted every day. These are protected for a 20 year period with no extensions. Functional patents protect the implementation of a concept. Design patents protect the aesthetic 'look and feel'. They are not the same.
Trademarks protect brand recognition, and last for as long as the trademark owner keeps renewing it. (Potentially forever if the company never goes out of business or sells their trademark.) For example, Mickey Mouse's name and image are trademarked, so can never 'fall out of copyright'. Trademark is designed to protect from 'market' confusion where a violation (or similarity) would confuse someone to think the violator was associated with the original brand. It must be registered within a market segment (although big companies register in as many as possible because the cost is negligible compared to the benefits), that's why anyone can sell apples as a fruit, but not 'Apple' computers without running into legal trouble. IRs come under the copyright banner, but are mostly protected by the morality of end users. It is possible to only distribute something under specific contractual terms that are agreed prior to sale, but that would very likely reduce sales volumes to far below the levels of just 'trusting the enduser's moral code'. In the end, it is probably better to leave the water legally untested, as an expensive court case may actually deem that IRs are not sufficiently creative and therefore don't fall under the protection of copyright. If that determination is made, the ethics of copy/distribution changes, and guilt will not be a barrier to re-distribution en masse. If the determination goes the other way, it opens up a legal mine field where only lawyers benefit.
  13. Sorry for the overly technical posts; as an electronics & signal processing engineer I do a lot of DSP programming on other platforms. There are so many variables when designing efficient DSP code that it is actually quite difficult to do things like putting up a single 'DSP indicator' bar. Putting up multiple bars would likely just confuse users. I can see why people get confused about these things. Here's some more relevant technical info for those interested: On a desktop running DAW plugins, the operating system abstracts all the hardware interface complexity, which means code is written more generically. This means adding extra plugins progressively loads up the processor until eventually it can't keep up, and you get clicks/pops. Latencies are also higher, which means the overheads can be minimised by breaking up tasks into bigger chunks that might take 5ms+ to run per 'time slice'. Whereas the Helix is aiming for minimal latency and can't afford the overhead of chopping a memory intensive 'process' in and out, wasting processing time with an extremely short 'time slice'. For example, in a 1 second period, a desktop machine with a 5ms time slice would copy in/out of cache memory 200 times, whereas a low latency platform would need an extremely short time slice of 0.1ms or faster, which would end up with 10000 copies in/out per second. To avoid this whole issue, DSP code is usually implemented with a 'round robin' processing loop that calls various sub-tasks that all must return in a defined amount of time, and are precoded to only use a certain subset of resources to avoid copying things in and out of 'fast' memory. I've run systems that had sub-sub-tasks that shared a block of 'fast' memory for higher latency operations (eg. long tail reverbs) that can be 'chunked' into larger timed blocks (eg. 10ms chunks). But, even that is complicated and can cause hard to predict resource starvation.
A lot of fun can be had in this area, but it gets really complicated and people can easily get confused when the end functionality seems exactly the same as a DAW, even though it has a completely different implementation under the hood. I'm excited to see what Line 6 have done for the Helix Native application. There are likely to be a lot of external similarities with a lot of differences under the hood, as a desktop and a DSP platform are usually optimised very differently.
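The 'round robin' scheme described above can be sketched as a block-based processing loop: every block of samples, each sub-task in the chain runs to completion, in order, within the block's fixed time budget, so nothing is swapped in or out of fast memory mid-block. This is a toy in Python (real DSP firmware would be C/assembly on the DSP itself), and the block size, sample rate and task contents are assumptions for illustration.

```python
# Toy round-robin block-processing loop. Each sub-task is a function
# that takes a block of samples and returns a processed block.

BLOCK_SIZE = 32                            # samples per loop iteration
SAMPLE_RATE = 48000
BLOCK_BUDGET_S = BLOCK_SIZE / SAMPLE_RATE  # time available per block (~0.67ms)

def make_gain(g):
    """Stand-in for a real effect block (amp, cab, EQ, ...)."""
    def gain(block):
        return [s * g for s in block]
    return gain

def process_block(block, subtasks):
    """Run every sub-task exactly once, in order, on one block."""
    for task in subtasks:
        block = task(block)   # each task must finish within BLOCK_BUDGET_S
    return block

chain = [make_gain(2.0), make_gain(0.5)]   # a net-unity two-block chain
out = process_block([0.1] * BLOCK_SIZE, chain)
```

The key property is predictability: because every sub-task runs every block with preallocated resources, worst-case execution time is known up front, which is what lets a low-latency box guarantee it never glitches, unlike a desktop that degrades gradually as plugins are added.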
  14. I have no idea how the code has been written in the Helix, and I haven't ever coded for this series of Sharc. But, when creating certain complex DSP code segments it is sometimes useful to dedicate a section with hand coded 'modules' to use hardware as efficiently as possible. This usually involves preallocated methods of sharing L2 and L3 cache memory, and in some cases dedicated acceleration hardware that would cause too much latency if 'overloaded'. I'm basically trying to say that 'DSP' isn't one giant pool of resource that can be used symmetrically for any purpose. It is composed of a number of orthogonal resources such as various types of memory, DMA engines, processing blocks, accelerators, etc. And not all functions (IRs, cabs, EQ, delay, reverb, synths, etc) use any/all blocks equally. So it is possible that one resource module is depleted by one set of Helix blocks and not by others.
  15. There are hardware acceleration modules in the DSP that may be used by some blocks such as IRs and cabs. Once those modules are being used to capacity, no more can be used. This is orthogonal to general DSP processing. So this limit is probably not arbitrary, but related to DSP resources that don't sit on a linear single dimensional scale of 'DSP usage'.
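The 'orthogonal resources' point in the two posts above can be made concrete with a small resource-accounting sketch. All pool sizes and block costs below are invented for illustration: the idea is simply that each block type draws from several independent pools, and adding a block fails as soon as any one pool runs dry, even if plenty of general DSP headroom remains.

```python
# Toy model of orthogonal DSP resource pools. Numbers are made up.

POOLS = {"cycles": 100, "memory": 100, "accel": 4}   # independent resources

COSTS = {
    "ir":     {"cycles": 10, "memory": 15, "accel": 2},  # needs accelerator slots
    "reverb": {"cycles": 20, "memory": 25, "accel": 0},  # general DSP only
}

def try_add(block_type, remaining):
    """Add a block if every pool can cover its cost; otherwise refuse."""
    cost = COSTS[block_type]
    if any(remaining[k] < cost[k] for k in remaining):
        return False                     # at least one pool is depleted
    for k in remaining:
        remaining[k] -= cost[k]
    return True

remaining = dict(POOLS)
added = [try_add("ir", remaining) for _ in range(3)]
# The third IR is refused: only 4 accelerator slots exist and each IR
# uses 2, even though cycles and memory are still mostly free.
```

This is why a single one-dimensional 'DSP usage' meter can't capture the real limit: in this toy, the patch is "full" of IRs at 20% cycle usage.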
  16. The value of a famous guitarist's setup to the general public is in the fame of the guitarist/band, not necessarily the 'awesomeness' of their tone (which changes regularly and is dependent on other factors outside a single piece of gear like the Helix, including the guitar/pickups/outboard gear/etc, and most importantly the guitarist's technique). If a guitarist wants to keep their setup totally secret, they need to play behind a black curtain, and pack everything up in a lock box whenever it's not being used. They also shouldn't let anyone hear their tone, because it's pretty easy to get a close facsimile of someone else's tone (and sometimes get an even 'better' tone) if you know what you're doing and have a recording (ie. one that you made with your phone, or found on Facebook/etc). Just because someone might consider that something is 'wrong', doesn't mean it actually is. There are digital mixing consoles that have password options, but those are most useful for installed applications where the person setting them up isn't the same as the people using them. So I can set up a digital mixer with a couple of 'default' patches that allow the entire desk to be pre-loaded with EQ/dynamics/levels/etc so a completely inexperienced user can make simple adjustments like fader movements/etc. This doesn't apply to the Helix as patches need to be tweaked for each player. I prefer a simple fixed 'button lock' combination to lock/unlock a piece of equipment where necessary. And even those are annoying if the unit gets locked and you forget or don't know the button combo to unlock it - at least you can look up the manual to find the unlock combination (which will be common to that brand and model of device). If someone was being malicious, they can always run a full factory reset on the device, which wipes all the settings and clears the passwords.
That option will always be available on a non-security oriented device like the Helix, otherwise customer support has to field all the "I forgot my password" calls.
  17. This feature has a negative value IMO. The risk element of someone stealing your patches is incredibly low. Even if they do, it's incredibly unlikely to negatively affect the Helix owner. Accidental modification might be an issue, but better solutions have been described above to solve this problem. That leaves malicious behaviour. Incredibly low chance of happening, and it's much easier to just pour beer over the Helix or stomp on the connectors if damage is the goal. The problem with password options is that they must be set if you want to prevent a malicious (or even well meaning) person from setting a password and locking you out of your own unit. The simple fact of its existence will inconvenience people that would never even consider using it.
  18. Thanks for finding that out! The info in your post is gold.
  19. Wireless digital transmission isn't just ADC/DAC stages. The data has to be packetised, and time set aside for retransmission(s) (and/or forward error correction) should there be an error. This is especially necessary in a crowded band like the 2.4GHz ISM band. Those functions can take as little as 1.7ms in something like the Line6 G70 or 12ms in the XVive U2, or as long as the few hundred milliseconds you see in Bluetooth A2DP audio. The low bandwidth handsfree profile is still on the order of 20ms. I'm not buying a product from overseas that's going to cost me $50 to return if it turns out I'm too sensitive to the delay. I don't like using amp sims on my iPad because of latency. I suspect this may be similar, but no published specs exist to convince me otherwise.
  20. A differential measurement would be done using a microphone (even the in-built mic) put up against the headphones: start the DAW recording, and play a track that contains something with a sudden onset that can easily be recognised in the recorded track. Measure the time between the playback and recording. Then repeat the same test with a wired headphone. Assuming the wired headphones are our 'zero' latency baseline, the difference between the two results will equal the latency of the wireless link. Unfortunately, in this part of the world stores don't have those great X day return policies (unless there is something defective with what you've bought, or maybe if the package is unopened and you're exchanging it for something else). There are a few exceptions, but those are brand dedicated shopfronts like the Bose and Apple stores.
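The differential measurement described above boils down to finding the 'sudden onset' in each recorded track and differencing the times. A minimal sketch, assuming you've already exported the playback and recorded tracks as sample arrays (the threshold, sample rate and sample offsets here are made up for the example):

```python
# Sketch of the differential latency measurement. In practice the
# arrays would come from DAW recordings at a known sample rate.

SAMPLE_RATE = 48000
THRESHOLD = 0.5   # assumed onset detection level (normalised samples)

def onset_index(samples):
    """Index of the first sample whose magnitude crosses the threshold."""
    for i, s in enumerate(samples):
        if abs(s) >= THRESHOLD:
            return i
    raise ValueError("no onset found")

def latency_ms(playback, recording):
    return (onset_index(recording) - onset_index(playback)) * 1000.0 / SAMPLE_RATE

# Fabricated example data: the wired recording lags playback by 480
# samples (10 ms acoustic + interface round trip), the wireless one by
# 1440 samples (30 ms).
playback = [0.0] * 100 + [1.0] + [0.0] * 2000
wired    = [0.0] * 580 + [1.0] + [0.0] * 1520
wireless = [0.0] * 1540 + [1.0] + [0.0] * 560

wireless_extra = latency_ms(playback, wireless) - latency_ms(playback, wired)
```

Because both runs share the same mic placement, interface buffering and acoustic path, those common delays cancel in the subtraction, leaving just the wireless link's latency (20 ms in this fabricated example).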
  21. This is probably because it needs to detect the FS1/FS7 combined press without activating one or the other first (depending on which you hit a few milliseconds before the other). Doing it on release gets rid of that problem.
  22. I'd love it if someone actually measured the latency with a DAW. And compare it to the same measurement made with wired cans. Sennheiser doesn't say what it is, and my friend couldn't use them for gaming, and said he needed to delay videos by around 20ms to match sync compared to wired headphones. Maybe he's super sensitive and overestimating. Maybe he's spot on. If he still had them I'd borrow and measure. If you have a DAW and a mic, many people including myself would be very appreciative if you could run up a quick test.
  23. Ground lift only affects the XLR outputs, because the XLR is a balanced output. You can't lift the ground on a single ended output like the 1/4" unbalanced out.
  24. An imperfect analogy on my part. You're right, the high frequency roll off with distance and extra multi path room acoustics makes it a bit easier than a pure delay directly into headphones - or even into a local speaker that is only a few feet away. I had to sing right underneath an installed sound FOH speaker that was delayed by around 30ms to compensate for sub positioning, and that was really disconcerting. Not quite as bad if I'm singing and the speaker is 30 feet away. I can tell there's a delay, but the folded back sound feels less direct, and is usually not quite as loud.
  25. It is, but the difference with headphones is you don't hear the original signal, only the delayed signal, so it doesn't sound like an echo. It's like playing while standing 30 feet away from your monitor speaker. I find it distracting once the pure latency gets beyond about 25ms (less than 15ms I don't even notice, and up to 25ms I forget it's there after a while). Other people may have different thresholds before they consider the latency distracting. Pipe organists in cathedrals and big churches have to deal with it when the pipes are 150' away from the console. They basically have to concentrate on their fingers/feet and ignore (or embrace) the delayed sound coming back to them well over 100ms later. That leads to slower passages with lots of legato, and tempos that are related to the inverse delay time. I like the suggestions above for using a wireless IEM system and plugging headphones into the receiver instead of IEMs (unless you prefer using IEMs).
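The distances quoted in these last posts convert to delay with simple arithmetic: acoustic delay is distance divided by the speed of sound (roughly 1125 ft/s in air at room temperature).

```python
# Acoustic propagation delay: delay = distance / speed of sound.

SPEED_OF_SOUND_FT_S = 1125.0   # approximate, in air at ~20 degrees C

def delay_ms(distance_ft):
    return distance_ft / SPEED_OF_SOUND_FT_S * 1000.0

monitor_30ft = delay_ms(30)    # monitor speaker 30 feet away: ~27 ms
organ_150ft  = delay_ms(150)   # organ pipes 150 feet away: ~133 ms
```

So a monitor 30 feet away sits right at the ~25ms distraction threshold mentioned above, and a 150' pipe-to-console distance gives a one-way delay over 130ms.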