
Why 2 separate dedicated processing paths or "pools"?


Nos402

With the Helix, they mention that since each of the 2 paths has its own pool of processing power, that if you are going to have a lot of processing such as dual amps and/or lots of effects, it's best to use both paths to get maximum processing power.

 

So why not just have 1 big pool of processing power that just gets used up as necessary?

 

By way of example, let's say for simplicity that you have 8 effects/amp blocks that each use 10% of the unit's TOTAL processing power.

 

As it stands now, Path A has 50% of that power and Path B has 50% of that power, so each path could only accommodate 5 blocks before you'd need to start using the other path.

 

Why not just make the processing completely dynamic? Use 100% on Path A, or B, or any combo of the two as long as it totals 100%. Why restrict each path to a maximum of 50% of the total processing power in the unit?
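To make the arithmetic concrete, here is a small sketch of the example above. All numbers are the hypothetical 10%-per-block figures from this post, not real Helix DSP costs:

```python
BLOCK_COST = 10  # percent of TOTAL processing power per block (hypothetical)

def fits_fixed_pools(blocks_on_a, blocks_on_b, pool_size=50):
    """Each path has its own 50% pool; blocks must fit their own path's pool."""
    return (blocks_on_a * BLOCK_COST <= pool_size
            and blocks_on_b * BLOCK_COST <= pool_size)

def fits_dynamic_pool(total_blocks, pool_size=100):
    """One shared pool: only the total load matters, not where blocks sit."""
    return total_blocks * BLOCK_COST <= pool_size

print(fits_fixed_pools(6, 0))  # False: 60% won't fit in one 50% pool
print(fits_fixed_pools(3, 3))  # True: the same 6 blocks fit once split
print(fits_dynamic_pool(6))    # True: a shared pool doesn't care where they sit
```

The same 6 blocks either fit or don't fit depending only on how they are arranged across the two fixed pools, which is exactly the restriction being questioned.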


With the Helix, they mention that since each of the 2 paths has its own pool of processing power, that if you are going to have a lot of processing such as dual amps and/or lots of effects, it's best to use both paths to get maximum processing power. ... Why restrict each path to a maximum of 50% of the total processing power in the unit?

 

 

Beyond the programming challenges this might present (maybe it's not even possible without a complete reworking of Helix's firmware from the ground up), what would be the advantage?

 

I actually have an Idea in Ideascale for this: http://line6.ideascale.com/a/dtd/Dynamically-allocate-DSP-resources/795206-23508

 

I see at least two potential advantages. One is more flexible routing: I find that sometimes I have to alter whether I put an effect before or after my amp(s), or start juggling the placement of my amp(s), depending on how the DSP fills up on each route (1 or 2). The second advantage is that when you do fill a route (pool) up, you probably don't use exactly the amount of DSP available; you just get close enough to the limit that you can't fit another effect in. If you had both of those fractions from each route summed, it might be enough for an extra effect, especially one that uses a small amount of DSP. I grant you the potential "extra" effect is of less note than the routing flexibility, but the routing flexibility limitation is definitely one I have encountered in regular use.
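The leftover-fraction point can be illustrated with made-up numbers (the usage and cost figures below are invented for the sake of the example; real block costs vary):

```python
POOL = 50                        # each path's pool, in arbitrary DSP units (assumed)
path1_used, path2_used = 46, 47  # usage after filling each path (invented figures)

leftover1 = POOL - path1_used    # 4 units free on path 1
leftover2 = POOL - path2_used    # 3 units free on path 2

small_effect_cost = 6            # a lightweight effect (invented cost)

# Separate pools: neither remainder alone can host the effect.
print(leftover1 >= small_effect_cost or leftover2 >= small_effect_cost)  # False
# One combined pool: the summed remainders would have room for it.
print(leftover1 + leftover2 >= small_effect_cost)                        # True
```

The stranded fractions are individually useless but jointly large enough for one more small block.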


With the Helix, they mention that since each of the 2 paths has its own pool of processing power ... Why restrict each path to a maximum of 50% of the total processing power in the unit?

I actually have an Idea in Ideascale for this: http://line6.ideascale.com/a/dtd/Dynamically-allocate-DSP-resources/795206-23508 ... the routing flexibility limitation is definitely one I have encountered in regular use.

 

I just noticed that I also mentioned the possibility of improved performance and reduced latency from pooled processors when I originally posted the Idea in Ideascale.

 

"Pooled processors can also potentially reduce latency issues, like switching between presets and the time it takes to process the guitar signal. The throughput speed can be increased because more inputs and outputs are available to pipe the signal with multiple processors pooled." Data that might be piped through, let's say for the sake of argument, 8 inputs/outputs on one processor can now be split across and simultaneously sent to 16 inputs/outputs on two processors. In other words, you get the benefit of additional "pipes" on each processor, similar to the way many operating systems and software leverage multiple cores and even multiple processors on modern PCs.
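As a loose analogy only (this is not how Helix's firmware works), here is how splitting one buffer across two pooled workers uses twice the "pipes" in ordinary software. `process` is a stand-in for whatever per-sample work a DSP would do:

```python
from concurrent.futures import ThreadPoolExecutor

def process(samples):
    """Stand-in for per-sample DSP work (here, a simple gain reduction)."""
    return [s * 0.5 for s in samples]

buffer = list(range(16))
mid = len(buffer) // 2

# Two pooled workers each take half the buffer, so twice as many "pipes"
# are moving data at once compared with a single worker.
with ThreadPoolExecutor(max_workers=2) as pool:
    halves = pool.map(process, [buffer[:mid], buffer[mid:]])

result = [s for half in halves for s in half]
assert result == process(buffer)  # same output, work split across two workers
```

Whether real-time audio DSPs would see the same benefit is exactly the open question in this thread.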


I don't see how routing flexibility would be changed at all by this. Essentially, what's there now is what there would be if you were able to put 100% of the DSP (both processors) on one path. Nothing really changes. There would still be only one long path from start to end, with the ability to add two splits and merges. Of course, there are still the other options of multiple inputs and outputs for routing possibilities.

 

I'm more curious why they allow only one split and merge per processor. More of those would be extremely useful. But there are probably technical reasons none of us know about for why they did it the way they did, both the processor allocation and the limited splits.

 

No clue about any of the reduced latency stuff. Would it be enough to make any sort of perceivable difference?

 

One thing that's very likely is that DSPs are not at all similar to mass-market computer processors in the way they are designed and programmed.


I don't see how routing flexibility would be changed at all by this. Essentially what's there now is what there would be if you were able to put 100% of the dsp (both processors) on one path. Nothing really changes. There would still be only one long path from start to end with the ability to add two splits and merges. ...

 

See my point above about not having to juggle high-DSP-usage items like amps. You would just put them wherever you wanted without worrying about having to split them across routes, which also dictates how much DSP each route has left for effects before or after the amp(s). That way, for instance, if you wanted more of your mod effects placed after your amp(s) and fewer before, you would not have to worry about where your amps were placed. You generally cannot place three amps on the first route, for instance; even if you have few FX placed before the amps, you have to split the amps across routes because they are so DSP-intensive, which means you have less DSP on your second route for FX after the amps.

 

...

I'm more curious why they allow for only one split and merge per processor. More of those would be extremely useful. But there's probably technical reasons none of us know about for why they did it the way they did, both processor allocation and limited splits.

...

 

 

Absolutely, more splits and merges would be great! I have also wished they allowed these; they are particularly useful when you can't do scenes. They would have made possible several presets I would like to program.

 
 

....

No clue about any of the reduced latency stuff. Would it be enough to make any sort of perceivable difference?

 

One thing that's very likely is DSP processors are not at all similar to mass market computer processors in the way they are designed and programmed.

 

 

You are right to retain a healthy skepticism regarding latency until this approach has actually been attempted. I only know that pooling multiple processors (or cores) has a huge impact on throughput and program execution speed with other software and hardware. I suspect that DSPs and audio software are not so different from other processors and software that they would not ultimately benefit from an approach that is time-tested, yields huge benefits, and has been around for years.


See my point above about not having to juggle high DSP usage items like amps.  You would just slap them wherever you wanted without worrying about having to split them across routes which also dictates how much DSP each route has left for effects before or after the amp(s).

 

Ah ha. So it's the convenience of it.

 

I've quickly learned with Helix to split things up among the paths. And the more you use it, the easier it becomes to almost instantly recognize where or how much of something you can put on one path. I honestly have yet to worry about any DSP usage issues.


Ah ha. So it's the convenience of it. ... I honestly have yet to worry about any DSP usage issues.

 

I actually have seen the out-of-DSP message on several occasions, but some users may never see it; it depends on how you use the Helix. Although the convenience of not having to juggle is nice, that is not my main point. The placement of high-DSP-usage items like amps, which is determined by the max DSP available on each route, actually dictates how much DSP there is before or after your amp placement. That issue has nothing to do with convenience; it is about the balance of how much DSP is available before and after multiple amp placements.

 

Right now, in order to use multiple amps you generally have to split them across routes. This leaves roughly an even amount of DSP available before and after the amps for FX placement. Sometimes you don't want an even amount of DSP before and after the amps; sometimes you need more DSP after, and sometimes you need more DSP before the amps. It depends on which FX you are using and where they are most ideally placed in the chain. That is the issue.


So, a little OT and I apologize in advance, but I just want to make sure I'm understanding this right.

When you have two separate internal signal paths, am I correct in assuming that you can assign a footswitch to route the guitar input to either Path A or Path B? So then you could have signal path A for acoustic sounds (if your electric guitar has piezos, and mine does) and then hit the switch and it would route the signal to Path B for electric sounds? Is that correct?

 

I wouldn't be asking if the manual were a bit more clear about this stuff.  It's pretty vague as far as I can tell. Kinda ticks me off.


When you have two separate internal signal paths, am I correct in assuming that you can assign a footswitch to route the guitar input to either Path A or Path B? ...

You absolutely can do this, and it is actually another example where, depending on how complex your preset is and how much DSP it uses, you can be forced to juggle due to having two DSP pools instead of one combined pool. To be honest, though, I generally find that my acoustic path requires so much less DSP than my electric path that it is not an issue. I posted a template in CustomTone that does exactly what you are asking about. You can find it here:

 

http://line6.com/customtone/tone/1460280/


You absolutely can do this ... I posted a template to do exactly what you are asking about in CustomTone.

 

Sir, you are a gentleman and a scholar.  Thank you!


I actually have seen the out of DSP message on several occasions ... Sometimes you need more DSP after and sometimes you need more DSP before the amps; it depends on which FX you are using and where they are most ideally placed in the chain. That is the issue.

 

Ok. I think I see now where there could be limited use for something like this. So, for example: you want a dual-amp setup using Path A (1 and 2), but you also want a second, simpler, less DSP-intensive setup using Path B while still having all the routing options available on Path B (1 and 2). That's not possible now, because the Path A setup would encroach into Path B.


Ok. I think I see now where there could be limited use for something like this. ...

 

Exactly! That would be one of several possible scenarios where it is more of a challenge to use the DSP with two routes (one DSP each) than it would be with one route pooled across two processors.


Maybe having one processor capable of double duty with low latency was cost prohibitive for their target market?

 

We may be saying the same thing, but I wanted to clarify just in case. I am not suggesting one processor doing "double duty," although I do think down the road we will see multi-core DSPs. In a way, what I am proposing is the opposite: that two or more processors do "single duty", i.e., that internal input/output (throughput) and processing tasks be split across and simultaneously executed by multiple processors. I am speculating on the possibility of performance increases, additional routing capabilities, ease of use, and latency decreases from pooling multiple DSPs (I realize "DSP processor" is redundant, as the "P" in DSP stands for "processor").


We may be saying the same thing but I wanted to clarify just in case. ...

As a matter of fact, it appears that Texas Instruments already has an eight-core DSP, the C6678. I have no idea whether this processor is even remotely suited to Line6's requirements, but at about $79 per chip, this technology is probably going to keep coming down in price. Analog Devices, the company that makes the SHARC ADSP-21469 chips in the Helix (at least the last time a reviewer opened one up), also makes multi-core DSPs, although I don't believe the one in the Helix is multi-core. Digital_Igloo would know for sure.

 

 http://www.ti.com/lsds/ti/processors/dsp/c6000_dsp/overview.page?DCMP=DSP_C6000&HQS=ProductBulletin+OT+c6000dsp


When it comes to Helix's dual-DSP architecture, there are generally four reasons why we didn't do X, Y, or Z:

  • It's impossible
  • It'd result in notably increased latency
  • It's really difficult and/or time-consuming and we have bigger fish to fry (every feature you guys request pushes back another request, which is why we have IdeaScale to help prioritize things)
  • The current implementation is required for future plans we can't talk about right now

Yes, being able to add whatever you want, wherever you want, on any path means that with some presets, you might have to do a bit of planning. The alternatives are all much worse—super-restrictive model allocation (one Amp+Cab, one delay, one reverb, that sort of thing), 30-50% fewer simultaneous effects, tons of wasted DSP, the inability to freely copy/paste blocks, etc.


...

Yes, being able to add whatever you want, wherever you want, on any path means that with some presets, you might have to do a bit of planning. The alternatives are all much worse—super-restrictive model allocation (one Amp+Cab, one delay, one reverb, that sort of thing), 30-50% fewer simultaneous effects, tons of wasted DSP, the inability to freely copy/paste blocks, etc.

 

Strongly agree; Line6's approach provides maximum flexibility given the current architecture. But architecture and building materials can change... :D


When it comes to Helix's dual-DSP architecture, there are generally four reasons why we didn't do X, Y, or Z ... The alternatives are all much worse—super-restrictive model allocation, 30-50% fewer simultaneous effects, tons of wasted DSP, the inability to freely copy/paste blocks, etc.

 

Actually, having one "super path" does not automatically imply restriction. It implies, however, dynamic (at patch-design time) resource allocation between the 2 DSPs to optimize the processing power consumed by the blocks added. From a software-architecture standpoint, processing resources should ideally be virtualized: you never tell your computer which processor (or core) to run a process or a thread on; the OS takes care of that for you. This ideal solution, however, means that the operating system is more complex, and virtualization of the resources adds a little overhead.

There is little argument that a virtualized processing-resource pool would be "better" from a patch-programming standpoint: you'd never have to move blocks from one path to the other to find the optimal arrangement that accommodates them all within the available resources. It would, however, make the operating system cost more and be more complex (and therefore bug-prone); it would probably "eat" away a little processing power (though not necessarily), and it would certainly use more memory.
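A minimal sketch of the kind of design-time allocator described here, assuming made-up block costs and two 50-unit DSPs. A real allocator would also have to keep adjacent blocks of a signal chain on the same DSP (a model cannot span chips), which this first-fit toy deliberately ignores:

```python
def allocate(blocks, capacities=(50, 50)):
    """First-fit: assign each block to the first DSP with room.

    Returns a list of (block, dsp_index) pairs, or None if some
    block cannot fit on any DSP.
    """
    free = list(capacities)
    placement = []
    for name, cost in blocks:
        for dsp, room in enumerate(free):
            if cost <= room:
                free[dsp] -= cost
                placement.append((name, dsp))
                break
        else:
            return None  # over budget on every DSP
    return placement

patch = [("amp1", 30), ("amp2", 30), ("delay", 15), ("reverb", 20)]
print(allocate(patch))
# [('amp1', 0), ('amp2', 1), ('delay', 0), ('reverb', 1)]
```

The user just adds blocks to one logical path and the system finds a placement; this is the virtualization being argued for, at the cost of a more complex OS.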

I never really cared much for the "dual path" approach, as I mostly use "complex" single paths, but I always found the compromise acceptable considering that the alternative would have been an SMP OS. That would only make sense, I think, if these devices ran a full-fledged OS like (embedded) Windows or Android (not being real-time, it would be a challenge) -- a significant change in how Line6 operates at the moment. I know a few keyboards that work that way.

 

All in all, making things "right" at this stage would, I speculate, introduce many more head-scratchers than leaving the static DSP-to-path association. I believe things will eventually change (not for the Helix), but for the time being it seems like a perfectly workable arrangement.


Actually having one "super path" does not automatically imply restriction. ...

 

I'm not an engineer, but I was in all the meetings years ago where people way smarter than I am discussed and debated this ad nauseam for weeks. Tons of "why don't we do this?" followed by "because when the user does x, y, or z, this breaks." "Oh crap, yeah."

 

First of all, embedded dual-DSP architectures are very different from multicore processing in UNIX/Windows. A single model cannot span multiple DSPs; every time you add or move a block where it might require a jump from one DSP to the other, all sorts of insanity ensues—latency could double, random short audio dropouts might happen, you might not be able to add a reverb before a delay, but you could after a delay—WHA?! There are always things we can do to improve things, and perhaps the next generation of SHARCs will have more robust management, but suffice to say, everything we did was for one or more of the four reasons outlined in my post above.

 

At one time, Helix had two home screens—one with a pedalboard, and one with a rack—and you'd press HOME to toggle between the two of 'em. It was an artificial restriction imposed on the user to make the separate DSPs appear even more separated, so any behavior would look like a design decision and not a technical one. We later decided that the workflow of putting both pedalboards on one page was vastly superior, and if/when people started getting geeky, we'd just deal with it.


First of all, embedded dual-DSP architectures are very different from multicore processing in UNIX/Windows. A single model cannot span multiple DSPs;

...nor can a thread span two cores on an SMP system, no matter what the thread is doing (be it audio processing, video processing, or defragging your hard drive).

The architectures are mostly different because DSPs are not general-purpose and have strict real-time requirements. That does not mean OS theory does not apply to them (it applies to all embedded systems, certainly not just DSPs); it is, however, a fact that an embedded system does not have all the resources available to a desktop OS (not that I was suggesting running Windows on a Helix!).

 

every time you add or move a block where it might require a jump from one DSP to the other, all sorts of insanity ensues—latency could double, random short audio dropouts might happen, you might not be able to add a reverb before a delay, but you could after a delay—WHA?! There are always things we can do to improve things, and perhaps the next generation of SHARCs will have more robust management, but suffice to say, everything we did was for one or more of the four reasons outlined in my post above.

Absolutely! That's what an OS does -- resource allocation -- and that is why it is all the more difficult with real-time (or near-real-time) requirements like the Helix's. I realize I sounded as if I was criticizing the way the Helix is engineered; I didn't mean to at all. I was just making the point that it is theoretically possible and desirable, but not necessarily feasible without over-engineering, raising costs and prices and everything related. As a fairly steady returning Line6 customer, I don't think a bad engineering choice was made, apart from leaving the on/off button out of the HD line. :)

 

At one time, Helix had two home screens—one with a pedalboard, and one with a rack—and you'd press HOME to toggle between the two of 'em. It was an artificial restriction imposed on the user to make the separate DSPs appear even more separated, so any behavior would look like a design decision and not a technical one. We later decided that the workflow of putting both pedalboards on one page was vastly superior, and if/when people started getting geeky, we'd just deal with it.

This was indeed the right choice, because now people at least have the ability to allocate resources manually to take advantage of both DSPs in a relatively, if not absolutely, transparent way. Joining the two paths in software is what I was missing from the X3 line.

