
POD HD Bean D/A Converter Quality...?


cabir

hullo fellow citizens, i'm planning to take my music project live soon, and since the pod hd bean/desktop has balanced outs, i'm thinking of using it to send a stereo sum of everything straight to FOH. i'll be running a laptop setup with apple's mainstage, so there'll be backing tracks, live looping, midi parts, effects, and guitar of course. it's kind of a one man band setup. the plan is to send a stereo summed mix from mainstage to FOH through the hd bean's outs.

while the hd bean works great in my bedroom setup, i was wondering if anyone has any experience with the sound quality of the D/A converters on the HD series. i understand that the HD amp tones sound fantastic and that already involves D/A conversion, but i'm concerned with a slightly larger picture here. there's no doubt the HD amps and effects sound great going direct when playing live, but in that case they're mixed with a ton of other instruments and the output isn't heard in isolation. since i'd be sending my entire music through it, i just want to be sure the quality isn't going to fall short.

another question: how loud are these balanced outs anyway, again in a FOH setting? they're pretty loud at home.

i read something about the pod x3 series using cirrus logic converters, but i couldn't find any info on the converters in the HD series. would be great if someone could throw light on that too. i have a native instruments card with cirrus logic converters, and honestly, i really can't tell the difference between the NI interface and the L6 pod hd bean, output-wise.

thanks guys, cabir.
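For reference, a rough way to check how close two interfaces' outputs really are is a loopback null test: play the same test file out of each, re-record it, level-match the two captures, and subtract. Below is a minimal numpy/soundfile sketch of that idea; the file names are hypothetical and it assumes both captures are already trimmed to start at the same sample.

```python
# Rough loopback null test: the deeper the residual, the closer the two signal paths.
# File names are hypothetical; captures are assumed trimmed to a common start point.
import numpy as np
import soundfile as sf

def rms_dbfs(x):
    """RMS level in dBFS, with digital full scale = 1.0."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

a, sr_a = sf.read("loopback_pod_hd.wav")   # capture made through the POD HD outs
b, sr_b = sf.read("loopback_ni_card.wav")  # capture made through the NI interface
assert sr_a == sr_b, "captures must share a sample rate"

# collapse to mono and trim to a common length for a crude comparison
a = a.mean(axis=1) if a.ndim > 1 else a
b = b.mean(axis=1) if b.ndim > 1 else b
n = min(len(a), len(b))
a, b = a[:n], b[:n]

# match levels before subtracting, so plain gain differences don't dominate the null
b = b * (np.sqrt(np.mean(a ** 2)) / (np.sqrt(np.mean(b ** 2)) + 1e-12))

print(f"signal level : {rms_dbfs(a):6.1f} dBFS")
print(f"null residual: {rms_dbfs(a - b):6.1f} dBFS")  # more negative = harder to hear a difference
```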


I don't have anything to say in regards to the D/A converters (my ears aren't trained enough to hear these differences, yet). I HIGHLY recommend you rethink your summed-output option, though. You've now removed the purpose of the FOH guy, and really, you're more than likely setting yourself up for failure. Every room is different. Every PA is different. Sending a single stereo signal from the Pod to the FOH console is just going to be a disaster. He will need independent control over your different parts, be it the backing tracks, the guitar tone, the MIDI stuff, whatever. Use your NI interface and separate as many of these elements to different outputs as possible. I've personally run sound for a 2-person group that did something similar to what you're describing. Sure, it took 3 DI boxes, but trust me, it is a better option.



thanks for the input, hdprojohn, you have a fair point. this is what i think...

 

i've actually considered sending 4 stereo signals to FOH, with different layers etc., but the thing is that a lot of my music is experimental, and the production on every track is not the same. if every song had a bass guitar and drums playing in their respective places, i could give those out separately to the engineer, but the number of things that change from track to track is so large that explaining the whole set to a different engineer at every gig would not be possible. in fact, i very strongly feel that the engineer having control over my layers and guitar tone would actually increase the probability of a screw-up.

 

yup, every room and PA is different, but think of it this way: if you like a song (anything from GnR or coldplay or <insert name here>), you don't take a different mix/master to play that track at different places. it's the same one. similarly, if i have a song with 50 layers on average, it's not like different rooms will make the guitar louder or the drums softer; the room affects frequency bands, and that can easily be rectified with an EQ either on my master bus or on the FOH engineer's desk. some venues have too much reverb, some too little; again, it's a simple adjustment on my master bus. so much so that once i play a venue, i could just save the master channel strip as a preset for the next time i play there, so the room acoustics are all compensated for and i don't have to change things in the mix every time.
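That per-venue preset idea is easy to keep organized even outside the DAW. Here is a minimal sketch of storing and recalling per-venue master-bus EQ adjustments as JSON; the venue name, band labels, gain values and file path are purely illustrative, not MainStage data.

```python
# Minimal sketch: per-venue master-bus EQ trims saved as JSON so a room's
# corrections can be recalled the next time you play there.
# Venue name, band labels and file path are illustrative only.
import json
from pathlib import Path

PRESET_FILE = Path("venue_master_presets.json")  # hypothetical location

def load_presets() -> dict:
    return json.loads(PRESET_FILE.read_text()) if PRESET_FILE.exists() else {}

def save_preset(venue: str, eq_gains_db: dict) -> None:
    presets = load_presets()
    presets[venue] = eq_gains_db
    PRESET_FILE.write_text(json.dumps(presets, indent=2))

# example: tame some low-mid boom and a harsh top end noted during soundcheck
save_preset("small_club_A", {"100Hz": -2.0, "400Hz": -3.5, "4kHz": -1.5, "10kHz": -1.0})
print(load_presets().get("small_club_A"))
```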

 

having said that, apart from general sound production, i also mix and master music, so stems and channel strips from Logic pretty much load up in MainStage as is, and i don't have to worry about the live mix being very different from the original production, except that i might choose to treat things differently for a live setting, maybe make them a little less crowded.

 

and there's a ton of musicians these days who are one-man acts with a laptop and a bunch of midi devices, and they handle everything themselves. i don't think it should really be a problem. (benn jordan/the flashbulb, joe gore, christian fennesz, dualist inquiry, to name a few)

 

this is my pov, and i'm open to discussion about it. would be great if more people could chime in on this too. maybe i really am missing something.
 
on a side note, i'm from india, and 9 out of 10 gig stories here are about how the engineer just didn't know what he was doing. i recently spoke to one of the best live engineers in the business here about some other questions i had, and the first thing he asked when i told him about my setup/rig was "why do you want to give the engineer all that control in the first place? since you know your music better than them, just do it yourself." this was when i was still thinking of sending 4 stereo tracks to FOH.

If you cannot hear any difference, then any difference is negligible and won't be a concern for a live show.

 

It is only for the highest-quality recording or mastering studio uses that you would bother, and you have to spend way more than the HD costs to get a noticeable difference. And it'll only be noticeable on high-end monitors in a properly treated room.

 

It is only recently that USB interfaces have improved as full-duplex interfaces.

 

It is not clear to me how you route from the laptop: does everything go into the Bean, get combined, and then go out, or does the laptop have its own interface?

The stock sound cards in laptops are unsuitable for anything except playback, and even then they offer consumer rather than professional reliability.

 

How long are the cable runs to FOH?

 

A separate professional interface, even a mixer-style one, might give you the best control and reliability. It is an option if it is FireWire or the latest in USB, and then you could just go analogue in from the POD.

 

You might already have an acceptable rig setup.

Link to comment
Share on other sites

thanks bjnette... individual replies below...

 

If you cannot hear any difference, then any difference is negligible and won't be a concern for a live show.

ya, i can't hear any difference at a studio volume level at least.

It is only for the highest-quality recording or mastering studio uses that you would bother, and you have to spend way more than the HD costs to get a noticeable difference. And it'll only be noticeable on high-end monitors in a properly treated room.

yep, i understand and agree. 

It is only recently that USB interfaces have improved as full-duplex interfaces.

 

It is not clear to me how you route from the laptop: does everything go into the Bean, get combined, and then go out, or does the laptop have its own interface?

The stock sound cards in laptops are unsuitable for anything except playback, and even then they offer consumer rather than professional reliability.

well, everything is happening in mainstage running on the laptop, and the final stereo output goes out through the pod acting as the sound card. so the laptop is connected to a bunch of usb devices which are all controlling various aspects of the music in mainstage, whether it's the mix, midi instruments, looping, effects, etc. the guitar is plugged into the pod and gets processed there, but it isn't going straight out; that too is routed into mainstage for post effects and processing, and then mixed into the master out.
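To make that routing concrete, here is a rough numpy sketch of the summing stage being described: each source (the POD-processed guitar, backing tracks, midi parts) is a stereo buffer with its own fader gain, summed into one stereo master and checked for headroom. The source names, gains and noise buffers are placeholders, not values from the actual rig.

```python
# Rough sketch of the summing stage: several stereo sources, each with a fader
# gain in dB, mixed down to one stereo master. Names and values are placeholders.
import numpy as np

SR = 48_000  # sample rate assumed for the 1-second example buffers

def db_to_gain(db: float) -> float:
    return 10 ** (db / 20)

rng = np.random.default_rng(0)  # stand-in audio; in practice these come from the rig
sources = {
    "guitar_post_pod": (rng.standard_normal((SR, 2)) * 0.1, -6.0),
    "backing_tracks":  (rng.standard_normal((SR, 2)) * 0.1, -4.0),
    "midi_parts":      (rng.standard_normal((SR, 2)) * 0.1, -8.0),
}

master = np.zeros((SR, 2))
for name, (buf, fader_db) in sources.items():
    master += buf * db_to_gain(fader_db)

peak_dbfs = 20 * np.log10(np.max(np.abs(master)) + 1e-12)
print(f"master peak: {peak_dbfs:.1f} dBFS")  # keep this comfortably below 0 dBFS
```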

 

not using the laptop soundcard at all. maybe to connect a pair of headphones for a click track or something at the most. 

How long are the cable runs to FOH?

not sure yet, haven't started gigging. but there are balanced outs on the hd, so it shouldn't be a problem. plus the cable length will vary from venue to venue anyway.

A separate professional interface, even a mixer-style one, might give you the best control and reliability. It is an option if it is FireWire or the latest in USB, and then you could just go analogue in from the POD.

that could definitely work too. i want to travel as light as possible, actually, so i'm probably going to get something like a korg nanokontrol and use it as a control surface for the mixer in mainstage, so almost everything will be done in the box.

You might already have an acceptable rig setup.

that's what i thought ;D.

 

thanks man. all help appreciated. 


 


I don't really think being from India changes anything; either the FOH guy sucks or he doesn't, and that's universal. Here's my point against your rebuttal (and again, it's your music, your show, your fate, etc. Do as you wish, I'm just a random dude on the internet). I don't really care how experimental anything is, there are basics of music. Look at it this way: Dave Pensado (uber-famous producer) breaks his mix sessions into 4 sub-groups: drums, vocals, music, effects. I don't care what you're running, your setup isn't going to be so experimental, or so abstract, that it can't be broken down into sonically grouped sections that could benefit from similar audio treatments and adjustments. These might change song to song, but so what?

Point is, I wouldn't want you to send me a summed signal and then all of a sudden there's an effect in there with sub-bass hits and I have no control to seat it in the mix. Or, perhaps, I'm worried that you're going to drive my bins too hard, or your settings for your high end might be jammin' at home on a synth, but in the room, you're destroying people's ears. If that's the case, and all I have is a summed signal, you'd better believe I'm cutting whatever sounds awful. Your ENTIRE mix is going to suffer from that one adjustment, whether you make the change on your master bus or I make it at the console.

 

This point here, "you don't take a different mix/master to play that track at different places. it's the same one.", is all wrong to me. The very purpose of mastering is so that a recorded song sounds the SAME across as many playback devices as possible. Your stems, your recorded loops, your MIDI pads, your drums, whatever, are not being mastered in real time. If they were, I would think it a bad idea for two reasons: 1) the strain and latency from taxing your computer is going to prevent you from properly playing in time with everything, and 2) the mastering you perform from the stage isn't relevant. You are on stage, not out in the room, walking around, hearing how the changes you make actually sound. You can't be in 2 places at once, unless you've got an iPad synced up and are able to control all of your plugs. And really, at that point you are no longer mastering. You are using compressors/limiters/plugins in a 'mastering fashion' and adjusting them to a particular room, venue and PA.

 

Again, this isn't about a particular instrument or sound sticking out more than another because of the room; it's about needing individual control over these aspects to make the best mix for that room. Honestly, I just don't think anybody has better capability from the stage than an FOH guy (granted he's not tone deaf).



 

well, i guess you're right about the sonically grouped sections. there really can't be any argument against that.

 

 


 

the mix/master point wasn't about mastering the live performance on a laptop in real time at all! it was, at the very basic level, that every element playing out of the laptop, whether a backing track, a midi instrument, a guitar or a loop, would come from a mix/master that translates well across different speakers and venues, with enough headroom to compensate for any margin of error, because it would be playing out of a channel strip whose settings blend it well with the rest of the mix. when i said i mix and master music, i mainly meant that i understand balancing instruments and frequencies, and i have enough faith in my work for "your settings for your high end might be jammin' at home on a synth, but in the room, you're destroying people's ears" to not happen.

 

the idea behind this whole thing was that as a solo performer, i'd be playing a lot of small venues that might not have a full-time FOH engineer. they might have a PA, or get their musicians to bring their own PA (hired, rented, whatever), but for such cases one needs to be prepared. which is why i mentioned some of those solo acts previously, all of whom do exactly that. sure, nothing replaces the sensibilities of a good FOH engineer at a gig.

 

 

your point overall is still very valid, and after reading this i'm once again leaning towards sending more than just a summed stereo mix to FOH. thank you for that, i genuinely mean it.

Cool. I'm not trying to be a jerk or bust your balls, either. Just trying to make some points clear. If you have the ability to run your show with multiple outs, and there is an FOH guy present, I'd go that route. If you're forced to go completely DIY on small gigs, run a summed out and set your levels as best as you can within your DAW. Bottom line, both will work. The more outputs, the better your result will be, granted you have someone monitoring and mixing them for the duration of the show. Perhaps you should look into creating two separate sessions depending on the gig, i.e. one using the NI interface with multiple outs, and a duplicate session with everything summed and output through the Bean, intended for the DIY route.


Cool. I'm not trying to be a jerk or bust your balls, either. 

 

haha, no i didn't take it that way either. i'm almost always open to discussion. 

 

 


 

the points are valid both ways, yes. in fact, just yesterday when i was setting up the mainstage rig, i actually did throw in a few channel strips to serve as categories for bass, drums, rhythm, rhythm add-ons, keys and guitars, and routed all the DAW layers into these. the idea behind including them was to use them either as sends to different outputs for FOH, or to send them all to the master and get a single stereo out.
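Those category buses map neatly onto the two-session idea above. Here is a small sketch of describing both routing variants as data, so the same groups can feed either separate FOH output pairs or a single stereo sum; the group names and output numbers are illustrative, not actual MainStage routing.

```python
# Sketch of the two routing variants for the same category buses:
# "multi_out" sends each group to its own output pair for the FOH desk,
# "summed" collapses everything onto the main stereo pair (outputs 1-2).
GROUPS = ["bass", "drums", "rhythm", "rhythm_addons", "keys", "guitars"]

def make_routing(mode: str) -> dict:
    if mode == "multi_out":
        # give each category its own stereo pair: 1-2, 3-4, 5-6, ...
        return {g: (2 * i + 1, 2 * i + 2) for i, g in enumerate(GROUPS)}
    if mode == "summed":
        # everything to the main pair; FOH gets one summed stereo feed
        return {g: (1, 2) for g in GROUPS}
    raise ValueError(f"unknown mode: {mode}")

for mode in ("multi_out", "summed"):
    print(mode, make_routing(mode))
```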

Link to comment
Share on other sites

thanks akeron. i've seen those posts before, but they seem more like speculation to me; there's really nothing to confirm it. everyone links to that cirrus logic converter, and i like the pod's output sound, so i'm not saying it sounds bad at all, but it's hard for me to believe that the converters in apogee's flagship line and line 6's are the same. the only way that would be possible is if converter technology has reached a point where even the most basic converters can do what the higher-end ones can, or if apogee is openly fooling everyone by saying they're using the best components out there.

 

and i can't see much in those images. if i could see cs4274 written on some IC, it would make sense. 

 

thanks anyway. 

