Jun 11, 2011 4:09 PM (in response to audiofanatic)
Re: Input-output audio latency?
(2nd edit, but hey, as long as no one else has responded yet, I may as well correct my original post and pretend I didn't answer it in a hurry the first time.)
The TCDDK software and hardware guide with specifications is downloadable ( http://line6.com/software/readeula.html?rid=2432 ). The codec's ( http://www.asahi-kasei.co.jp/akm/en/product/ak4552/ak4552_f01e.pdf ) analog filter group delay is 32 sample periods; on top of that there's transmission time to and from the codec (1 sample period each way) and buffered I/O copy time. Let's call it 36 sample periods of total latency (if I'm way off, someone should say so). At 39.0625 kHz this is 0.92 ms, so it's right on the edge for your application.
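The arithmetic above can be sketched as a quick estimate. The breakdown of the non-codec delays (1 sample each way on the serial link, 2 samples for buffered I/O copies) is my reading of the post, not a figure from the spec, so treat these defaults as adjustable assumptions:

```python
# Back-of-the-envelope round-trip latency estimate.
# Assumptions (not from the datasheet): 32-sample codec group delay,
# 1 sample each way on the serial link, 2 samples of buffered I/O copy.
def round_trip_latency(sample_rate_hz, codec_group_delay=32,
                       link_delay=2, io_copy=2):
    total_samples = codec_group_delay + link_delay + io_copy
    latency_ms = 1000.0 * total_samples / sample_rate_hz
    return total_samples, latency_ms

samples, ms = round_trip_latency(39062.5)
print(samples, round(ms, 3))  # 36 samples, 0.922 ms
```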
Of course, you can increase the sample rate by modifying the clock-division settings in the Line 6 template code, trading fewer compute cycles per sample for lower latency.
Extremely low latency for real-time playing is one reason I prefer the ToneCore platform over writing DAW plugins, where OS-incurred latencies of hundreds of samples are easy to run into.
Jun 12, 2011 3:11 AM (in response to groxter)
Re: Input-output audio latency?
Thanks for your updated response. (I had in fact noticed that the original post didn't seem to include all 32 samples of group delay.) Knowing which codec it is was very useful to me.
One question to consider is whether the software programming environment allows sound to be processed in blocks of a single sample. It might enforce some minimum block size, either for software-efficiency reasons or due to requirements of communication over the serial bus, which of course would add further latency.
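The block-size concern above can be quantified with a common rule of thumb: with double-buffered block I/O, each block of N samples typically adds about 2N samples of latency (one block to fill the input buffer, one while the output buffer drains) on top of the fixed codec/link delay. The 36-sample fixed delay is carried over from the earlier estimate; the double-buffering model is an assumption, not something stated for the ToneCore:

```python
# Rough model of added latency from block-based, double-buffered I/O.
# Assumes 2 blocks of buffering on top of a fixed 36-sample delay.
def total_latency_samples(block_size, fixed_delay=36):
    return fixed_delay + 2 * block_size

for n in (1, 8, 32):
    total = total_latency_samples(n)
    ms = 1000.0 * total / 39062.5
    print(f"block={n:>2}: {total} samples, {ms:.3f} ms")
```

At a 32-sample minimum block size this already pushes the round trip past 2.5 ms, which is why a 1-sample processing loop matters for this application.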