Latency – A Brief Guide

Latency is, put as simply as possible, the delay between something happening and it becoming apparent that it has happened. In the audio world, it’s the time difference between you pressing a button and the sound that button makes becoming audible, and it’s measured in milliseconds (ms). It stands to reason that the more latency there is between that button press and the resultant audio, the less accurate you can be with your control of when that audio sounds, but how much latency can we put up with before our performance is affected?

THE MYTHS

There are a few common misconceptions about audio latency that we’ll debunk now:

  • Hardware has lower latency than software. An increasingly common, and incorrect, assumption is that analogue/digital maps neatly onto hardware/software. Whilst it's true that analogue equipment doesn't have discernible levels of latency, any digital equipment – hardware or software – has to contend with latency in its architecture.
  • Macs are better than PCs. It's true that, straight out of the box, Macs tend to be easier to set up: OS X's built-in support for music hardware is stronger. But most audio hardware has ASIO drivers – a third-party driver system that allows a direct, low-latency connection between the system and the audio hardware – written for it for use in Windows. ASIO provides comparable, and sometimes even better, latencies than OS X's Core Audio.
  • Any latency is unacceptable. Latency is almost totally unavoidable for one very simple reason: the speed of sound is a bottleneck. As a very rough rule of thumb, every foot between you and a sound source adds 1ms of latency. Are your monitors ten feet away from you? Then you're looking at roughly 10ms of latency before you even start to factor in latency introduced by your equipment.
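
That rule of thumb is easy to sketch in code. A minimal example, using the article's own figure of roughly 1,116ft/s for the speed of sound (the function name is ours, purely for illustration):

```python
# Sound covers roughly 1,116 ft/s at sea level and room temperature,
# i.e. about 1.1 ft per millisecond.
SPEED_OF_SOUND_FT_PER_MS = 1.116

def acoustic_latency_ms(distance_ft):
    """Milliseconds it takes sound to travel distance_ft feet."""
    return distance_ft / SPEED_OF_SOUND_FT_PER_MS

# Monitors ten feet away: ~9ms before any equipment latency is counted.
print(round(acoustic_latency_ms(10), 1))  # 9.0
```

So "1ms per foot" slightly overstates it, but as a back-of-the-envelope figure it holds up.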

THE SOURCE

Those three little factoids out of the way, it's time to look at the major sources of latency you're likely to encounter in your quest to rid your system of as much of it as possible:

  • MIDI. MIDI is digital data; it’s largely down to the drivers in your system as to how it’s handled. The good news is that MIDI, having been around for 30 years or so, has been pretty much mastered by manufacturers, and latency added by MIDI processing tends to be <2ms.
  • Digital Sound Processing. Analogue systems create next to no latency – the electrical signals just zap through them at about two thirds of the speed of light. Digital systems, on the other hand, have to convert audio into digital information, process it, and then convert that information back into audio. This is a big part of the latency you'll experience with your system, and two areas make a difference: the audio interface and your computer. Efficiently constructed audio interfaces feature low-latency conversion hardware, and efficient drivers reduce the time it takes for the converted signal to reach your computer to less than 2ms. At the computer end, how efficiently your software deals with audio, and how powerful your hardware is at generating that audio stream, is vital to overall latency levels. Because the computer can't guarantee delivering every sample exactly on time, the software creates a buffer so that any hiccups can be absorbed before they reach the critical stage and interrupt the audio stream. The smaller this buffer, the lower the latency. A buffer of 32 samples at 44.1kHz will theoretically create a latency of about 0.73ms (44.1kHz ÷ 1000 = 44.1 samples/ms; 32 ÷ 44.1 ≈ 0.73ms). At 88.2kHz this is halved, but the processing power required to handle the higher sample rate is increased.
  • Interfaces. USB and FireWire are the two most common connections for external audio hardware, and both introduce a small amount of latency as they synchronise with the main system. FireWire has a more efficient method of sending data than USB and so is capable of lower latency – and PCI is, to all intents and purposes, latency free, but until Thunderbolt it was an internal connection only.
  • The speed of sound. I mentioned it earlier – the speed of sound is a significant factor in perceived latency. At room temperature, in dry air at sea level, the speed of sound is 340.29m/s. That translates to more or less 1,116ft/s, or roughly 1.1ft/ms. Move 22 feet away from your speakers and you're hearing sound about 20ms later than you would in headphones, which puts the 1ms or so you get from MIDI latency into perspective.
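
The buffer arithmetic above boils down to one division: buffer length over sample rate gives the buffer's duration. A quick sketch (function name is ours, for illustration):

```python
def buffer_latency_ms(buffer_samples, sample_rate_hz):
    """Duration of one audio buffer, in milliseconds."""
    return buffer_samples / sample_rate_hz * 1000.0

# The article's example: 32 samples at 44.1kHz is ~0.73ms of buffer
# latency; doubling the sample rate to 88.2kHz halves it.
print(round(buffer_latency_ms(32, 44100), 2))  # 0.73
print(round(buffer_latency_ms(32, 88200), 2))  # 0.36
```

The trade-off is exactly as described: the higher rate shortens the buffer in time, but your CPU now has twice as many samples per second to produce.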

THE REAL WORLD

So real-world latencies might look something like this, imagining we're four feet away from a monitor speaker:

  • Analogue turntable, vinyl, analogue mixer: 4ms speaker distance.
  • Analogue turntable, timecode vinyl, analogue mixer and Traktor Scratch running at a 64-sample buffer at 96kHz: 1ms USB bus buffer + 1.5ms conversion of timecode audio to digital + 0.66ms sample input buffer + 0.66ms sample output buffer + 1.5ms conversion of digital audio to analogue + 1ms USB bus buffer + 4ms speaker distance = 10.3ms.
  • MIDI controller, audio interface, Traktor running at a 64-sample buffer at 44.1kHz: 1ms MIDI + 1.45ms sample buffer + 1.5ms conversion of digital audio to analogue + 1ms USB bus buffer + 4ms speaker distance ≈ 9ms.
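
Those chains are just sums of per-stage delays. A minimal sketch of the MIDI controller example, using the approximate stage values from the list above:

```python
# Approximate per-stage latencies for the MIDI controller setup above.
midi_setup_ms = {
    "MIDI transfer": 1.0,
    "64-sample buffer @ 44.1kHz": 1.45,
    "digital-to-analogue conversion": 1.5,
    "USB bus buffer": 1.0,
    "speaker distance (4ft)": 4.0,
}

total_ms = sum(midi_setup_ms.values())
print(f"total: {total_ms:.2f}ms")  # 8.95ms, which rounds to the 9ms quoted
```

Laying it out this way makes the point of the next paragraph obvious: every extra stage (a digital mixer, a timecode input chain, a few more feet to the monitors) is just another line item in the sum.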

The values are all approximate, but as you can see, MIDI control has the advantage of not having an input stage. If the two examples were matched in sample rate, the MIDI controller setup would show even less latency. USB buffer times are fairly approximate too, and in bad scenarios can be much higher. With a less powerful computer, you'll have to increase your sample buffer whilst lowering your sample rate, which will increase latency. Add to that the fact that you might be further away from your monitor (or might not even have one… the horror!), your mixer may well have digital stages that nudge the latency up by a further ms, and you may be using CDJs – which add their own tiny amount of latency – with timecode, and it all starts to really mount up.

ON THE BRIGHT SIDE

The good news is that our heads… well, most of our heads… are a lot cleverer than we sometimes give them credit for. Latency of less than around 15ms is only disorienting for a few seconds before we get used to it; whilst we may vaguely notice it's there, we don't have to think about adjusting for it. As long as we keep the latency in our systems to a minimum, so that we don't compound what would be an acceptable speaker-distance latency in an analogue system, our performances won't be affected.

Check out our article on USB3 and Thunderbolt for a look into the future of latency reducing interfaces!

Comments (92)

  • Conrad

    I just wanted to say thank you for all of your precise and easy-to-understand tech guides.

  • Manus_2006

    Brilliant article. Was always in a pickle trying to understand latency. Cheers

  • DJ Bernie (Brazil)

    Hi Chris!! Thanks for answering my question man. It does make lots of sense now. I was seeing things in a wrong way!!!

    Thanks for the patience!!

    Have a good one!!!

    Cheers!

    DJ Bernie

    • LTR.

      im really starting to notice a correlation between speak-before-think stupidity and passive-aggressive self-righteousness.

  • darasan

    I think you’re confusing different types of buffers.

    “I always thought a buffer was a component that makes up for inconsistencies in system timing/data flow.”

    Yes, this type of buffer is called a circular or ring buffer. The amount of data in this buffer will shrink or grow depending on what speed the hardware processes the data at.

    However for many audio drivers (especially ASIO drivers), there is a strict requirement for perfect synchronisation between input and output. In this case, circular buffers cannot be used because the input and output buffers cannot be synchronised (sometimes full, sometimes empty).

    So…a double buffer system is used. This means that as one buffer is being played back by the hardware, another buffer is being filled by the software so that it can switch seamlessly to the next block of audio. When the hardware has finished playing back one buffer, it will signal to the PC that it needs another buffer, and so the next buffer is sent.

    In this way, the driver can queue a number of buffers if necessary.

    “The actual buffer isn’t the issue for latency, it is the data feed into it.”

    No…the bigger the buffer, the less often the “give me another buffer” signal is sent to the PC, and the audio latency becomes bigger. At a latency of 10ms, this “give me another buffer” signal is sent every 10ms.

    “So Gabor saying, “the sound card waiting for the current buffer to end”
    seems to totally go against what I’ve learned about buffers.”

    He’s right…hope my explanation helps 😉


  • DJ Bernie (Brazil)

    Hello everybody!!! Need some help over here!!! I’ve been reading the article again and again and couldn’t comprehend why, when we set the sample rate to 88.2kHz, the latency created is lower than when it’s set at 44.1kHz. If the system is generating samples at a higher rate (88.2) whilst keeping the same buffer (32), how can the buffer analyse all the content sent at a higher speed than previously at 44.1?

    Thanks!!! Big fan of DJTT. Keep going guys!!!

    • Chris

      Hi DJ Bernie,

      the buffer size is in samples – a sample is a snapshot of the waveform’s position at one instant, so when we talk about ‘sample rate’ we mean how many of those snapshots are taken per second. Thus, 32 samples used as a buffer at 44.1kHz represent twice as long a stretch of time as 32 samples at 88.2kHz. Does that make sense?

  • KLH

    This question was never directly answered: “How much latency can we put up with before our performance is affected?”

    IIRC, studies with drummers (because their sense of timing is usually better than most) have shown that latencies above 20ms impact the ability to perform well.

    -KLH

    • dj distraction

      > This question was never directly answered:
      > “How much latency can we put up with before our performance
      > is affected?”
      I guess that’s a case to case basis.
      I just checked my TP2 & S4 settings (on a MBP 2.4GHz i5), it’s set at 8.2ms. I’ve been using this setup since I bought them and I never experienced weird things about latency.
      At some gig, I used this together with Modul8, which eats up a lot of CPU & memory…but everything is fine.
      The best thing is to benchmark your system: load up all 4 decks (if you have 4 decks), turn ON the key lock, ACTIVATE all the filters, etc., then adjust your latency settings. The moment you experience something weird, a setting just a little higher than that will be your system’s optimal latency setting.

  • DJ Distraction

    Great info!
    But at the end of the day, the people you are entertaining, dancing to your tunes, don’t care about latency.
    Especially, when the beer goes down and the volume goes up, they don’t give a sh*t about latency.

    • Jenhenry

      keep boozing and take home the fat bird leave the dj stuff to us mate

      • darasan

        He’s your customer, mate. Don’t patronise your customers!

  • Smegma-Head

    You also called fries “french fries” and renamed them “freedom fries” even though “pommes frites” were invented in Belgium. Greets from Switzerland (I guess you call it Sweden)…

    • Max le Daron

      In fact the first name was “Trench Fries” and with a mispronunciation they became French Fries…

  • Wikkid

    I see a couple of people mentioning the DPC Latency Checker application for Windows. What these people need to realize is that this tool cannot be used to measure latency in the sense of this article. It measures the latency of DPCs (deferred procedure calls) only; it doesn’t bother with anything else. The one use of this program that I found was to help me diagnose why I’d been experiencing audio dropouts, and it showed me that DPCs were indeed at fault, with periodic extremely high spikes. After messing about some with drivers and services, I was able to get DPC latency to never go above 1,000 MICROseconds. But that is just DPC latency; everything else gets added on top of that.

    Regarding the whole latency topic:
    Some people make latency sound like it’s the end of the world – like it’s impossible to perform with latency present. I think those people misunderstand what latency is. It’s something that you as a performer may notice when it gets somewhat high, but as long as you’re able to adjust for it, the audience won’t notice anything. It’s simply the time between pushing play and hearing the song start; if you know how your gear responds, you will adjust for that latency, effectively negating most of it.

    Props to Boxbomber for describing that little experiment; I think everybody with any interest in latency should do a similar one just to get a feel for the numbers. As it is, most people seem to take a simple ‘less is better’ attitude, but unless you’re very, very sensitive or extremely well trained, and have very good hearing, chances are you won’t notice any latency below 10ms. The vast majority wouldn’t notice any latency below 20ms (according to Boxbomber it may be as high as 70ms). These are all numbers well within the realm of digital DJing today.

    Latency as a whole is a much over-emphasized concept. Just forget about it, play your music instead of watching latency meters. If the music is good who gives a sh!t about your latency.

  • Dan Dillard

    Anyone know of a program to measure latency on OS X? I know when I was in the PC (Windows) world, DPC Latency Checker was a constant fixture in my setup. I’m not entirely too worried about it, as I haven’t had any issues with this brand new MacBook yet, but I am the kind of geek who likes to be able to quantify things.

  • Djcl.ear

    Ok, I’d guess that using Ableton Live means DSP-induced latencies become quite a bit more relevant.
    Simply put, in such a setup the MIDI controller sends the order to Live, which orders the reading of the tracks or files from the hard disk (HDD). The digital data then travels through the Southbridge chipset and sequentially through the Northbridge (unless a recent Sandy Bridge CPU is used) to reach the central processing area, namely the CPU and RAM.
    There the software processes and mixes whatever the DJ orders it to do, and the digital audio stream leaves the central area, crosses the North and Southbridge again, goes out the door (USB, FireWire, etc.) and into conversion.
    All this takes time, depending on various variables – both hardware and software – as the article too briefly states.
    The main hardware bottleneck is HDD latency. In theory it is negligible, but add multiple requests from programs, active services, fragmentation, etc., and it may become a latency factor. In the real world, just replace it with an SSD and watch everything get snappier.
    On the software side, without going into plug-ins, the architecture design will determine latencies, as will how the software actually functions… the Ableton forum is bloated with real memory-overrun problems. As of 2011, specialised audio forums strongly recommend using the DPC Latency Checker program to pinpoint where your system latency lies (it causes glitches and slow results).

    Thus, software- and hardware-generated system latency remains a problem in most audio systems. This article needs expansion.

  • marviedisku

    if u guys are running Traktor Pro 2, I would recommend copping the TRAKTOR PRO/2 Bible – it does an awesome job explaining latency stuff… very helpful!

  • cabdoctor

    I think what a lot of commenters are missing is that the title of this article is “Latency – A Brief Guide.” Its point is to tell us that yes, latency is important, but it’s not the be-all and end-all of the digital DJ world. I’m sure they could have done a 10-page article with graphs, charts, maybe a word search and a place for you to colour in on this subject. But for 99% of us that visit this site, it’s not necessary.

    • Ha!

      So funny!

  • Djo1006

    Very informative and interesting. Good job once again, DJTT and its fans. I love this site!!!!!!

  • D-Jam

    Thank you for writing this article, Chris… especially for dispelling the myths. I grow tired of some people who seriously think any DJ who isn’t on a MacBook with either Serato or Traktor is somehow “not for real”.

    It’s the honest truth that what kind of computer you use (in terms of hardware) and what you’re trying to do is what will judge it all. It’s why I tell scratch DJs to use Serato over the rest, because their drivers handle latency the best. However, it doesn’t mean Traktor, Torq, VDJ, and the rest are suddenly irrelevant.

    It’s why I tell all the guys who get angry with their PCs that you can’t make magic happen on a cheap $500 laptop, and especially how both Windows and Mac lovers need to know their OS inside and out…so they can easily make it run better on their own.

  • Gábor Szántó

    Actually, the American system is the older, imperial one (British-based), while the metric system came much later (French).

  • Brady Bush

    This is all good info. I would like to see you take this a step further and discuss the techniques used to fix latency issues. Is it cut and dried, move a slider in Traktor until it sounds better? If you DJ in a club, you have limited control of monitor distance and cable length…what can you do to make it work?

  • Chris Jennings

    Writing these kinds of articles is tough, because on one hand you need to dumb down the technical talk for the common folk, but after you do that the geeks and nerds all cry foul. No offense to the geeks and nerds. Great write-up, Chris.

  • John Doe

    Terrific article. Another myth is that increasing the sample rate decreases latency. Many Traktor users incorrectly believe that to be true (probably due to somewhat confusing reporting of latencies by NI products).

    • Dj UTU

      Well, increasing the sampling rate does indeed decrease the perceived latency. However, it comes at the expense of putting more data through (the CPU as well as the bus) in the same amount of time. Naturally this requires more processing power and can possibly lead to the bus not having enough bandwidth to handle all the data pouring in.

    • Tobamai

      Increasing the sample rate can, in theory, decrease the system’s latency. If the system functions properly at the higher sample rate, then for any given buffer size you will have a lower latency.

      At a sample rate of 44.1kHz a sample is taken every 22.7 µs (microseconds). If your buffer is set to 256 samples, the buffer will be flushed every 5.8 ms. At 48kHz you get a sample every 20.8 µs, and with the same 256-sample buffer it should be full every 5.3 ms.

      The difference is negligible, but it does result in lower latency if the system can actually process all the data in time.

  • Anthony Woodruffe

    One final addition to the chain is cable length. I don’t mean the difference between a 50cm USB cable and a 1m cable; I’m talking about amp-to-speaker cabling. It’s probably the last thing on a club DJ’s mind, as you never really see the cables but are of course aware of them. As a mobile DJ, you’re handling them on a regular basis. I’ve read a little, but not much, and I believe 15m/50ft of XLR cabling will add 2ms of delay.

    • John Doe

      Nope, speaker cables don’t add latency. Electromagnetic waves travel at (close to) the speed of light.

      • Dj UTU

        Yes. Unless you’re planning on having your speaker cables around the world, the “speaker cable length induced latency” will never reach above “few dozen picoseconds” = “insignificant amount” 😉

  • Gábor Szántó

    Sorry Chris, but the reality for MIDI controllers and DVS systems is much worse; your calculation doesn’t take many factors into account.

    Every MIDI controller has some internal processing time, coming from the polling interval and/or interrupts, plus the USB/MIDI transfer overhead. A theoretical MIDI transfer overhead is 2 ms alone minimum. So let’s calculate with a very optimistic estimation of 5 ms for the MIDI controller alone, which includes very fast internal processing and proper MIDI implementation both on the hardware and software sides. (5 ms so far)

    Then comes the software on your laptop. It receives the controller message and passes it into the djing software’s event processing mechanism, which is probably running on a very high-priority thread. This thread must “fight” for resources, and even for a very important thread with very high priority, calculating 1 ms for receiving the event is optimistic again. (6 ms so far)

    The djing software runs an even higher priority audio processing callback every, let’s say, 1 ms (a super low buffer setting). So after the event is received, we must wait until the callback comes around again, somewhere in the next millisecond. So that’s 1 more ms to count (worst case), or 0.5 ms (average). Then it creates the next chunk of audio within a few hundred microseconds and passes it to some layers of the operating system (even if it’s the lowest, the HAL). (6.5 ms so far)

    The audio chunk makes its way to the sound card through the main board’s bus systems, then maybe USB or FireWire. Let’s be optimistic and add 1 ms. (7.5 ms so far)

    The sound card waits for the current buffer to end, processes the audio chunk, passes to the D/A converter. D/A conversion alone is around 1 ms, so let’s add 1.5 ms (optimistic) again. The audio appears “as analogue” on the connectors. (9 ms so far)

    Add 4 ms of speaker distance and we are at 13 ms already, with a very, very optimistic calculation. Of course you set 1 ms in your djing software, but we are at 13 already, and that happens only in the most fortunate scenario. The reality is much more like 20+ ms, using your best laptop, best sound card, fastest transfer and lowest latency settings. For most users with MacBook Pros and not-so-shiny setups, we often measure 40–50 ms, 20 cm from the speaker.

    The best scenario for timecode vinyl is just 2-3 ms better.

    The magical number is 9 ms of latency (without the speaker distance); everything equal to or below 9 ms feels “instant” to your brain, and analogue turntables with analogue mixers are usually around or below this range. Digital systems in reality are not close to this; I heard that under special laboratory circumstances the Serato guys were able to reach 8 ms, but that’s very, very special.

    Don’t get me wrong, I don’t say it’s a big problem – your brain can adapt to even a huge 150 ms latency (you will feel it, though). I’m just correcting numbers here. The magic “1 ms” setting in your djing software looks good, but that’s all.

    • John Doe

      Yeah, I might as well argue that your calculations are off, too. The 5ms might be true for a USB-MIDI controller (although it seems a tad high), but it certainly isn’t right for the latest-gen controllers, which use different protocols. Tunneling MIDI over USB is relatively slow, but other protocols (e.g., USB HID) are faster.

      • Gábor Szántó

        Every controller must have some internal processing with its own internal polling/interrupts, then send the value through something. USB HID is not a good example, as it’s the same system your mouse and keyboard come through, and the OS’s HID parser is the first to process it.

        The best is an integrated driver in the djing software, communicating directly over USB, but due to the internal polling and bus times, don’t expect anything below 2 ms from button push to the djing software’s event loop. That’s only 3 ms better…

      • Craig Reeves

        MIDI is as fast as the transport mechanism it’s using, and your observation that MIDI over USB is slow is incorrect. In fact, it’s FASTER than HID, because there’s less overhead in the MIDI protocol (this is true for OSC as well).

        • Dummy

          Overhead is just one part of the equation. System/bus clocks also affect the latency. So what you said here is wrong Craig.

          • Gábor Szántó

            More precisely: protocol overhead, bus speeds/clocks, interrupt priorities, thread priorities in the OS, event-handling mechanisms… a lot of things not visible in hardware and protocol specs.

          • Craig Reeves

            System/bus clocks affect ALL data; they don’t cherry-pick protocols. There’s nothing in HID that keeps it from being equally affected.

    • Craig Reeves

      Your comment is GENERALLY correct, but making 9ms the “magical” number is kind of silly. If the latency value is constant, you’re not likely to notice the delay until you get into the high teens to low twenties. Everyone SAYS they can notice single digit latencies, but that always falls apart when it’s actually tested (putting a delay inline with the headphones).

      Otherwise, You’re spot on with your observations. The latencies quoted in the article are not typical. I think it was pretty clear that they were only used as an example, so it’s not a big deal.

      Oh, and the Serato latency measurement wasn’t some special “in the laboratory” thing, it was achievable using the lowest buffer setting on a decent computer. I and several others all tested the latency of all of the common DVS systems, and typical ranges from 9ms to 24ms were normal. FWIW, the lowest DVS latency achievable at that time was Mixvibes using the RME RPM audio interface, and you could get down to 3ms using the lowest settings. Of course, in ANY of these examples, using the absolute lowest buffer settings isn’t practical for real performance.

      • Gábor Szántó

        We do always a pause/play test, where audio is stopped, then we press a button to play. We measure the time between the sound of the fingernail hitting the button (a click) and the first audio spike (cue set to hard kick). We could never-ever reach 9 ms.

        The magic number of 9 ms comes from synth manufacturers. A piano player needs latency equal to or below that to play professionally.

        • Craig Reeves

          For a professional musician, like a piano player or a drummer, I would tend to agree with you. In fact, CEntrance has a great white paper on the subject of latency that I often cite in these kinds of discussions. Your perception of latency is greatly affected by the nature of your performance, and in my experience only scratch DJs have shown any ability to perceive latencies that low – and then only when executing fast platter techniques like uzis or swipes. On fader-oriented techniques, the DJs’ chance of perceiving the delay dropped significantly.

    • Tombruton87

All I’m saying is that a little knowledge is a dangerous thing – for this comment and for the article.

    • Chris

Thanks for your reply, Gábor. Your explanation does more or less mirror the article, though you put more emphasis on USB’s inadequacies as a protocol. Apart from the roughly 3ms in total that you propose for that, you’ll notice that the article states more or less the same latency stages and times that you do – approximately, of course. The things that aren’t accurate in your write-up are the assumption that timecode is quicker because it sidesteps MIDI – a point that is illustrated, albeit basically, in the article – and the assertion that pure analogue signals introduce any discernible latency at all. Far from 9ms, true analogue systems genuinely don’t impose any latency on a signal. Again, thanks for your comment.

      • Gábor Szántó

No, I didn’t put any emphasis on USB’s inadequacies – please read my comment again.

My goal was to point out that there are many more factors affecting latency (and jitter!) than the easily marketable protocol latencies and buffer sizes.

Some of those “hidden” latency factors are: an external controller’s internal microcontroller or CPU polling/interrupts, and the DJing software’s event-handler loop and audio callback method.

An example: if you raise the buffer size from 5ms to 10ms, it changes the audio callback’s turnaround time, plus the internal OS turnaround time, plus likely some internals in the sound card too. The latency change is then not +5ms, but more.
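That point can be put in rough numbers. The sketch below is purely illustrative: the doubled buffer period reflects typical double-buffering in the card, and the fixed OS/driver and converter overheads are assumed values, not measured figures for any real interface.

```python
def buffer_ms(frames, sample_rate):
    """Time one audio buffer represents, in milliseconds."""
    return frames * 1000.0 / sample_rate

def round_trip_ms(frames, sample_rate, os_overhead_ms=1.5, card_overhead_ms=1.0):
    """Very rough output-latency estimate: the buffer period is counted
    twice (typical double-buffering), plus fixed OS/driver and converter
    overheads. The overhead figures here are illustrative guesses."""
    period = buffer_ms(frames, sample_rate)
    return 2 * period + os_overhead_ms + card_overhead_ms
```

Under these assumptions, going from 256 frames (~5ms) to 512 frames (~10ms) at 48kHz adds over 10ms to the estimate, not 5ms – which is the shape of the effect Gábor describes, even though the real-world numbers will differ per system.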

    • Scott Molinari

      Hmm,

I always thought a buffer was a component that makes up for inconsistencies in system timing/data flow. For instance, the computer (with the software) can pump out the audio stream faster, or sometimes slower, than the audio component can output it. The audio output device, whatever form it takes, needs the data stream at a certain speed, so buffers are there to make up for these speed differences.

      So Gabor saying, “the sound card waiting for the current buffer to end” seems to totally go against what I’ve learned about buffers.

      Chris also says it too.

“Because it’s not possible to guarantee that there will be a zero percent error rate in a signal, the software creates a buffer to ensure that any errors can be corrected before they reach the critical stage without interrupting the audio stream.”

So, as I see it, the actual buffer isn’t the issue for latency – it’s the data feed into it. If that gets too slow for any reason (or is full of errors) and the buffer has to make up for it, then you get latency, but the buffer isn’t the cause of the latency. And if the buffer runs out, you get a pop, a crackle or, even worse, a total loss of sound.

      And for the most part, there are tons of buffers in any computer system. You just don’t get the ability to adjust most of them. But, they certainly are there.

So, the fact of the matter is, if the data flow is ever discernibly slow (in latency), then somewhere in the chain there is a device or component actually needing the buffer to play “catch-up”, and that device/component is the culprit. You then have to eradicate that slow component, be it the computer and anything in it – RAM, processor, any drivers (audio or even I/O drivers) – or the audio device, bad bus cables or even a bad MIDI controller. If you have a well-running system, then buffers shouldn’t really play any real role in latency, as they should be working at a much faster speed than the system can actually create sound throughput. Otherwise, they would be useless as buffers.
      😉

      BTW, Chris. Great article. We just don’t jive on the buffer part.:)

      • Gábor Szántó

        Generally all buffers are read as fast as possible, yes. But the sound card’s last buffer is the only buffer which must be played at the sample rate, otherwise the audio would sound funny.
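Gábor’s distinction can be sketched with a toy model. This is a hypothetical illustration, not any real driver’s API: intermediate buffers can be filled and drained as fast as the CPU allows, but the card’s final output buffer is consumed at exactly one sample per sample period, so whatever is sitting in it when you write a sample sets a hard floor on when that sample is heard.

```python
from collections import deque

class OutputBuffer:
    """Toy model of a sound card's final buffer: software writes whenever
    it likes, but reads are clocked at the sample rate, so a sample
    written while the buffer holds n frames is heard n sample periods
    later, no matter how fast the rest of the chain runs."""

    def __init__(self, sample_rate):
        self.sample_rate = sample_rate
        self.fifo = deque()

    def write(self, samples):
        # The software side: can happen in bursts, as fast as it likes.
        self.fifo.extend(samples)

    def latency_ms(self):
        # Delay the *next* written sample would experience, because
        # everything already queued must play out at the sample rate.
        return len(self.fifo) * 1000.0 / self.sample_rate
```

With 441 frames queued at 44.1kHz, the next sample written is 10ms away from the speaker, however quick everything upstream is.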

    • Per Lindgren

Hi, Gábor and all others,

You are right in what you write, though things do not have to be that bad.

I have been experimenting with building a custom USB-based fader (based on the Focus optical (binary) fader). The end-to-end latency – from the optical signal to the actual sound out of the Ms Pinky software – was 6ms on a MacBook Pro. (The software on the Mac was the Ms Pinky standalone application.) Audio went out through an NI-DJ-8 interface at the lowest buffer setting. Response time was measured using an oscilloscope.

No snake oil, just pure computer engineering. I’m now in the process of building an analogue USB fader based on the excellent Infinium 45mm fader (also optical, but with a resolution of 128 steps); the target is matching the maximum poll rate of the USB 2.0 spec.

Note that this does not mean the vinyl emulation is correct down to 6ms – just the fader operation.

      Best
      Per Lindgren,

  • Dj Phat B

Actually, I already moved away from using monitor speakers, even on big stages. They are now only there as backup. I now do all monitoring on my headphones; at least this way I know I always get the same sound.

  • Max le Daron

    Can you use the metric system in the articles please?

    • Anthony Nicholson

      1 meter = 3.2 feet
      so 1 meter is 3.2ms of latency
      1 yard is 3 feet so 1 yard is 3ms of latency 🙂
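For the metrically inclined, the conversion can be written out exactly. A sketch, assuming the speed of sound is roughly 343m/s at room temperature (the “1 foot = 1ms” rule of thumb from the article is a slight overestimate; a metre is closer to 2.9ms than 3.2ms):

```python
SPEED_OF_SOUND_M_PER_S = 343.0  # approximate, dry air at room temperature

def distance_latency_ms(metres):
    """Acoustic delay, in milliseconds, for sound travelling the given
    distance from the speaker to your ears."""
    return metres / SPEED_OF_SOUND_M_PER_S * 1000.0
```

So monitors three metres away contribute a little under 9ms before any digital latency is counted at all.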

      • Anonymous

        Imperial makes no sense.

    • matt

Just use a converter (Google is your friend). This is an American website, you know.

    • marviedisku

What he meant by ms is milliseconds (I could be wrong, but I doubt it).

    • Tom

I did not miss the decimal system here. Even though I have to convert internally, I found the ft/ms analogy quite helpful; 1m = 3ms is not as easy to memorise.

    • tay740

Google is pretty good at the metric system.

  • Smegma-Head

    This is why IMO scratching will always be superior, when done on a turntable with real vinyl compared to DVS/Turntable or DVS/Midi-Controller.

      • Santos

        What Kyle Said.

      • Smegma-Head

        The term cue-point-juggling is an insult to beatjuggling. I’d rather call it midi-button-bashing (people like DJ Shadow were doing stuff like this for a long time on their MPC)

  • Boxbomber

These are all really small amounts of time. In a test, I took two 128bpm tracks and played them synchronised. Then I moved the second track away from the first until I could hear that they were no longer in sync. The test showed that a 70ms offset between the tracks was not noticeable to me.
So I think we don’t have to bother about 10ms latency in our setups.

    Cheers
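It can help to put that test in terms of beat fractions rather than raw milliseconds. A quick sketch (just the arithmetic, nothing measured):

```python
def beat_ms(bpm):
    """Duration of one beat in milliseconds."""
    return 60000.0 / bpm

def offset_fraction(offset_ms, bpm):
    """Offset between two synced tracks as a fraction of one beat."""
    return offset_ms / beat_ms(bpm)
```

At 128bpm a beat lasts 468.75ms, so a 70ms offset is about 15% of a beat – which is why individual listeners report such different thresholds: you are judging a phase error relative to the tempo, not an absolute delay.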

      • Sambo

I’m with Emil – I just did the exact same test, and the beats were only 16ms out before I heard the galloping echo. Bad times.

  • Maximus Moretta

    Good Article!

  • Payload

    Great info cheers

  • 16b441khz

The speed of sound is easier to remember as 344m/s at 21 degrees Celsius; saying “at room temperature” is a bit subjective, as what’s room temperature to one person is different to another. Otherwise the article was cool. Maybe something to write about in the future: is it worth playing audio at higher sample rates than 44.1kHz, or should that be left just for recording?
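The temperature dependence itself is easy to capture. A sketch using the standard linear approximation for dry air (accurate near room temperature; it drifts at extreme temperatures):

```python
def speed_of_sound_m_per_s(temp_c):
    """Approximate speed of sound in dry air, valid near room
    temperature: v ≈ 331.3 + 0.606 * T (T in degrees Celsius)."""
    return 331.3 + 0.606 * temp_c
```

At 21°C this gives almost exactly 344m/s, matching the figure above, while a cold 0°C room drops it to about 331m/s – a small but real shift in the per-metre latency.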

    • Will Marshall

      Room temperature is an official standard (seriously), and is defined as 21 C.

      It’s used in chemistry, physics etc.

  • Maxwell Raeburn

Thanks as well! Very helpful.

  • Maxwell Raeburn

    FIRST!

    • SynthEtiX

      lol fail