We've been dealing with data cables, in one sense or another, for a long while here at Blue Jeans Cable -- HDMI is nothing but a data cable with show-biz pretensions, after all -- but only more recently did we really take a look at, and become involved seriously in, the world of Ethernet cabling. Like many people, we had long assumed that while Ethernet cable quality wasn't irrelevant, it wasn't a big issue -- that the cable on the market was doing what it was supposed to. A bit of performance testing on consumer-market Cat 6 and 6a cabling changed our minds on that, and we decided to bring to the consumer Ethernet cable market what we have brought in audio and video: quality American-built products that actually do what they say.
What we've learned about cable has been interesting; it's harder to make compliant Cat 6a patch cables than one might think, for example, and for reasons that are less than obvious. But what we've learned about the marketplace is interesting, too. We've learned that there are a lot of people who don't believe that the quality of a data cable really can matter in real-world terms. They see the tests we've published, showing that the consumer market is flush with noncompliant products that fail the applicable specifications, and they smell a rat. They think that this is some sort of a snow job, despite the fact that we're simply using industry-accepted test gear to test these products against the minimum specifications published by those -- TIA and ISO -- whose job it is to write the specs that will make a network function properly.
We get that. We sympathize with it, even. For too long the consumer market for audio and video cabling has been the province of people who are willing to tell tall tales just to sell five dollars worth of wire for five hundred dollars, or five thousand. And when we tell people that the five-dollar Ethernet patch cord from a price-driven online vendor is badly noncompliant, but that the fifteen-dollar one we make will do the trick, they suppose that the same scam -- at a less steep price gradient -- is at work.
There is a good deal to say on this subject -- data networking and transmission line theory are bottomless topics -- but today we're going to start by talking about one of the most common misconceptions we encounter when talking about these issues with people, and that's what one might call the "digital fallacy." The digital fallacy assumes that sending information digitally over wires is easy, at any speed. The rallying cry of this fallacy: "remember, it's all just ones and zeros."
Now, the statement that "it's all just ones and zeros" has legitimate applications. When someone says that one S/PDIF cable sounds richer than others, while another has boomy bass, we'll be the first ones in the room to say that it's a digital signal and that we know of no possible way that such a thing could happen. Indeed, in just about any situation where one can assume that two cables are equally capable of getting the data from point A to point B such that all the ones arrive as ones and all the zeros arrive as zeros, it's fair to say that it truly is all ones and zeros, and that the manner in which the ones and zeros got through the cable isn't of great interest.
But are they ones and zeros? And do they really always get there? There's more to this than the digital fallacy would suggest.
It is reasonable enough to suppose that digital signaling is simple. Morse code, as sent by a ham radio operator, is a simple digital on-off encoding scheme in which the lengths and spacing of the "on" events contain the information. If we find a good operator to send Morse and a good operator to receive, we can get a decent sort of approximately-typing-speed communication going. Depending on how one characterizes the data rate, this comes to a speed of perhaps a few hundred baud. Assuming good conditions of reception, and good equipment, and operators who are proficient in copying code, it's simple and easy. There's always either a tone, or not a tone -- no in-betweens, no gradual blending from toneless to toned, no multiple levels of tone to account for. And that is the way we tend to think it is to send ones and zeros; the joy of digital, after all, is that while analog signals, and the information they carry, decay in an analog fashion, digital information doesn't -- as the signal decays, there is no loss of information at all up to a point, and then there is catastrophic loss. But it's easy to assume, without knowing, that the conditions for catastrophic loss are rare, and in this we are liable to be victims of the natural prejudices people have by virtue of being medium-sized and medium-speed.
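To make the "lengths and spacing of the on events" idea concrete, here is a toy keying sketch in Python. The timing convention (dot = one unit, dash = three, one unit of silence inside a letter, three between letters) is the standard Morse convention; the tiny two-letter table is just enough for the example.

```python
# Minimal Morse on/off sketch: the information lives entirely in the
# durations of "key down" (1) and "key up" (0) intervals.
MORSE = {"S": "...", "O": "---"}  # just enough of the code table for "SOS"

def keying(text):
    """Render letters as a string of 1s (key down) and 0s (key up)."""
    out = []
    for letter in text:
        for symbol in MORSE[letter]:
            out.append("1" if symbol == "." else "111")  # dot = 1 unit, dash = 3
            out.append("0")                              # gap inside a letter
        out.append("00")  # plus the in-letter gap, 3 units between letters
    return "".join(out)

print(keying("SOS"))
```

There is no ambiguity anywhere in that stream: at every instant the key is either down or up, which is exactly the intuition the "digital fallacy" generalizes from.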
What do I mean by that? Well, it's been pointed out that some things are hard for people to understand because we don't live long enough to see them (e.g., evolution by natural selection); other things are hard to understand because they happen at size scales far below our thresholds of perception (e.g., quantum mechanics); other things are hard to understand because their effects are seen at speeds which we cannot attain or easily observe (e.g., relativity). Don't worry -- we're not going to get all woo-woo here and go off and try to make a bogus case from quantum mechanics, which we've seen plenty of people do while trying to sell cabling or to promote some pseudoscientific point of view. The point, simply, is that because we are the size and the speed we are, we're not very good at imagining how things happen that happen very fast. And data transmission down network cable -- well -- that happens VERY fast. Electricity travels through a wire insulated with solid polyethylene (a common dielectric in Ethernet cables) at about two-thirds the speed of light.
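To put a number on "VERY fast," here is a back-of-the-envelope sketch using the two-thirds-of-light figure quoted above (a typical velocity factor for solid polyethylene; real cables vary somewhat).

```python
# Propagation delay of a signal in a solid-PE-insulated wire,
# assuming a velocity factor of about 2/3 (per the text above).
C = 299_792_458          # speed of light in vacuum, m/s
VELOCITY_FACTOR = 2 / 3  # fraction of c in the dielectric (assumed)

speed = C * VELOCITY_FACTOR   # signal speed in the cable, m/s
delay_per_meter = 1 / speed   # seconds to travel one meter
print(f"{delay_per_meter * 1e9:.2f} ns per meter")  # ~5 ns/m
```

Five nanoseconds per meter: invisible to a person flipping a light switch, but an eternity at data rates where a single bit lasts a fraction of a nanosecond.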
Because we are the size and speed that we are, we see electricity as something invisible which operates instantaneously. Turn on the light switch to an incandescent bulb, and the light is on. You're not aware of the delay between the time the switch makes contact and the time the energy arrives at the bulb; you're not even, in most cases, aware of the much longer time between that and the time the filament warms up sufficiently to produce light. And why should you be? For purposes of looking into a previously dark room, nanoseconds don't matter; nobody has THOSE kinds of neural response times or reflexes.
But, in fact, what happens when you throw that switch is complicated. To really see it, in all its complex glory, you need some pricey test gear that can tick off the nanoseconds, or picoseconds even, and measure what's going on; nobody really needs to see it in the case of the lamp, though, because it's irrelevant to the function of the lightbulb. Now, let's suppose that you want to use your light switch to signal the next door neighbor in Morse code. At five words a minute, no problem. But what would happen -- apart from a real headache, and a sore light-switch-throwing finger -- if you tried to run Morse to your next door neighbor at five million words a minute? All of a sudden, what happens to electricity at the nanosecond or picosecond level -- far below your perceptual threshold for what you consider "instantaneous" -- matters.
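The five-million-words-a-minute thought experiment can be given rough numbers. Using the common "PARIS" convention (50 dot-units per word, so a dot lasts 1.2/wpm seconds), the dot length collapses from a comfortable fraction of a second into the nanosecond territory where cable behavior starts to matter:

```python
# Morse dot duration under the standard 50-dot-unit "PARIS" word:
# dot length in seconds = 1.2 / (words per minute).
def dot_length_s(words_per_minute):
    return 1.2 / words_per_minute

print(f"{dot_length_s(5):.3f} s per dot")                 # 5 wpm: 0.240 s
print(f"{dot_length_s(5_000_000) * 1e9:.0f} ns per dot")  # 5M wpm: 240 ns
```

At five words a minute each "on" event is long enough that nothing about the wire matters; at five million, every dot is only a couple of hundred nanoseconds long, and the nanosecond-scale behavior of the line is suddenly a large fraction of each symbol.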
How? Let's go into more detail.
Just as they say of politics, all physics is local. Electrons and fields don't generally respond to things far away -- they respond to their immediate circumstances, and energy propagates at speeds which, though very fast, are finite. When you throw a light switch, for lighting purposes it's sufficient to suppose that all of the electricity in the wire begins to flow all at the same time, but that's not really what happens. A wave of energy begins to propagate down the wire when the switch is turned on; it takes time for that wave to travel; and the wave changes its characteristics as it moves.
As the wave travels, it in no sense "knows" what it will find. Down the line are characteristics of the circuit that will affect how the energy of the wave is absorbed -- capacitance will store some of the charge, impeding changes in voltage, while inductance will impede changes in its flow and resistance will convert it to heat -- but the wave starts traveling down the wire without any reaction to any of that. It'll react -- soon, because electricity in a solid PE dielectric is still traveling at 2/3 the speed of light -- but the interplay between what happens where the signal enters the wire and what happens to the flows afterwards becomes surprisingly complex. The particular combination of inductance and capacitance in the cable gives it its "characteristic impedance," and if it doesn't match the impedance of the source and load circuits, and/or is inconsistent from point to point along the cable, strange things happen. In practice, all systems have SOME amount of impedance mismatch, but how MUCH impedance mismatch becomes a very important factor. What controls capacitance and inductance, and hence characteristic impedance? It's purely a matter of manufacturing quality -- consistent, accurate control over dimensions and materials, in a domain where tiny differences matter.
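The relationship between those per-unit-length values and characteristic impedance is, for a lossless line, just Z₀ = √(L/C). The L and C figures below are illustrative round numbers chosen to land on the 100-ohm target of twisted-pair Ethernet, not measurements of any particular cable:

```python
import math

# Hypothetical per-meter values for a nominal 100-ohm twisted pair.
# Holding these constant along the whole run is the manufacturing problem.
L = 525e-9    # series inductance, henries per meter (illustrative)
C = 52.5e-12  # shunt capacitance, farads per meter (illustrative)

z0 = math.sqrt(L / C)  # lossless-line characteristic impedance, ohms
print(f"Z0 = {z0:.1f} ohms")  # 100.0
```

Notice what the formula implies: any drift in conductor spacing or dielectric along the run changes L and C, and therefore changes Z₀ at that point -- which is exactly the point-to-point inconsistency described above.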
The capacitance and the inductance affect the travel of voltage and current, and if the voltage and the current are being affected at different rates, it upsets the phase relationship between them. This will cause some of the signal's energy to find it easier to discharge by reversing course and going back the way it came--and it can turn on a dime. We call this a "reflection," and it happens whenever the characteristic impedance of the line changes. The more consistent the characteristic impedance -- and the more well-matched the characteristic impedance of the line is with respect to the signal's source and the load it drives -- the smaller the reflections will be.
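How much of the wave turns around at an impedance step is given by the standard transmission-line reflection coefficient, (Z_load − Z₀)/(Z_load + Z₀). A quick sketch with a made-up but plausible mismatch:

```python
def reflection_coefficient(z_load, z0):
    """Fraction of the incident voltage wave reflected at an
    impedance discontinuity (standard transmission-line formula)."""
    return (z_load - z0) / (z_load + z0)

# A modest mismatch: a 100-ohm line meeting a 110-ohm load (illustrative).
gamma = reflection_coefficient(110, 100)
print(f"{gamma:.3f} of the incident wave reflects")  # ~0.048
```

A perfectly matched load gives a coefficient of zero, and nothing reflects; the worse the mismatch, the larger the fraction of the signal that "turns on a dime."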
Now, what does that reflection do? As it travels back down the line it interferes with the original signal, cancelling or augmenting parts of it. The "ones and zeros" (well, often not ones and zeros -- we'll get to that in a bit) have the noise of other ones and zeros overlaid upon them. We call this phenomenon "return loss," and it's the same sort of thing you've seen if you're a radio operator examining an impedance mismatch in a transmitting line and antenna (in which case you probably think of it in terms of "SWR," or "standing-wave ratio").
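Return loss and SWR are two ways of expressing the same mismatch, and both follow directly from the reflection coefficient. Continuing the illustrative 100-ohm-into-110-ohm example:

```python
import math

def return_loss_db(gamma):
    # Return loss in dB: larger numbers mean smaller reflections.
    return -20 * math.log10(abs(gamma))

def swr(gamma):
    # The radio operator's standing-wave ratio for the same mismatch.
    g = abs(gamma)
    return (1 + g) / (1 - g)

gamma = 0.048  # e.g., a 100-ohm line driving a 110-ohm load (illustrative)
print(f"return loss {return_loss_db(gamma):.1f} dB, SWR {swr(gamma):.2f}")
# ~26.4 dB return loss, SWR ~1.10
```

This is why cable specs state return loss in dB down from the signal: it is the reflected echo's level relative to the signal it will be overlaid upon.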
All of the energy in a current flow isn't in or on the wire; any flow of current causes an energy field to be formed in the space surrounding the flow, and we call this phenomenon "inductance." Any time a current generates a field around it, this field has the ability to induce flow in a neighboring conductor, and this is the cause of the phenomenon we call "crosstalk." The term "crosstalk" comes from the telephone world, where we are all familiar with the problem of faint voices in the background of our conversations, caused by the proximity of individual pairs of wires, each carrying a call, to one another in large telephone-company cables.
Every data pair in a cable causes some amount of crosstalk in the other pairs. The pairs are given different twist rates to control this phenomenon (this makes the physical presentation of the different pairs to one another inconsistent over distance, suppressing crosstalk), but while crosstalk can be controlled it cannot be eliminated. Meanwhile, if there are cables run together in a group, e.g., in a cable tray, each cable can cause crosstalk in its neighbors (termed "alien crosstalk"). Once again, just as with return loss, what we have is more faint echoes of other data being added to the datastream in our pair. The fact that the noise coming from return loss and from crosstalk runs at the same data rate as the intended signal means that there's no easy way to get rid of any of this -- we can't filter it out.
Most of the noise-rejection properties of paired data cable come from the use of balanced, "differential" signaling, which permits common-mode noise rejection. But differential signaling itself introduces one increasing problem as our data rate increases.
Differential signaling works by sending a "plus" and a "minus" side of the same signal down the members of a pair. The notion is that these two sides of the signal arrive in time with one another, and as the receiving circuit takes the difference between them rather than the voltage on either relative to ground, any noise which affects both sides of the signal in "common mode" -- that is, which increases or decreases the voltage on both sides of the pair -- effectively disappears. A critical part of this, of course, is that the signal on both sides must arrive at the same time. But in fact, the two wires in a data pair are never exactly the same length; there is always some difference in electrical length, either due to actual differences in physical length or due to inconsistencies in dielectric material or application. This means that the two sides do not arrive quite in time with each other, and to the extent they are out of time with one another, this smears any transition in voltage over time, with half of it arriving earlier than the other half. Here again, manufacturing consistency is the key to cable quality.
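The smearing effect of skew is easy to see in a toy model. Here each wire carries half the swing, the receiver takes the difference between the sides, and the minus side arrives a little late; the skew figure is an arbitrary illustrative number, not a real cable measurement:

```python
# Toy intrapair-skew sketch: the receiver computes (plus side) - (minus side).
def step(t, arrival, level):
    """One wire of the pair: a step to `level` arriving at time `arrival` (ns)."""
    return level if t >= arrival else 0.0

skew = 0.4  # assumed electrical-length mismatch between the wires, ns
for t in [0.0, 0.2, 0.4, 0.6]:
    v_plus = step(t, 0.0, +0.5)    # + side arrives on time
    v_minus = step(t, skew, -0.5)  # - side arrives `skew` ns late
    print(f"t={t:.1f} ns -> differential {v_plus - v_minus:.1f}")
# The single clean 0-to-1 transition is smeared into two half-steps.
```

A transition that should be one sharp edge becomes a staircase spread over the skew interval -- and the faster the bits come, the larger a fraction of each bit that staircase occupies.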
Some data signaling systems do use a straight binary encoding scheme; HDMI is one of these. In such a system, the signal alternates between two voltage levels, with one voltage being a "one" and the other being a "zero." The original signal, as generated at the source, while not quite a "square wave," where transitions between voltages are instantaneous, nonetheless has very sharp, rapid transitions between these voltage states. Now, between return loss, the effects of capacitance, attenuation, skew, and crosstalk, two things have happened to the signal by the time it reaches the destination: it's lost some strength, so that the voltages read are less different from one another than they were at the source, and it's gotten smeared over time a bit and overlaid with spurious signal from crosstalk and return loss, so that the transitions between these voltages are more gradual than they were at the source. Instead of "ones and zeros," looking at the signal in analog terms we might have a lot of "one-quarters" and "three-quarters," with an increased amount of time in the transitions where the signal is somewhere between. Meanwhile, the points in time where these transitions occur can be shifted somewhat by the signal content -- after a string of "ones," the voltage reaches a higher point at output than it does after a single "one," due to the time-smearing effects of capacitance and return loss, and this means that when the next "zero" comes, the voltage will start higher and take more time to cross the midpoint. Reading this stream is, accordingly, trickier than it sounds.
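Those "one-quarters" and "three-quarters" fall out of even the crudest smearing model. Here an ideal 0/1 bit stream is run through a first-order low-pass filter -- a deliberate oversimplification of what a cable does, with an arbitrary smoothing factor -- just to show the received values landing between the nominal levels and depending on the preceding bits:

```python
# Toy time-smearing sketch: ideal bits through a first-order low-pass.
bits = [1, 1, 1, 0, 1, 0, 0, 1]
alpha = 0.5   # illustrative smoothing factor, not a real cable model
v = 0.0
received = []
for b in bits:
    v = v + alpha * (b - v)       # the signal can't slew instantly
    received.append(round(v, 3))
print(received)
# [0.5, 0.75, 0.875, 0.438, 0.719, 0.359, 0.18, 0.59]
```

Note that after three "ones" in a row the voltage climbs to 0.875, but a lone "one" after zeros only reaches 0.59 -- exactly the content-dependent level shift described above, which is what moves the transition crossing points around in time.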
Here, to give you some idea of what a signal looks like after some travel through cable, is an "eye-pattern" chart from an HDMI cable. This pattern is generated by overlaying multiple passes of a pseudorandom bit sequence upon one another, so that instead of seeing one "bit" you're seeing many bits laid on top of one another, represented by the colored traces. The horizontal axis is time, and the vertical axis is voltage. This eye pattern depicts a signal which is not terribly degraded, so that the difference between a "one" and a "zero" is still easy to read if your clock samples the bit at the right point in time -- but as the effects of return loss and crosstalk and attenuation pile up (by making the cable longer, or of lower quality) this becomes more difficult, and eventually impossible, to do. As you can see, even in this very clean "eye," the value representing a "one" or a "zero" is quite variable, and the transition between them is not sharp and clean, but has a considerable (and variable) slope time.
In fact, while the common claim is that Ethernet is "all ones and zeros," the various common Ethernet protocols are NOT simple binary encoding schemes. The most extreme example is 10GBaseT, which is encoded in a sixteen-level system called PAM-16. These multilevel coding structures are used because they enable higher data rates for a given bandwidth--but the cost of doing this is that one must distinguish accurately between many different voltage levels. The waveform generated, instead of looking like an on/off switch between two voltage levels, now looks rather like an analog waveform.
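To see why a PAM-16 waveform "looks rather like an analog waveform," consider the mapper at its core: four bits per symbol, sixteen evenly spaced levels. This sketch is a simplification -- real 10GBaseT layers scrambling, error-correction coding, and a two-dimensional DSQ128 constellation on top of the raw levels -- but the level spacing is the point:

```python
# Simplified PAM-16 mapper: a 4-bit value (0..15) becomes one of
# sixteen evenly spaced voltage levels in [-1, +1].
def pam16_level(bits4):
    return -1.0 + bits4 * (2.0 / 15)

levels = [pam16_level(b) for b in range(16)]
print(round(levels[0], 6), round(levels[15], 6))  # the two extreme levels
print(round(levels[8] - levels[7], 4))            # adjacent-level spacing
```

Each symbol now carries four bits instead of one, which is where the bandwidth efficiency comes from -- but adjacent levels sit only 2/15ths of the full swing apart, instead of the whole swing apart as in a binary scheme.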
And it's here, with multilevel encoding, that return loss and crosstalk and intrapair skew really start to become critical. The receiving circuit is trying to time events that happen millions of times per second, and is trying to measure the amplitude of the signal at those events. Even if distinguishing a "one" from a "zero" sounds easy, distinguishing a 5/8ths from an 11/16ths is quite a different matter. Return loss and crosstalk that are tens of dBs down from the signal level -- and which one might therefore suppose to be too small to matter -- can make these tasks more difficult, or even impossible.
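The arithmetic behind that claim is worth spelling out. Noise 30 dB below the signal is about 3% of the full swing -- negligible against a binary decision, but a sizeable fraction of one PAM-16 level spacing:

```python
import math

# Margin sketch: the same noise floor against binary vs. 16-level signaling.
swing = 1.0                        # full signal swing (normalized)
noise = swing * 10 ** (-30 / 20)   # noise 30 dB below the signal, ~0.032

binary_spacing = swing             # two levels: the whole swing between them
pam16_spacing = swing / 15         # sixteen levels: 1/15 of the swing

print(f"binary: spacing is {binary_spacing / noise:.1f}x the noise")   # ~31.6x
print(f"PAM-16: spacing is {pam16_spacing / noise:.1f}x the noise")    # ~2.1x
```

The same disturbance that leaves a binary eye wide open eats a large fraction of each PAM-16 eye -- which is why the multilevel specs demand return loss and crosstalk performance that would be overkill for a two-level signal.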
As we've said, we have no quarrel at all with the idea that if the data all get through the cable in good order, without dropped packets, "better" cable quality will make no difference to the performance of a network. But the "if" in that statement is very easy to overlook, because of the deceptive simplicity of the "all ones and zeros" mantra. Correct transmission of digital data is important, and it's not without problems. Every last one of those problems, at the cable level, is a function of quality control: better dimensional control makes for better impedance consistency, which means less return loss; consistent and well-designed twist rates and pair spacing mean less crosstalk; consistent dielectric materials and pair-twisting mean less intrapair skew; and careful termination with well-designed connectors means less disruption of all of these characteristics of well-made bulk cable. It is upon these kinds of considerations that that all-important "if" turns.
How, then, to tell if cable is good enough? Here, we turn to the specifications. There are, indeed, specifications for network cabling written by TIA and ISO -- not by smooth-talking cable vendors but by the same people who write the specs which govern the active parts of the network in which the cable will be employed. If cable quality didn't matter, there'd be one ancient spec document for Cat 3 cabling, compliance with which guaranteed good performance into the multi-Gigabit range. But, in fact, the specifications have gotten tighter as speeds have run higher. Cat 5, then 5e, then 6, then 6a -- the cable needs to exhibit better return loss and crosstalk performance, over a broader frequency spectrum, as one climbs the hierarchy of the specifications. Without spec compliance, the risk of poor performance increases dramatically.
Can cable below spec give good results? Yes, of course, but this is going to depend. Some factors upon which it depends are hard to predict -- how well the receiving circuit can reconstitute a signal, how good the quality of the signal leaving the sending circuits is, how long and how broken-up (by plug/jack interfaces) the cable runs are, and the like. Other factors have to do with the particular use: streaming audio and video reacts very poorly to even small amounts of latency, which means even a small number of dropped-and-resent packets will be noticeable. And, of course, it depends upon just how not-up-to-spec the cable is; a system that tolerates a failure of 1 dB may not tolerate a failure of 10 dB. Spec-compliant cable, on the other hand, removes this uncertainty -- if the cable is compliant, and the devices are compliant, it's pretty much all smooth sailing.
On spec compliance, one last note: it's very important not to assume specification compliance based upon jacket labeling. We've seen a lot of non-compliant cable on the market -- some cables we've found that are sold as Cat 6a (required to meet tough standards up to 500 MHz) don't actually pass the specification at Cat 5e (required to meet much looser standards, and only up to 100 MHz). We've seen that with popular online cable vendors, with venerable consumer electronics brands, with familiar brands sold in office supply stores and electronics stores -- everywhere in the consumer market, the rule is that one cannot assume that a cable complies with the spec printed on the jacket without running a proper certification test (which must be a patch cord test, not a "channel" test, in the case of a patch cord!) on that specific cable.