The industry appears to be transitioning from 10x10G to a more efficient 4x25G. The first standard for 25Gb/s signaling was the OIF-CEI, with its VSR, SR, and LR variants. What I have been looking at recently are the IEEE 802.3bj backplane and copper interconnect standards, including 100GBASE-KR4, along with the related CAUI-4 work in 802.3bm.
To achieve 100G there is CAUI-10, which is 10 lanes of 10G = 100G, while CAUI-4 is 4 lanes of 25G = 100G.
100GBASE-CR4 carries 4 lanes of 25G over copper, paired with CAUI-4 on the host, for 100G of throughput.
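The lane arithmetic can be pinned down in a couple of lines; this is purely illustrative, and the helper function name is mine, not from any standard:

```python
# Aggregate throughput of the two 100G electrical interfaces.

def aggregate_gbps(lanes: int, gbps_per_lane: float) -> float:
    """Total interface throughput in Gb/s."""
    return lanes * gbps_per_lane

caui10 = aggregate_gbps(10, 10.0)  # CAUI-10: 10 lanes of 10G
caui4 = aggregate_gbps(4, 25.0)    # CAUI-4:  4 lanes of 25G

assert caui10 == caui4 == 100.0    # same 100G, 60% fewer lanes with CAUI-4
```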
All three standards below (from the 100 Gb/s Backplane and Copper Cable Task Force) are intended to let designers drive 100G of data over either backplanes or passive copper DAC (Direct Attach Copper) cable.
100GBASE-CR4: 100 Gb/s transmission using 100GBASE-R encoding and Clause 91 RS-FEC over four lanes of shielded balanced copper cabling, with reach of at least 5 m
100GBASE-KP4: 100 Gb/s transmission using 100GBASE-R encoding, Clause 91 RS-FEC, and 4-level pulse amplitude modulation over four lanes of an electrical backplane, with a total insertion loss of up to 33 dB at 7 GHz
100GBASE-KR4: 100 Gb/s transmission using 100GBASE-R encoding, Clause 91 RS-FEC, and 2-level pulse amplitude modulation over four lanes of an electrical backplane, with a total insertion loss of up to 35 dB at 12.9 GHz
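For quick reference, the three PMD definitions above can be collected into a small lookup table, a sketch with the values copied from the text (the CR4 entry is characterized by reach rather than a dB budget):

```python
# The three 802.3bj copper PMDs described above, collected for reference.
# Loss budgets and frequencies are the figures quoted in the text; the
# CR4 entry is keyed by reach (at least 5 m of cable) instead.
PMDS = {
    "100GBASE-CR4": {"modulation": "PAM2", "medium": "twinax DAC",
                     "reach_m": 5.0},
    "100GBASE-KP4": {"modulation": "PAM4", "medium": "backplane",
                     "loss_db": 33.0, "at_ghz": 7.0},
    "100GBASE-KR4": {"modulation": "PAM2", "medium": "backplane",
                     "loss_db": 35.0, "at_ghz": 12.9},
}

for name, pmd in sorted(PMDS.items()):
    print(name, pmd)
```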
100GBASE-CR4 is 4 lanes of 25G over DAC cable up to 5 meters (target 16.5 dB). After years of work on high-speed backplane designs, this new standard has been my recent focus. Five meters of DAC is a relatively decent distance if you need to connect switches in a stack, or even between stacks in adjacent rows of switches. (Reference: Facebook Gives Lessons In Network-Datacenter Design)
100GBASE-KP4 is 4 lanes of 25G using PAM4 modulation, which allows a designer to drive backplane channels with up to 33 dB of insertion loss at 7 GHz. This standard effectively uses ~14G signaling with multilevel modulation over long backplanes (including vias, connectors, and PCB etch).
100GBASE-KR4 is 4 lanes of 25G using PAM2 (NRZ) modulation to drive backplane channels with up to 35 dB of insertion loss at 12.9 GHz. This is effectively driving 4 lanes of 25G signaling over relatively long backplane traces that include connectors, vias, and PCB etch.
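The frequencies at which the loss budgets are quoted track each PMD's Nyquist frequency, which is half the symbol rate. A small sketch, assuming the nominal 802.3bj per-lane rates of 25.78125 GBd for PAM2 and 13.59375 GBd for PAM4 (PAM4 roughly halves the symbol rate, which is why the KP4 budget is quoted near 7 GHz):

```python
# The loss-budget measurement frequencies track each PMD's Nyquist
# frequency (half the symbol rate). Rates below are the nominal
# 802.3bj per-lane values, assumed here rather than taken from the text.

def nyquist_ghz(symbol_rate_gbd: float) -> float:
    """Nyquist frequency in GHz for a given symbol (baud) rate in GBd."""
    return symbol_rate_gbd / 2.0

kr4_baud = 25.78125  # GBd: PAM2 carries 1 bit per symbol
kp4_baud = 13.59375  # GBd: PAM4 carries 2 bits per symbol (plus overhead)

print(f"KR4 Nyquist ~{nyquist_ghz(kr4_baud):.2f} GHz")  # close to 12.9 GHz
print(f"KP4 Nyquist ~{nyquist_ghz(kp4_baud):.2f} GHz")  # close to 7 GHz
```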
In the recent past we enabled FEC (Forward Error Correction) inside our chassis to buy design margin in the SERDES backplane channels, often as much as an order of magnitude of BER margin. This is particularly handy when trying to push multiple generations of blades into chassis that are already in the field. I have recently been learning about FEC in the IEEE standards and have just started to scratch the surface. In IEEE 802.3bj, the Clause 91 Reed-Solomon FEC carries a relatively high latency, while the Clause 74 FEC's latency is significantly lower (at the cost of weaker error correction). When no FEC is used you have only the time-of-flight latency of the interconnect, with nothing added for error correction. Time-of-flight latency is pure physics and cannot be worked around.
In applications where latency is critical, the end user would like to avoid FEC, but if switches are far enough apart it may be required. Below is a table from the IEEE 802.3 25Gb/s Ethernet Study Group that summarizes the reach advantage of enabling FEC versus the latency penalty. Some customers consider the 250 ns latency of the Clause 91 RS-FEC to be unacceptable. However, depending on how much margin is in the PCB design, some engineers suggest there is enough guardband in the IEEE specification (intended to cover older package technologies) that 5 m passive DAC lengths may still be achievable by using thicker conductors in the cable (26 AWG) with moderate or no FEC. I hope to attend the next plugfest at the UNH-IOL and gain more insight!
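To put the 250 ns RS-FEC figure next to raw time-of-flight, here is a rough sketch; the ~0.7c propagation velocity is my assumed typical value for twinax, not a number from the study group material:

```python
# Rough latency budget for a 5 m DAC link: time-of-flight plus optional
# FEC latency. The ~0.7c propagation velocity is an assumed typical
# value for twinax; 250 ns is the RS-FEC figure quoted above.

C_M_PER_S = 299_792_458.0  # speed of light in vacuum

def time_of_flight_ns(length_m: float, velocity_factor: float = 0.7) -> float:
    """One-way propagation delay over the interconnect, in nanoseconds."""
    return length_m / (velocity_factor * C_M_PER_S) * 1e9

tof = time_of_flight_ns(5.0)  # roughly 24 ns for 5 m of cable
rs_fec_ns = 250.0             # Clause 91 RS-FEC latency from the text

print(f"no FEC: {tof:.1f} ns, with RS-FEC: {tof + rs_fec_ns:.1f} ns")
```

The point of the sketch: for a 5 m cable, the RS-FEC penalty is an order of magnitude larger than the flight time itself, which is why latency-sensitive customers push back on it.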
Another interesting table that helps clear up some of the confusion is the nomenclature-versus-clause correlation table, showing what is optional and what is mandatory. It is also available in the working-group presentations and shows the Clause 91 RS-FEC as mandatory for 100GBASE-CR4 at the lengths we would like to support (5 meters).
After beginning to understand some of the specifications, I started digging into new designs and testing on a 100G-port switch. Running eye scans using the internal MAC software, one can optimize the TX pre-emphasis and main-cursor settings, or choose to use the auto-tuning features often available in the newest ASICs. By looping one port back into another with a short loopback DAC, I can avoid the test fixtures that are not always available to drive directly into my oscilloscope. In the past, I used a Wilder test fixture to break out the 10G and 40G SFP+/QSFP+ to drive directly into my scope and optimize TX main and pre-emphasis by intelligently tweaking the TX driver settings. More recently, we have relied on the eye-scan features of the MAC to perform this task. Below is a sample eye diagram from an ASIC eye-scan report.
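As a sketch of what the pre-emphasis and main-cursor knobs actually do, here is a minimal 3-tap TX feed-forward equalizer over an NRZ stream; the tap weights are illustrative only, not settings from any particular ASIC:

```python
# A minimal model of the TX "pre-emphasis and main" knobs: a 3-tap
# feed-forward equalizer applied to an NRZ symbol stream. Tap weights
# are illustrative only, not settings from any particular ASIC.

def tx_ffe(bits, pre=-0.1, main=0.8, post=-0.1):
    """Apply a 3-tap FFE (pre-cursor, main, post-cursor) to NRZ bits."""
    symbols = [1.0 if b else -1.0 for b in bits]
    out = []
    for i in range(len(symbols)):
        nxt = symbols[i + 1] if i + 1 < len(symbols) else 0.0
        prv = symbols[i - 1] if i > 0 else 0.0
        # the pre tap weights the upcoming symbol, the post tap the previous
        out.append(pre * nxt + main * symbols[i] + post * prv)
    return out

wave = tx_ffe([0, 0, 1, 1, 1, 1, 0, 0])
# transition samples come out larger than mid-run samples, boosting edges
assert abs(wave[2]) > abs(wave[3])
```

Tuning amounts to sweeping these taps until the far-end eye (as reported by the eye scan) opens up.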
After I accumulate and verify models in my network/channel simulation tools (e.g., SiSoft's QCD), I import the physical design. Then I generate masks from the IEEE specs and run post-layout simulations on every high-speed channel on my PCB, making sure the critical stackup electrical parameters (Df, Dk, Cu roughness) are defined and previously verified. I can run hundreds of simulations quickly, examine the worst-case networks, and either optimize them in the layout or verify they are adequate. I also make sure to properly define via stubs and ensure the layout meets all of the semiconductor vendors' design rules.
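The mask-style pass/fail step can be sketched as a limit-line comparison over a simulated frequency sweep; the limit function below is a simple linear ramp hitting the 35 dB at 12.9 GHz budget quoted earlier, not the actual piecewise equation from the IEEE spec:

```python
# A sketch of the mask-style check: compare a channel's simulated
# insertion loss against a frequency-dependent limit line. The limit
# here is a simple linear ramp hitting the 35 dB at 12.9 GHz budget
# quoted earlier, NOT the actual piecewise equation from the IEEE spec.

def il_limit_db(f_ghz: float) -> float:
    """Illustrative insertion-loss limit, scaled to 35 dB at 12.9 GHz."""
    return 35.0 * f_ghz / 12.9

def channel_passes(sweep) -> bool:
    """sweep: iterable of (freq_GHz, loss_dB) pairs, loss as positive dB."""
    return all(loss <= il_limit_db(f) for f, loss in sweep)

# Hypothetical post-layout results for one backplane channel
sweep = [(1.0, 2.1), (6.45, 16.0), (12.9, 31.5)]
print(channel_passes(sweep))  # True: every sample is under the limit line
```

Running a check like this across hundreds of extracted channels is what makes the worst-case networks stand out quickly.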
Robert Haller, senior principal hardware engineer at Extreme Networks, works on next-generation Ethernet switching solutions, serves as the corporate Signal and Power Integrity lead, and has been a member of the DesignCon Technical Program Committee for 16 years.