High-Speed Interconnect Solutions Evolve for Data Centers

At DesignCon 2025, Amphenol highlighted interconnect solutions spanning from short-reach millimeters to long-range kilometers.

Daphne Allen, Editor-in-Chief

February 1, 2025

Sam Kocsis, director of standards and technology at Amphenol, spoke at DesignCon 2025. Image credit: Daphne Allen/Informa Markets Engineering

Data centers move enormous volumes of signals as they store and process data and support AI analysis. These centers require interconnect solutions that carry high-speed signal traffic while protecting the integrity of that data.

At DesignCon 2025, Sam Kocsis, director of standards and technology at Amphenol, described several new interconnect solutions intended to deliver the performance, reliability, and efficiency needed for next-generation data center deployments. Kocsis is active in IEEE 802.3, OIF, and OCP projects, and is currently a co-chair of the OSFP MSA and chairman of the PCI-SIG Cabling Workgroup.

During his presentation, "Leading the Evolution of High-Speed Interconnect," Kocsis described solutions for interconnects spanning from short-reach millimeters to long-range kilometers.

For instance, cable backplane solutions are becoming more popular. “We have a long-standing history of backplane connectors providing dense, high-speed connectivity within the rack through our mainstays of XCede, ExaMAX, and Paladin connector families. And in the last year and a half, we have seen those types of interfaces migrate to cable interfaces, and the cable backplane has become one of the leading product lines for our High-Speed Products Group,” said Kocsis.

Amphenol offers interconnect solutions using passive copper and active copper designs. Using near-package copper and co-packaged copper not only extends the reach of passive copper interfaces, but also provides potential migration to co-packaged optics, Kocsis told the audience. “We're finding that the deployments to have both intra-rack and inter-rack copper connectivity require a little bit more than passive copper can achieve at these data rates,” he said. “And in the last two years or so, we have made significant advancements with our organic growth in optics markets supporting leading edge deployments in different form factors. We see the landscape for 1.6 T to be very diverse, and we are developing a range of linear pluggable optics (LPO) and retimed solutions to support market demands. It is exciting for us to not only have the success we have had in copper solutions but also show that we have a path to support applications in the optics space as well.”

Data system architectures are also evolving, Kocsis explained. “Traditional servers are becoming disaggregated into their core network elements, providing the opportunity for more interconnects to provide new paths to make those connections. We are seeing appliances with purpose-built compute nodes, memory nodes, and accelerator nodes be disaggregated into different positions within a rack, opening the opportunity for more connectivity.”

Amphenol has responded with backplane solutions in different configurations (orthogonal, right-angle, vertical mates) to provide connectivity as the architectures and platforms become more complex, he added. “We are looking at connectivity not only from the top of the rack to the bottom of the rack but also at connections to adjacent racks through active copper solutions. We're exploring different technologies to make the passive copper solutions achieve longer reach, higher signal performance, and more density.

“This is really going to be the year where 224 G starts to ramp, and we see wide deployment across our customer base,” he added. “We've got an arsenal of solutions that are industry leaders in the backplane, mezzanine, and the IO space, all working together. And it all starts with the twinax cable that we have assembled to get the most bandwidth, and the flexibility to support connectivity to all of these different types of interfaces.”
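
For context, a back-of-envelope sketch (an illustration using nominal figures, not numbers from the presentation) relates a 224-Gb/s-class lane's PAM4 symbol rate to the raw bandwidth of an 8-lane, 1.6T-class port:

```python
# Back-of-envelope arithmetic for 224G-class electrical lanes. Illustrative only;
# actual per-lane line rates and usable payload depend on each standard's FEC
# and encoding overhead.

PAM4_BITS_PER_SYMBOL = 2  # PAM4 signaling carries 2 bits per symbol

def symbol_rate_gbaud(line_rate_gbps: float) -> float:
    """PAM4 symbol (baud) rate for a given per-lane line rate in Gb/s."""
    return line_rate_gbps / PAM4_BITS_PER_SYMBOL

def port_bandwidth_tbps(line_rate_gbps: float, lanes: int) -> float:
    """Aggregate raw electrical bandwidth across all lanes of a port, in Tb/s."""
    return line_rate_gbps * lanes / 1000.0

if __name__ == "__main__":
    lane_rate = 224.0  # Gb/s per lane, nominal "224G" SerDes class
    lanes = 8          # e.g., an 8-lane, 1.6T-class port

    print(f"PAM4 symbol rate: {symbol_rate_gbaud(lane_rate):.0f} GBd per lane")
    print(f"Raw port bandwidth: {port_bandwidth_tbps(lane_rate, lanes):.2f} Tb/s")
    # -> 112 GBd per lane and roughly 1.79 Tb/s raw across 8 lanes; the usable
    #    payload after FEC and protocol overhead is lower.
```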

To support AI deployments, “socketed memory is becoming an attractive solution to get more performance,” said Kocsis. “There are signal integrity benefits of using the socketed memory solution over the card-edge DIMM-style interface. We have partnered with Dell to bring this solution to market and create a new JEDEC standard.”

To promote efficiency in data centers, Amphenol has developed solutions to support alternative cooling mechanisms. “Historically, everything has been air-cooled, and we have been focusing on the limitations of the modules in an air-cooled environment. We are now seeing an overwhelming trend to move to liquid cooling. We are exploring other options like immersion cooling as well, but liquid cooling is currently having its moment.”

Amphenol offers technology to support liquid cooling as well as a test and evaluation platform. For instance, rather than providing external sensors placed under cabling to detect liquid leaks, Amphenol has developed cables that themselves serve as sensors and can alert technicians to potential shorts.

“We've got a number of different sensor technologies that we're deploying as liquid cooling gets deployed into data centers,” he said. “We're seeing a need to be able to monitor and support some of the things that could go wrong with diagnostics and be able to control, monitor and send feedback to system controllers and management interfaces.”
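
As a rough illustration of that kind of telemetry loop, the sketch below polls a leak-detection cable and forwards alerts to a rack-management controller. It is entirely hypothetical: the sensor read-out, leak threshold, and management endpoint are invented for illustration and do not represent an Amphenol interface.

```python
# Hypothetical sketch of a leak-detection telemetry loop: poll a sensor cable,
# classify the reading, and push alerts to a rack-management endpoint.
# The read_leak_sensor() stand-in and the management URL are assumptions made
# for illustration, not a real product interface.

import json
import random
import time
import urllib.request

MANAGEMENT_ENDPOINT = "http://rack-mgmt.local/api/alerts"  # hypothetical endpoint
LEAK_THRESHOLD_OHMS = 100_000  # hypothetical: coolant on the sense wires lowers resistance

def read_leak_sensor() -> float:
    """Stand-in for reading the sense-wire resistance from a sensor cable (ohms)."""
    return random.uniform(50_000, 10_000_000)  # simulated measurement

def send_alert(payload: dict) -> None:
    """POST an alert to the (hypothetical) rack-management interface."""
    req = urllib.request.Request(
        MANAGEMENT_ENDPOINT,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)

def monitor(poll_interval_s: float = 1.0) -> None:
    """Poll the sensor cable and report any suspected leak to the controller."""
    while True:
        resistance = read_leak_sensor()
        if resistance < LEAK_THRESHOLD_OHMS:
            send_alert({
                "event": "possible_coolant_leak",
                "sense_resistance_ohms": resistance,
                "timestamp": time.time(),
            })
        time.sleep(poll_interval_s)

if __name__ == "__main__":
    monitor()
```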

Migration from PCIe Gen 6 to PCIe Gen 7 is also in development. “We've been promoting our Mini Cool Edge IO (MCIO and SFF-TA-1016) solution for the past number of generations. We feel confident that it can be used to support most Gen 7 deployments with minimum adaptations.” The form factor can be modified in a way that does not break backward compatibility while enabling forward compatibility with the extended bandwidth, he added.
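
To put that generational step in perspective, a quick calculation using the commonly cited per-lane transfer rates (64 GT/s for PCIe Gen 6 and the 128 GT/s targeted for Gen 7) compares raw x16 link bandwidth; the arithmetic is illustrative and not from the presentation:

```python
# Raw per-direction bandwidth of a PCIe link, before protocol and FEC overhead.
# Commonly cited transfer rates: Gen 6 = 64 GT/s per lane; Gen 7 targets 128 GT/s.

def raw_bandwidth_gbytes_per_s(gt_per_s: float, lanes: int) -> float:
    """Raw per-direction bandwidth in GB/s (one transfer carries one bit per lane)."""
    return gt_per_s * lanes / 8.0

if __name__ == "__main__":
    for gen, rate in (("Gen 6", 64), ("Gen 7", 128)):
        print(f"PCIe {gen} x16: ~{raw_bandwidth_gbytes_per_s(rate, 16):.0f} GB/s per direction (raw)")
    # -> Gen 6 x16: ~128 GB/s; Gen 7 x16: ~256 GB/s, each before encoding overhead.
```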

“There's a number of people looking to advance PCIe for optical solutions,” he continued. “PCIe has historically been limited to one meter internally with these types of cables, two meters with external IO cables. And if we are trying to create some type of compute fabric for AI scale-up applications with PCIe protocol, two meters is likely not going to be all that the end users are going to want. So, there is a lot of push to support optics and new form factors.” In its booth at DesignCon, Amphenol displayed a variety of cable lengths that can support 128 Gb/s.

About the Author

Daphne Allen

Editor-in-Chief, Design News

Daphne Allen is editor-in-chief of Design News. She previously served as editor-in-chief of MD+DI and of Pharmaceutical & Medical Packaging News and also served as an editor for Packaging Digest. Daphne has covered design, manufacturing, materials, packaging, labeling, and regulatory issues for more than 20 years. She has also presented on these topics in several webinars and conferences, most recently discussing design and engineering trends at MD&M West 2024 and leading an Industry ShopTalk discussion on artificial intelligence during the show. She will be moderating the upcoming webinar, Best Practices in Medical Device Engineering, and will be leading an Automation Tour at Advanced Manufacturing Minneapolis. She will also be attending DesignCon and MD&M West 2025.

Daphne has previously participated in meetings of the IoPP Medical Device Packaging Technical Committee and served as a judge in awards programs held by The Tube Council and the Healthcare Compliance Packaging Council. She also received the Bert Moore Excellence in Journalism Award in the AIM Awards in 2012.

Follow Daphne on X at @daphneallen and reach her at [email protected].
