Now that processors and motherboards in server-class PC workstations have become so powerful, they are contending for some of the most demanding real-time embedded applications. But integrating these cost-effective platforms with the open architecture board-level products required for mainstream government and military applications poses significant challenges for system designers. One critical need is adequate cooling of these power-hungry boards to ensure reliable system operation.
Server-class workstations offer the benefits of high-end processing, fast data links, efficient power management, deep and fast memory, low latency, and capable network interfaces -- all in a larger desktop or rack-mount chassis. Powerful CPUs like Intel's multi-core Xeon or Core i7 processors are tightly coupled to advanced chipsets for PCIe expansion slots, multi-channel DDR3 memory, and 1-Gbit Ethernet network connections. These key factors offer specific benefits to real-time applications, where high-rate, sustained data transfers between system components must be guaranteed.
Driven by the extremely large markets for enterprise computing, Web and file servers, and cloud computing, prices for these systems are highly competitive compared to alternative embedded-system architectures like VME, cPCI, and VPX. However, unlike these more traditional chassis, PC servers lack the forced-air or conduction cooling facilities necessary to remove the 40 W to 80 W of power typically consumed by boards containing DSPs, FPGAs, and analog-to-digital and digital-to-analog converters, all essential to many real-time embedded systems.
Instead, server motherboards rely on custom heat-sink assemblies often coupled through heat pipes to active-cooling finned radiators to remove heat from the CPU and chipsets. Hot-swap disk drives mounted in front-panel accessible bays are cooled by mid-chassis fans that pull air in from the front. Power supplies contain one or more thermostatically controlled fans to maintain safe operating temperature limits. Two or more rear-panel case fans help to evacuate hot air from inside the chassis.
But the expansion-card-slot area in the left rear quadrant of a server chassis has no standard provision for cooling. Furthermore, the rear-panel brackets of the slot cards act as barriers, blocking air from the mid-chassis fans that might otherwise flow across the card surfaces. Indeed, this region of the server chassis acts as a closed box, trapping heat and allowing unsafe temperatures for components on the cards.
Cooling by design
One of the first steps in evaluating any cooling solution is accurate measurement of its effectiveness. Motherboards often include extensive facilities for monitoring the temperature of the CPU and other critical components as part of the BIOS and system driver software. Disk drives report temperatures through SATA and RAID controller software utilities.
Likewise, embedded board vendors should incorporate several temperature sensors at key locations around the board to ensure that no hot spot is missed. Also, many high-dissipation devices like DSPs and FPGAs include junction temperature sensors as part of the silicon die. Monitoring these chip sensors is essential to ensure sufficient cooling for maintaining junction temperatures within the manufacturer's limits.
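On a Linux host, board and die temperatures like these are typically exposed through the kernel's hwmon sysfs interface. The sketch below polls those sensor files and flags any reading that approaches a junction-temperature limit. The 85 °C limit and 10 °C warning margin are illustrative assumptions, not values from this article or any particular device datasheet.

```python
# Minimal sketch: poll Linux hwmon temperature sensors and flag readings
# that approach a device's junction-temperature limit.
import glob
import os

JUNCTION_LIMIT_C = 85.0   # assumed limit; use the silicon vendor's datasheet value
WARN_MARGIN_C = 10.0      # assumed margin: warn when within 10 C of the limit

def read_hwmon_temps(root="/sys/class/hwmon"):
    """Return {sensor label: degrees C} for every hwmon temperature input."""
    temps = {}
    for path in glob.glob(os.path.join(root, "hwmon*", "temp*_input")):
        try:
            with open(path) as f:
                millideg = int(f.read().strip())   # hwmon reports millidegrees C
        except (OSError, ValueError):
            continue
        # Prefer the human-readable label file when the driver provides one
        label_path = path.replace("_input", "_label")
        label = path
        if os.path.exists(label_path):
            with open(label_path) as f:
                label = f.read().strip()
        temps[label] = millideg / 1000.0
    return temps

def over_margin(temps, limit_c=JUNCTION_LIMIT_C, margin_c=WARN_MARGIN_C):
    """Return only the sensors whose readings are within margin_c of limit_c."""
    return {name: t for name, t in temps.items() if t >= limit_c - margin_c}
```

For example, `over_margin({"fpga_die": 78.0, "board_ambient": 45.0})` returns `{"fpga_die": 78.0}`, since 78 °C is within 10 °C of the assumed 85 °C limit while the ambient sensor is not.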
As an example, Figure 1 shows the Pentek 71760 quad 200-MHz, 16-bit A/D card with nine temperature sensors, including junction sensors for the Virtex-7 FPGA and