By Mike Klempa, Ethernet and Storage Technical Manager, University of New Hampshire InterOperability Lab

Hyperscale data centers have turned the process of building a data center on its head through sheer demand and purchasing power. Each of these operators buys parts for hundreds of thousands of servers across hundreds of data centers, a volume that allows them to work directly with manufacturers to create increasingly efficient solutions. Rather than figuring out how to fit off-the-shelf hardware into their designs, hyperscale companies are designing the hardware themselves to meet their own demanding specifications and needs. Most designs are open sourced through the Open Compute Project (OCP), which allows them to be handed off to manufacturers to build with confidence, knowing there’s a healthy market for the product. OCP, in conjunction with the University of New Hampshire InterOperability Lab (UNH-IOL), has a project underway to integrate and validate new server form factors alongside PCIe, addressing data rates up to 32 GT/s per lane while also tackling pertinent but previously unmet features.
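For context, 32 GT/s is the PCIe 5.0 per-lane signaling rate. The short Python sketch below is a back-of-the-envelope illustration of how that raw rate translates into usable bandwidth; it accounts only for 128b/130b line encoding and ignores packet and protocol overhead, so real-world throughput is somewhat lower.

```python
# Rough per-lane and x16 link throughput for the PCIe generations
# commonly found behind OCP NIC 3.0 hosts. Only line-encoding overhead
# is modeled here; headers, flow control, and replay traffic are not.

GENERATIONS = {
    # name: (raw signaling rate in GT/s, line-encoding efficiency)
    "PCIe 3.0": (8.0, 128 / 130),
    "PCIe 4.0": (16.0, 128 / 130),
    "PCIe 5.0": (32.0, 128 / 130),
}

for name, (gts, efficiency) in GENERATIONS.items():
    per_lane_gbps = gts * efficiency      # usable gigabits/s per lane, one direction
    per_lane_gBps = per_lane_gbps / 8     # usable gigabytes/s per lane
    x16_gBps = per_lane_gBps * 16         # full x16 link, one direction
    print(f"{name}: {per_lane_gBps:.2f} GB/s per lane, ~{x16_gBps:.0f} GB/s per x16 direction")
```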

A main concern of hyperscale data centers is heat. The effects heat has on silicon and overall server performance, along with the cost of offsetting them, are daunting. The OCP NIC 3.0 connector has dedicated pins for NC-SI sideband communication, which allows out-of-band management to further unify and control a whole data center. The ability to communicate with any NIC in a data center enables traffic control as well as power management. OCP NICs were also designed with larger heat sinks to dissipate more heat. The result should be more efficient hyperscale data centers, because airflow can be determined and applied more precisely based on information acquired over the NC-SI sideband. At a time when everyone wants to be conscious of energy consumption, this is a selling point. The integration of NC-SI communication should make data centers more efficient in terms of thermal control and autonomy.
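To make the sideband concrete, the sketch below shows roughly what an NC-SI control frame looks like, following the DMTF NC-SI control packet format (DSP0222) that OCP NIC sideband management builds on. The header layout and the Get Link Status command type come from that specification, but the instance and channel IDs shown are illustrative, and production BMC firmware would use a platform NC-SI driver stack rather than hand-packed frames.

```python
# A minimal sketch of an NC-SI "Get Link Status" command frame, as a BMC
# might send it to an OCP NIC over the sideband. Field layout follows the
# DMTF NC-SI control packet format (DSP0222); values such as the instance
# and channel IDs are illustrative, and padding to the minimum Ethernet
# frame size is omitted for brevity.
import struct

NCSI_ETHERTYPE = 0x88F8   # EtherType reserved for NC-SI control traffic
GET_LINK_STATUS = 0x0A    # NC-SI control packet type for Get Link Status

def build_get_link_status(instance_id: int, channel_id: int) -> bytes:
    # Ethernet header: NC-SI commands are addressed to the broadcast MAC;
    # the source address (all zeros here) is illustrative.
    eth = struct.pack("!6s6sH", b"\xff" * 6, b"\x00" * 6, NCSI_ETHERTYPE)
    # 16-byte NC-SI control packet header: MC ID, header revision, reserved,
    # instance ID, packet type, channel ID, payload length, 8 reserved bytes.
    header = struct.pack(
        "!BBBBBBH8x",
        0x00,             # MC ID (management controller ID)
        0x01,             # header revision
        0x00,             # reserved
        instance_id,      # IID, incremented per command to match responses
        GET_LINK_STATUS,  # control packet type
        channel_id,       # package ID (upper 3 bits) + internal channel ID
        0x0000,           # payload length: Get Link Status carries no payload
    )
    # A zero checksum field indicates that no checksum was computed.
    return eth + header + struct.pack("!I", 0)

frame = build_get_link_status(instance_id=0x01, channel_id=0x00)
print(frame.hex())
```

In practice a management controller would cycle commands like this across every NIC channel it manages, using the responses to drive decisions such as fan-speed and power policy.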

The OCP NIC 3.0 connector and specification design began years ago and has evolved into a form factor that will likely become ubiquitous in the data center. This is largely attributable to the forward thinking of OCP NIC 3.0 contributors such as Amphenol, Broadcom, Dell, Facebook, HPE, Intel, Lenovo, Mellanox, Microsoft and others, who focused on creating a standard that not only met their needs today but also left a path for future upgrades. The wide variety of contributors, along with the openness of the OCP community, means there is deep expertise behind every aspect of the server and NIC design. The future of servers in the data center is highly tuned for mass deployment, optimization and automation, and it is something to watch as data centers and data rates continue to grow.

About the Author

Michael Klempa is the Ethernet and Storage Technical Manager covering SAS, SATA, PCIe, and Ethernet testing services at the University of New Hampshire InterOperability Laboratory (UNH-IOL). He obtained his Bachelor's and Master's degrees in Electrical Engineering at the University of New Hampshire.