NVMe, or Non-Volatile Memory Express, is a streamlined protocol designed specifically for flash memory. Because the protocol is lightweight, the storage controller can be greatly simplified relative to a legacy SCSI (Small Computer System Interface) style storage controller, reducing latency and improving performance. NVMe also leverages the widely adopted PCIe (Peripheral Component Interconnect Express) interface as the physical transport mechanism. This combination is what makes the NVMe protocol so attractive.
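To make this concrete, the short Python sketch below lists the NVMe controllers a Linux host has enumerated, using the kernel's sysfs interface. It is a minimal illustration rather than a management tool, and it assumes a reasonably recent Linux kernel that exposes the /sys/class/nvme attributes shown.

```python
#!/usr/bin/env python3
"""List NVMe controllers visible to a Linux host via sysfs.

A minimal sketch: it assumes a recent Linux kernel that exposes
/sys/class/nvme/<ctrl>/ with 'model', 'transport', and 'address' attributes.
"""
from pathlib import Path

SYSFS_NVME = Path("/sys/class/nvme")


def read_attr(ctrl: Path, name: str) -> str:
    """Return a sysfs attribute value, or '?' if it is not present."""
    try:
        return (ctrl / name).read_text().strip()
    except OSError:
        return "?"


def main() -> None:
    if not SYSFS_NVME.is_dir():
        print("No NVMe controllers found (or not a Linux host).")
        return
    for ctrl in sorted(SYSFS_NVME.iterdir()):
        model = read_attr(ctrl, "model")
        transport = read_attr(ctrl, "transport")  # 'pcie' locally; 'rdma'/'fc'/'tcp' over a fabric
        address = read_attr(ctrl, "address")      # PCIe bus/device/function, or a fabric address
        print(f"{ctrl.name}: model={model} transport={transport} address={address}")


if __name__ == "__main__":
    main()
```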
PCIe, however, has its own limitations, especially when trying to create large pools of flash storage. While storage array nodes can be connected using external PCIe, it’s not a scalable solution.
NVMe over Fabrics (NVMe-oF) extends the capability of NVMe by allowing multiple storage array nodes to be connected over a fabric. Connecting over a fabric provides many benefits, such as redundant connections, traffic management, and the creation of very large pools of storage.
Of course, creating large pools of storage is not new. Fibre Channel and SAS (Serial Attached SCSI) have done this quite effectively for many years. What is new is the ability to create large pools of flash storage using the streamlined NVMe protocol.
Just as NVMe is used as a protocol over PCIe within a server or storage array, NVMe is used as a protocol over the fabric interface between storage arrays. The primary fabric technologies gaining traction today are RoCE (RDMA over Converged Ethernet) and Fibre Channel. When implementing NVMe-oF solutions, interoperability and conformance to NVMe standards are important to bring these technologies to maturity and market faster. All the components for building an NVMe-oF solution, including drive enclosures, host bus adapters (HBAs), switches, and the internal NVMe storage devices, need to be tested for interoperability.
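To give a sense of what the host side of an NVMe-oF connection looks like, the sketch below wraps the Linux nvme-cli discover and connect commands for a RoCE target. It is a minimal illustration under stated assumptions: the target address, port, and subsystem NQN are hypothetical placeholders, and the host is assumed to have nvme-cli installed with an RDMA-capable NIC configured.

```python
#!/usr/bin/env python3
"""Discover and connect to an NVMe-oF subsystem over RoCE using nvme-cli.

A minimal sketch, not a production tool: the target address, port, and
subsystem NQN below are hypothetical placeholders, and the host is assumed
to have nvme-cli installed and an RDMA-capable (RoCE) NIC configured.
"""
import subprocess

TARGET_ADDR = "192.168.1.100"                   # placeholder fabric target IP
TARGET_PORT = "4420"                            # standard NVMe-oF RDMA service ID
SUBSYS_NQN = "nqn.2014-08.org.example:subsys1"  # hypothetical subsystem NQN


def run(cmd):
    """Run a command, echoing it first so the flow is easy to follow."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def main():
    # Ask the target's discovery controller which subsystems it exposes.
    run(["nvme", "discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", TARGET_PORT])
    # Connect to one subsystem over the RDMA fabric.
    run(["nvme", "connect", "-t", "rdma", "-n", SUBSYS_NQN,
         "-a", TARGET_ADDR, "-s", TARGET_PORT])


if __name__ == "__main__":
    main()
```

Once connected, the remote namespaces appear to the host as local NVMe block devices, which is what allows NVMe-oF to pool flash across arrays while preserving the lightweight NVMe command set.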
There is an ongoing effort to bring some SCSI-like services and management to NVMe while maintaining the relatively lightweight protocol that has enabled these high-performance, low-latency drives. Striking a balance in the deployment of these services will be key to keeping NVMe speedy and lightweight.
NVMe has been very successful in the PC space. The introduction of NVMe-oF will accelerate the adoption of NVMe in the data center.
About the Author
David Woolf is the Senior Engineer, Datacenter Technologies at the University of New Hampshire InterOperability Laboratory (UNH-IOL). He has developed dozens of industry-reviewed test procedures and implementations as part of the team that has grown the UNH-IOL into a world-class center for interoperability and conformance testing. David has also helped to organize numerous industry interoperability test events, both at the UNH-IOL facility and at off-site locations. He has been an active participant in a number of industry forums and committees addressing conformance and interoperability, including serving on the SAS Plugfest Committee and the SATA-IO Logo Workgroup, co-chairing the MIPI Alliance Testing Workgroup, and coordinating the NVMe Integrators List and Plugfests.