
What is NVMe™ and why is it important? A Technical Guide

NVMe (Non-Volatile Memory Express) is a new protocol for accessing high-speed storage media that brings many advantages compared to legacy protocols. But what is NVMe and why is it important for data-driven businesses?

As businesses contend with the perpetual growth of data, they need to rethink how data is captured, preserved, accessed, and transformed. The performance, economics, and endurance of data at scale are paramount. NVMe is having a great impact on businesses and what they can do with data, particularly Fast Data for real-time analytics and emerging technologies.

In this blog post I’ll explain what NVMe is and share a deep technical dive into how the storage architecture works. Upcoming blog posts will cover the features and benefits it brings to businesses, use cases where it’s being deployed today, and how customers take advantage of Western Digital’s NVMe SSDs, platforms, and fully featured flash storage systems for everything from IoT Edge applications to personal gaming.

My work has been associated with data storage protocols, in some way or another, for more than a decade. I have worked on enterprise PCIe SSD product management and long-term storage technology strategy, watching the evolution of storage devices up close. I am incredibly excited about the transformation NVMe is bringing to data centers, and about Western Digital’s unique capability to deliver innovation up and down the stack. NVMe is opening a new world of possibilities by letting you do more with data! Here’s why:

The Evolution of NVMe

The first flash-based SSDs leveraged legacy SATA/SAS physical interfaces, protocols, and form factors to minimize changes in existing hard disk drive (HDD)-based enterprise server/storage systems. However, none of these interfaces and protocols were designed for high-speed storage media (i.e. NAND and/or persistent memory). Because of its interface speed, its ability to keep pace with the new storage media, and its proximity to the CPU, PCI Express (PCIe) was the next logical storage interface.

PCIe slots connect directly to the CPU, providing memory-like access, and can run a very efficient software stack. However, early PCIe-interface SSDs lacked industry standards and enterprise features. PCIe SSDs leveraged proprietary firmware, which was particularly challenging for system scaling for various reasons, including: a) running and maintaining device firmware, b) firmware/device incompatibilities with different system software, c) not always making the best use of available lanes and CPU proximity, and d) a lack of value-add features for enterprise workloads. The NVMe specifications emerged primarily because of these challenges.

What is NVMe?

NVMe is a high-performance, NUMA (Non-Uniform Memory Access)-optimized, and highly scalable storage protocol that connects the host to the memory subsystem. The protocol is relatively new, feature-rich, and designed from the ground up for non-volatile memory media (NAND and persistent memory) connected directly to the CPU via the PCIe interface (see diagram #1). The protocol is built on high-speed PCIe lanes; a PCIe Gen 3.0 link can offer more than twice the transfer speed of the SATA interface.

Diagram #1 – CPU connected with SSDs via PCIe interface vs. I/O Controller and HBA

The NVMe Value Proposition

The NVMe protocol capitalizes on parallel, low latency data paths to the underlying media, similar to high performance processor architectures. This offers significantly higher performance and lower latencies compared to legacy SAS and SATA protocols. This not only accelerates existing applications that require high performance, but it also enables new applications and capabilities for real-time workload processing in the data center and at the Edge.

Conventional protocols consume many CPU cycles to make data available to applications. These wasted compute cycles cost businesses real money. IT infrastructure budgets are not growing at the pace of data and are under tremendous pressure to maximize returns on infrastructure – both in storage and compute. Because NVMe can handle rigorous application workloads with a smaller infrastructure footprint, organizations can reduce total cost of ownership and accelerate top line business growth.

NVMe Architecture – Understanding I/O Queues

Let’s take a deeper dive into the NVMe architecture and how it achieves high performance and low latency. NVMe can support multiple I/O queues, up to 64K, with each queue holding up to 64K entries. Legacy SAS and SATA support only a single queue, with 254 and 32 entries respectively. The NVMe host software can create queues, up to the maximum allowed by the NVMe controller, according to the system configuration and expected workload. NVMe supports scatter/gather I/Os, minimizing CPU overhead on data transfers, and even allows queue priority to be changed based on workload requirements.
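An NVMe submission queue is, at heart, a circular buffer of fixed-size command slots. The following is a minimal, hypothetical C sketch of that idea; the structure names, fields, and the 16-entry size are illustrative assumptions, not definitions from the NVMe specification:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, simplified model of one NVMe submission queue: a
 * circular buffer of command slots. Real queues may hold up to 64K
 * entries; 16 is used here only to keep the example small. */
#define SQ_ENTRIES 16

typedef struct {
    uint16_t opcode;   /* e.g. read or write */
    uint64_t lba;      /* logical block address */
} nvme_cmd;

typedef struct {
    nvme_cmd slots[SQ_ENTRIES];
    uint16_t head;     /* advanced as the controller consumes commands */
    uint16_t tail;     /* advanced as the host produces commands */
} nvme_sq;

/* Host side: enqueue a command if the ring is not full. */
static int sq_submit(nvme_sq *q, nvme_cmd c) {
    uint16_t next = (uint16_t)((q->tail + 1) % SQ_ENTRIES);
    if (next == q->head) return -1;  /* queue full */
    q->slots[q->tail] = c;
    q->tail = next;                  /* the host then writes this tail
                                        value to the doorbell register */
    return 0;
}
```

In a real system the queue lives in host memory shared with the device, and the updated tail pointer is what the host writes to the doorbell register, as described in the next section.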

The picture below (diagram #2) is a very simplified view of the communication between the Host and the NVMe controller. This architecture allows applications to start, execute, and finish multiple I/O requests simultaneously and use the underlying media in the most efficient way to maximize speed and minimize latencies.

How Do NVMe Commands Work?

The way this works is that the host writes commands into the I/O Submission Queues and rings the doorbell registers (the commands-ready signal); the NVMe controller then fetches the commands from the Submission Queues, executes them, writes entries to the I/O Completion Queues, and raises an interrupt to the host. The host processes the Completion Queue entries and updates the doorbell register (the commands-completed signal). See diagram #2. This translates into significantly lower overhead compared to the SAS and SATA protocols.
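The four-step flow above can be sketched as a toy simulation. This is a hypothetical model in C, with plain integers standing in for memory-mapped doorbell registers and queue state; none of the names come from a real driver or the specification:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model of the NVMe command flow, with integers in
 * place of real MMIO registers and DMA queues. */
typedef struct {
    uint32_t sq_tail_doorbell;  /* host writes: new commands are ready */
    uint32_t cq_head_doorbell;  /* host writes: completions consumed   */
    uint32_t completed;         /* completion entries posted by device */
    int      interrupt_pending; /* raised by the controller            */
} nvme_regs;

/* Step 1: host places commands in the SQ and rings the doorbell. */
static void host_submit(nvme_regs *r, uint32_t ncmds) {
    r->sq_tail_doorbell += ncmds;
}

/* Steps 2-3: controller fetches and executes the commands, posts
 * completion entries, and raises an interrupt. */
static void controller_run(nvme_regs *r) {
    while (r->completed < r->sq_tail_doorbell)
        r->completed++;             /* one completion per command */
    r->interrupt_pending = 1;
}

/* Step 4: host reaps the completion queue and updates its doorbell. */
static uint32_t host_complete(nvme_regs *r) {
    uint32_t reaped = r->completed - r->cq_head_doorbell;
    r->cq_head_doorbell = r->completed;
    r->interrupt_pending = 0;
    return reaped;
}
```

The point of the sketch is the division of labor: the host only writes doorbells and reads completions, while the controller does the fetching and execution independently, which is what keeps per-command CPU overhead low.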

Diagram #2 – Simplified NVMe Architecture View

Why NVMe Gets the Most Performance from Multicore Processors

As I mentioned above, NVMe is a NUMA-optimized protocol. This allows multiple CPU cores to share the ownership of queues, their priority, as well as the arbitration mechanisms and atomicity of commands. As such, NVMe SSDs can scatter/gather commands and process them out of order to offer far higher IOPS and lower data latencies.
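One way to picture this is that each CPU core can own its own queue pair, so cores submit I/O without contending for a shared lock. The sketch below is a hypothetical illustration of that idea; the 1:1 core-to-queue mapping and all names are assumptions for the example, not taken from any real driver:

```c
#include <assert.h>

/* Hypothetical sketch of per-core queue ownership: each core submits
 * only to its own queue pair, so no lock is needed on the I/O path.
 * NUM_CORES and NUM_QPAIRS are illustrative values. */
#define NUM_CORES  4
#define NUM_QPAIRS 4

static unsigned submitted[NUM_QPAIRS];  /* commands per queue pair */

/* 1:1 mapping here; cores would share queues if they outnumbered them. */
static unsigned qpair_for_core(unsigned core) {
    return core % NUM_QPAIRS;
}

static void submit_io(unsigned core) {
    /* No locking: this queue has exactly one producer (its core). */
    submitted[qpair_for_core(core)]++;
}
```

Because each queue has a single producer, the cores never serialize on a shared queue the way they must with a single-queue SATA or SAS device.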

Why is NVMe Important for your Business?

Enterprise systems are generally data starved. The exponential rise in data and its evolving demands create new challenges. Even high-performance SSDs connected to legacy storage protocols can experience lower performance, higher latencies, and poor quality of service when confronted with the new challenges of Fast Data. NVMe’s unique features help avoid these bottlenecks, for everything from traditional scale-up database applications to emerging Edge computing architectures, and scale to meet new data demands.

Designed for high-performance, non-volatile storage media, NVMe stands out in highly demanding and compute-intensive enterprise, cloud, and edge data ecosystems. Furthermore, its new and unique capabilities (which I will cover in my next blog) include multiple queues, combined I/Os, ownership and prioritization processes, multipath and virtualized I/O, asynchronous device updates, and many other enterprise features that simply have not existed before. As we help businesses transform themselves, NVMe lets you do more with data.

Read my next blog on the unique features of NVMe.


Rohit Gupta
Rohit has more than 10 years of compute and storage industry experience in various capacities of increasing cross-functional responsibility. At Western Digital, he is responsible for Enterprise Segment Management – planning and executing strategies to run the enterprise OEM business leveraging our product portfolio. Before the SanDisk acquisition, he led Technology Strategy at Enterprise Storage Solutions and delivered the long-term technology roadmap for NVMe-oF, SDDC, and Load/Store Memory. Prior to that he was PCIe SSD Product Line Manager at HGST and helped deliver the industry's first 4.8TB HHHL PCIe Add-In Card. He worked with Freescale Semiconductor on highly compute-intensive SoCs for communication networks before switching to the data storage industry. Rohit earned his engineering degrees from the Indian Institute of Technology, Kanpur and the National University of Singapore, and an MBA from the Marshall School of Business, University of Southern California.