Western Digital’s new OpenFlex architecture promises breakthrough levels of scalability, efficiency and performance. Here’s a look at some of the technologies enabling it, including NVMe-over-Fabric and the Kingfish™ open API.
Two weeks ago, at Flash Memory Summit, Western Digital unveiled the future of data infrastructure – the new OpenFlex architecture and product line as well as proposed open standards to address the ever-increasing demands of high-scale data centers.
Our OpenFlex framework is the foundation for open, Software Composable Infrastructure (SCI), a framework in which storage, compute and networking resources can scale independently. Software is used to orchestrate these resource pools into logical application servers, on the fly.
By using resources more effectively, SCI can satisfy the needs of complex and dynamic applications and data workflows in a far more cost-efficient and agile manner than traditional hyperconverged architectures, improving efficiency by up to 40%.
Three key components enable SCI. The first is software to dynamically provision and manage the resources; the second is a network protocol to disaggregate the resources and make them shareable among multiple applications and servers; the third is the physical hardware of fabric-attached devices. In this blog I want to take a closer look at these three components and the key technologies behind OpenFlex: NVMe-over-Fabric and the Kingfish API.
NVMe + Fabric = Freeing Data From the Server
A recent change that is storming the data center is the adoption of NVMe-based solutions. Unlike the SAS and SATA protocols, which were designed for disk drives, NVMe was designed from the ground up for persistent flash memory technologies and the massively parallel transfer capabilities of SSDs. As such, it delivers significant advantages including extreme performance, improved queuing, low latency and reduced I/O stack overhead.
On the scalability front, however, PCIe-based NVMe devices faced a challenge: because PCIe uses a point-to-point topology, one server couldn’t see a PCIe NVMe device inside another server.
The advent of networked storage, particularly SAN and NAS, meant storage could be shared by multiple servers for greater efficiency and flexibility. However, with NVMe, there was a return to the previous DAS (direct attached storage) approach. NVMe devices delivered superior performance but remained “siloed” within an individual server.
Until the recent introduction of NVMe-over-Fabric (NVMf).
NVMf is a networked storage protocol that allows NVMe flash storage to be disaggregated from the server and made widely available to concurrent applications and multiple compute resources. There is virtually no limit to the number of servers or NVMf storage devices that can be shared. It promises to deliver the lowest end-to-end latency from application to storage while delivering agility and flexibility by sharing resources throughout the enterprise.
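To make the sharing model concrete, here is a minimal Python sketch of the idea. It is purely illustrative: the `FabricPool` class and all names in it are hypothetical, not part of any real NVMf stack. It contrasts DAS, where a namespace is captive to one server, with a fabric-attached pool that multiple hosts can attach to concurrently.

```python
# Illustrative toy model of storage disaggregation -- not a real NVMf stack.
# With DAS, an NVMe namespace is visible to only one server; over a fabric,
# a shared pool of namespaces can be attached by many hosts concurrently.

class FabricPool:
    """A pool of fabric-attached NVMe namespaces (hypothetical model)."""

    def __init__(self, namespaces):
        self.namespaces = set(namespaces)
        self.attachments = {}  # namespace -> set of attached host names

    def attach(self, host, namespace):
        """Attach a host to a namespace; many hosts may share one namespace."""
        if namespace not in self.namespaces:
            raise ValueError(f"unknown namespace: {namespace}")
        self.attachments.setdefault(namespace, set()).add(host)

    def hosts_for(self, namespace):
        """Return the set of hosts currently attached to a namespace."""
        return self.attachments.get(namespace, set())


pool = FabricPool(["nvme-ns1", "nvme-ns2"])
# Unlike DAS, two different servers can attach the same namespace:
pool.attach("app-server-a", "nvme-ns1")
pool.attach("app-server-b", "nvme-ns1")
print(sorted(pool.hosts_for("nvme-ns1")))  # ['app-server-a', 'app-server-b']
```

The point of the sketch is the shape of the relationship: under NVMf the namespace-to-host mapping is many-to-many rather than fixed at install time.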
NVMf is a key enabler of Western Digital’s SCI architecture. Because networks can be slow, narrow in scope or too expensive, the most cost-effective bandwidth has historically been inside the server, which is why so much storage resides locally within it. PCIe-based storage devices are helping to drive this migration because the interface protocol allows more data lanes to be added, which in turn delivers fast I/O performance with very low latency built into every server. And since almost every device supports a PCIe interface, there are no drivers to install, making it an effective interface for composable infrastructures.
Our vision for the future of data infrastructure is open, scalable, disaggregated, and extensible.
The NVMf industry standard supports multiple transports including RDMA (NVMe/RoCE, a released open standard), FC (NVMe/FC, a released open standard) and TCP (NVMe/TCP, standardization underway). Our initial OpenFlex devices support 100Gb Ethernet for NVMe/RoCE.
In addition to enabling NAND flash media access over NVMf, Western Digital has also enabled disks to be accessed via NVMf for the first time so that all data center storage can be addressed in the same way. This opens new opportunities to efficiently scale using a modular set of NVMf storage devices.
OpenFlex F3000 Series Fabric Device – for high-performance applications that demand real-time responsiveness, such as AI and IoT. The OpenFlex F3000 delivers low-latency NVMe flash performance over two 50Gb Ethernet ports and will be available in capacities up to 61.4TB.
OpenFlex E3000 Fabric Enclosure – This 3U10 enclosure houses up to ten hot swappable F3000 fabric devices for a potential combined capacity of up to 614TB.
OpenFlex D3000 Series Fabric Device – The self-contained 1U device will offer up to 168TB of disk capacity for Big Data applications, such as machine learning and data archiving.
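As a quick sanity check, the enclosure figure above follows directly from the per-device figure: ten F3000 devices at the maximum 61.4TB each fill a 3U10 E3000.

```python
# Sanity check on the capacity figures quoted in the post.
f3000_capacity_tb = 61.4   # max capacity per F3000 fabric device
devices_per_e3000 = 10     # the 3U10 E3000 enclosure holds up to ten devices

e3000_capacity_tb = round(f3000_capacity_tb * devices_per_e3000, 1)
print(e3000_capacity_tb)   # 614.0 -- matches the E3000's quoted 614TB
```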
Software Orchestration – Kingfish API
The second key component of SCI is the software orchestration layer. Here we took a different approach. As part of the OpenFlex architecture we announced the Kingfish API – a new, open API to dynamically provision and manage SCI resources, which we will make publicly available.
The Kingfish API enables the flash and disk pools to be presented as software composable infrastructure that can be quickly and easily orchestrated into logical application servers. It is a RESTful API that builds upon existing industry standards such as Redfish® and Swordfish™, utilizing the best features of those standards as well as practices from other captive management protocols used in the industry.
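To illustrate what a RESTful composition call might look like, here is a short hedged sketch. The endpoint path and payload fields are invented for illustration, loosely modeled on Redfish conventions; they are not the published Kingfish API. The sketch only builds the request, it does not send anything.

```python
import json

# Hypothetical sketch of a Kingfish-style RESTful composition request.
# The path and field names below are illustrative guesses, not the real API.

def compose_request(name, flash_tb, disk_tb):
    """Build (method, path, body) for composing a logical application server
    from pooled flash and disk resources."""
    body = {
        "Name": name,
        "Resources": [
            {"Type": "Flash", "CapacityTB": flash_tb},
            {"Type": "Disk", "CapacityTB": disk_tb},
        ],
    }
    return ("POST", "/kingfish/v1/ComposedServers", json.dumps(body))

method, path, body = compose_request("analytics-01", flash_tb=12, disk_tb=84)
print(method, path)  # POST /kingfish/v1/ComposedServers
```

The design idea this mirrors is the one in the paragraph above: provisioning is expressed as declarative REST resources, so an orchestrator can compose and recompose logical servers without touching physical cabling.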
Western Digital is leveraging a broad ecosystem to drive Kingfish as an open standard for SCI.
With a foundation that is open, scalable, disaggregated, and extensible, the OpenFlex architecture is a huge step toward data-centric architectures. OpenFlex SCI delivers optimized levels of resources to applications while providing organizations with greater agility, efficiency, and performance predictability at scale, so you can harness the power of data.
As used for storage capacity, one petabyte (PB) = one quadrillion bytes, one terabyte (TB) = one trillion bytes and one gigabyte (GB) = one billion bytes. Total accessible capacity varies depending on operating environment.
Certain blog and other posts on this website may contain forward-looking statements, including statements relating to expectations for our product portfolio, the market for our products, product development efforts, and the capacities, capabilities and applications of our products. These forward-looking statements are subject to risks and uncertainties that could cause actual results to differ materially from those expressed in the forward-looking statements, including development challenges or delays, supply chain and logistics issues, changes in markets, demand, global economic conditions and other risks and uncertainties listed in Western Digital Corporation’s most recent quarterly and annual reports filed with the Securities and Exchange Commission, to which your attention is directed. Readers are cautioned not to place undue reliance on these forward-looking statements and we undertake no obligation to update these forward-looking statements to reflect subsequent events or circumstances.
Dave has over 20 years of experience in the enterprise storage, computing, and software business. He has a deep understanding of storage and ISV ecosystems required for business-critical IT infrastructures and has extensive experience in complete life-cycle software product management and product marketing at companies such as NetApp, Xilinx, and HP. At Western Digital, he is Director, Flash Storage Platform Marketing – a key, foundational element in the emerging Software-Defined Storage (SDS) environment.