Five years ago, Artificial intelligence (AI) implementation was pretty rare. But today, the business world is awash with machine learning and AI experimentation. Data has become a vital part of almost every business operation and everyone is looking for ways to harvest data for insight and business results. At the heart of this transformation are data centers. New infrastructure is being devoted to AI projects, and the surge of data is demanding more intelligent operations and management. As we look at the rise of AI in the data center, here are some defining trends:
1) The Rise of Customized AI Chips
AI demands enormous computational power, and using general-purpose chips would be impossible, and outrageously expensive to scale. AI chips are specialized silicon chips designed to perform complex mathematical and computational tasks in a more efficient way. With most AI use cases today being very narrow, AI chips can be trained for a specific task such as pattern recognition, natural language processing, network security, robotics, and/or automation.
As AI continues to mature, capabilities will not only expand but the cost of implementation will also go down. AI will be taking on more use cases and be embedded in more devices. This trend will advance even further with RISC-V and other open-source technologies lowering the barriers to purpose-built “building blocks” that can focus on efficiency, performance, and scalability like never before.
2) The Move Towards “Auto” Everything
For IT teams the quest for greater efficiency is never-ending. To keep afloat amid the explosion of data and the complexity of diverse workloads, automation is essential for success.
On one hand, automation is a way of relieving the pressure from IT staff and freeing their time to more important projects. But automation is also key in helping AI take on more functions in the data center by removing tasks that rely on close human interaction. In the words of our Big Data Analytics Platform senior director: “Touch it twice? Automate it.”
Automation is what will help data centers make the journey to AI and move from being reactive to preventative, and ultimately, predictive.
3) We All Need Standards
More and more devices are going to see embedded intelligence. And while we often view the flow of data in a linear path between the endpoint device to the edge and the core/cloud, the reality is that we are moving towards an era with intelligence to everything. Different devices operate mutually with other devices in an ecosystem. These devices need to be able to “speak” to one another. An easy example is autonomous vehicles that will need a common “language” to communicate regardless of the car manufacturer and beyond vehicles themselves. The safety of autonomous driving is dependent on an ecosystem of smart traffic signals, road side units, pedestrian alerts, etc. Standardization and interoperability are key, and this will make an AI/ML ecosystem easier to integrate and deploy at the edge.
4) The Data Scientist Turns Virtual
There are simply not enough data scientists in the world to support the growth of machine learning workloads. It may sound like an oxymoron, but AI can help to manage AI.
By expanding existing tools and building a self-service platform, AI technology can be made accessible to more people in the business. Whether it’s software engineers, subject matter experts or even doctors, given a few homegrown skills and support, more stakeholders should be able to generate predictive, AI-based analysis. To some extent, anyone in an organization should be able to fulfill the baseline role of the data scientist.
At Western Digital, we’ve built the Big Data Analytics Platform as a key enabling platform that can host a multitude of data and analytics environments. The open platform approach enables data scientists to create repeatable and scalable solutions, and more stakeholders can take advantage of its self-service architecture.
5) AI Permeates Data Center Operations
As data grows and applications become more complex and diverse, the data center is desperate for efficiency improvement. Some go as far to say that without AI, many data centers will not be economically or operationally viable.  Some of the ways AI tools will assist is by improving resource and service provisioning, cooling and power optimization, and by detecting more cyber threats. Like with most things AI, the goal is to find the optimal workflow with humans, automating intelligence where needed, and driving business strategy through IT adeptness. The most successful data centers will strategically deploy and pair human and AI capabilities across most operations, and deploy smarter, highly efficient and flexible infrastructure.
AI in the Data Center
AI in the data center is not a fantasy of human-like robots. It’s a viable technology penetrating every market and every vertical, improving processes, unearthing insights, and powering many apps and features we use today. For us, too, AI has become business-critical. It plays a vital role in global manufacturing processes across 17 factories. These manufacturing applications target diverse areas such as a Digital Twin to simulate fabrication processes, image analysis for rapid fault detection, predictive maintenance to increase equipment uptime, accelerating test time through predictive adaptive test, alongside many other applications.
AI is a business discipline. It often requires experimentation beyond your comfort zone, but it is a cornerstone technology that should be part of futureproofing you business strategy and the data center.
- The Industry 4.0 Transition – Architecting for AI, ML and IoT
- How to Leverage NVMe™ for AI & ML Workloads
- 5 Reasons Enterprises are Finding it Hard to Adopt AI
JuneAn Lanigan is Global Head of Enterprise Data Management at Western Digital, with over 25 years leading business and IT teams.
As 5G rolls out across the world, it will offer the next generation of mobile connectivity experience powered by a dramatic increase in speed and low latency. It will also open up possibilities for new mobile, gaming, health, edge and industrial applications – with a lot of hype and expectation. While we tend to associate 5G’s impact on our own personal devices, like phones and tablets, one of the areas where we’ll likely see the most significant impact will be automotive.
As transportation marches towards autonomous vehicles (we’re not quite there, but we’re getting closer), connected cars are beginning to look more and more like a small data center. These data hubs will be continuously pushing, and receiving, different kinds of data through the 5G network, with automotive safety as the primary goal. Here are three examples:
1. Next-Generation Maps
We’re already used to our maps being updated with live traffic conditions, but maps will evolve to absorb and deliver much more information in real-time. As we look into the near future, vehicles will amass abundant sensors and cameras, and they will be constantly surveying the road and their surroundings. When a car will recognize something different in the environment, such as new roads, construction, a change in the number of lanes or lane location, it will send that information to the cloud. There, a central database will be updated, and the information will be pushed out to other vehicles, and potentially others in the road network (like pedestrians and the roadside infrastructure) in almost real-time.
2. C-V2X Alerts
V2X (Vehicle-to-Everything) communication is the technology that allows cars to communicate with the different parts of the traffic system that may affect the vehicle, and vice versa. This includes vehicles, infrastructure and pedestrians.
Until 5G infrastructures have sufficient coverage, and are tested to meet automotive standards, V2X will be rolled out as DSRC (Dedicated Short Range Communication). DSRC allows vehicles to communicate with low latency (<100ms) directly with other vehicles or roadside units (RSU) with a line of site range of up to 300m. While this is already a huge evolution from where we are today, with 5G, we’ll see C-V2X (Cellular V2X) that will take this to a whole new level.
5G technology will allow cars to use the direct PC5 interface (where a device can communicate with another device over a direct channel) as well as the network Uu interface, which uses cell towers for radio access network. We’ll see even lower latency with 5G, the ability to communicate up to 600m for PC5 direct communication, and up to 2km with Uu. This means V2X will be able to include far more moving parts in the vehicle’s view, and more time (through larger distance) to react to road conditions ahead such as accidents, lane closures, icy or wet roads, and debris.
Furthermore, the PC5 interface also allows the vehicle to not only communicate with other cars and RSUs, but it will also have the potential to communicate with pedestrians and cyclists via their cell phones and potentially other devices. Why is this important? It could prevent accidents resulting from blind corners or people entering the street between cars. Both the vehicle and pedestrian/cyclist would be notified and can take appropriate action.
3. Software Updates and Services
The third safety feature that will be enhanced by 5G is over-the-air (OTA) software updates. While this exists already today on some vehicles, it will become standard on all vehicles in the future. As we look at the cars of the future, the number and type of applications, sensors and cameras will continue to increase, and they will all rely on interdependent services. All these applications will need to regularly be maintained and updated.
The algorithms behind autonomous and smart vehicles will continue to learn and mature as they capture data and push it out to cloud data centers for analysis. As these software elements get smarter, they will then be updated and dispatched over 5G through the vehicles OTA module as part of the telematics gateway. These type of updates will become as common, and likely as frequent, as we have been accustomed to with our smart phone updates.
Enhancing Automotive Safety with Storage
How does data storage play a role in all of this? For one thing, maps, V2X security keys, application software, data logging, OTA buffering and the millions of lines of software code that already exist in vehicles are, and will continue to be, stored in the vehicle on NAND flash-based products.
As cars become more data-rich, we need to ensure data will move optimally and reliability between the different auto systems at all times. That’s why we’re focused on advancing NAND-based storage and features to further alleviate performance bottlenecks, increase capacity to handle more on-board data and deliver higher reliability in automotive use cases (such as vibration, heat, cold weather, etc.).
With V2X going far beyond just the vehicle, storage will also be vital in RSUs, smart city devices such as traffic signals and the edge and cloud data centers that gather and process this ecosystem of data. But how do all these pieces work together? Last month joined the Automotive Edge Computing Consortium (AECC) to help drive open distributed computing infrastructure for connected vehicles together with mobile network operators, automotive manufacturers, and communication leaders.
While 5G will enable massive automotive data to move quickly with low latency, our job is to build the optimal foundation for data to be captured, preserved, accessed and transformed, so drivers and passengers will be safer on the road.
Webinar: Storage Design for Connected and Autonomous Cars
Understanding the right data storage solution is key to enabling the connected and autonomous cars of today and tomorrow.
If you want to learn more about design considerations the latest storage solution trends, join us for this upcoming webinar by Automotive World.
Russell is deeply involved in developing/promoting solutions to expand use of NAND flash as automotive industry moves to autonomous drive.
NVMe™ is the protocol of today, but it’s also the foundation for next-gen IT infrastructure. End-to-end NVMe is likely in your future, but when and how do you take the next step? New solutions on the market point to why it’s time to take advantage of NVMe and reap the benefits of NVMe-oF™ today – here are five reasons.
1. NVMe Lets You Take Off the Ski Boots and Sprint in Running Spikes
NVMe in itself is a performance gamechanger. Flash storage has always been faster than disk drives, but since it has been deployed in mainstream data center environments it has always been held back by the interface. SAS and SATA were a good starting point because it enabled SSDs to look like the disks they were replacing. But those interfaces were designed for disk and simply can’t handle the performance capability of flash storage. It’s a bit like asking an Olympic sprinter to wear ski-boots.
The introduction of the NVMe interface for SSDs was the next step. It is designed for flash with increased bandwidth, efficiency and parallelism that can exploit the inherent low latency of NAND.
NVMe SSDs are undergoing continuous improvements, and the standard is regularly enhanced with new features and specifications (such as ZNS). Our latest Ultrastar® DC SN840 is our third-generation solution with a vertically integrated in-house NVMe controller, firmware and 96-layer 3D TLC NAND technology. With low latency and dual-port high availability, it’s a future-ready solution that lets you power new, data-intensive applications.
2. NVMe-oF: Extending NVMe Performance Outside the Server
In a data center environment, NVMe SSDs are doing a great job accelerating server workloads, but there’s a problem. To benefit from the speed of NVMe, the SSDs need to sit on the PCIe bus, close to the processors or locally attached. The PCIe bus cannot be extended outside the server and so while each server can be individually accelerated, that leads to mini silos of SSDs that cannot be easily shared between hosts.
Enter NVMe-over-Fabrics, or NVMe-oF, the next step in data center infrastructure improvement. NVMe-oF allows NVMe-based storage to be shared between hosts at comparable performance to locally-attached SSDs.
So how fast is it? Today we announced not only our latest enterprise-class NVMe SSD the Ultrastar® DC SN840 but also an all NVMe JBOF (Just a Bunch of Flash) the OpenFlex™ Data24 NVMe-oF Storage Platform. Our SAS-based Ultrastar 2U24 Flash Storage Platform was always considered very fast with 4.7M IOPS, 25GB/s bandwidth and sub-millisecond latency. While we don’t yet have final performance numbers for the OpenFlex Data24, the early projections are outstanding. Based on our testing in the lab so far, we should see performance up to 13.2M IOPS, 70.7GB/s bandwidth and as little as 42 microseconds of write latency. That is the power of NVMe applied to storage that can be shared by multiple hosts.
3. Faster or Less Expensive – Pick Two
We’re used to more performance costing us extra. But with OpenFlex Data24, that idea is history.NVMe-oF supports significantly higher performance-intensive workloads at a lower price. In fact, we are projecting around 17% savings compared to a SAS JBOF. This stems from our vertical integration and silicon-to-systems mindset. I mentioned the new SSDs are vertically integrated from raw NAND to controller and firmware, but the entire OpenFlex Data24 is an in-house design, including the RapidFlex™ NVMe-oF controllers and ASICS.
4. Advanced Connectivity Options Make Adoption Easier
A JBOF is a fairly common approach to sharing storage, where multiple servers can share the resource. Storage can be allocated and reallocated according to the needs of the applications. And, some ways to do this are easier than others.
The OpenFlex Data24 is 2U enclosure that holds up to 24 DC SN840 NVMe SSDs for a raw capacity up to 368TB. It also contains up to six RapidFlex NVMe-oF controller cards. These cards offer several advantages for connectivity, including ultra-low latency, 100Gb Ethernet (screaming performance that you can likely already leverage today), and, low-power.
Up to six hosts can be directly attached without a switch, but the connectivity increases dramatically with a switched fabric for greater flexibility and maximum utilization.
5. Your Stepping Stone to Composability
We also founded the Open Composability Compatibility Lab (OCCL) with other industry leaders to promote the adoption and interoperability across the fabric-attached device eco-system. You too can participate, just go to opencomposable.com.
From NVMe to NVMe-oF: It’s Time
The OpenFlex Data24, its NVMe-oF architecture, and the Ultrastar DC SN840 NVMe SSDs, offer great reasons for you to consider your next tech refresh and improve costs and efficiency by sharing SSD investment across servers. It’s time to kick off the ski boots and lace up your running shoes!
Considering NVMe-oF? Resource that Can Help
- Webinar: Learn all about the OpenFlex™ Data24 NVMe-oF™ Storage Platform
- Blog: Enabling Business with NVMe-oF™
- Survey Says End-to-End NVMe™ is in Your Future
- Get to know our portfolio of NVMe SSDs
- Read the OpenFlex Data24 Product Brief
- Understand the benefits of RapidFlex A1000
 One terabyte (TB) is equal to one trillion bytes. Actual user capacity may be less due to operating environment.
Steve specializes in Data Center Systems marketing at Western Digital.
Continue checking here for updates regarding our WD Red NAS Drives
June 23, 2020
WD Red for NAS – Now More Choices for Customers
We want to thank our customers and partners for your feedback on our WD Red family of network attached storage (NAS) hard drives. Your real-world insights shared through in-depth reviews, blogs, forums and from our trusted partners are directly contributing to our work on an expansion of models and clarity of choice for customers. Please continue sharing your experiences and expectations of our products, as this input influences our development.
Due to the fact that the range of use cases for NAS has become increasingly diverse, we are now making it easier for users to match the right drive with their applications and workloads – from moderate small office/home office (SOHO) workloads to intensive small- and medium-business (SMB) use, as well as more demanding environments.
The WD Red Family
Here’s a breakdown of our products for NAS use-cases:
• Our current device-managed shingled magnetic recording (DMSMR) (2TB, 3TB, 4TB, and 6TB) WD Red series will be the choice for the majority of NAS owners whose demands are lighter SOHO workloads.
• WD Red Plus is the new name for conventional magnetic recording (CMR)-based NAS drives in the WD Red family, including all capacities from 1TB to 14TB. These will be the choice for those whose applications require more write-intensive SMB workloads such as ZFS. WD Red Plus in 2TB, 3TB, 4TB and 6TB capacities will be available soon.
• Our WD Red Pro (CMR 2TB to 14TB) series for the highest-intensity usage remains the same.
The Right Drive for SOHO Users
From our experience, we see most SOHO users rely on their systems for office file sharing, home backup or content archiving. Throughput and idle time are key considerations in these types of SOHO workloads. As explained in our post on DMSMR, as well as in media reviews, these drives prefer idle time to perform background operations, without which the drive may take longer to complete a command. Our use-case analysis shows that SOHO workloads typically are based on short periods of access to the drives. This results in extremely low average throughput (compared with the drive’s available throughput) and provides plenty of idle time for the DMSMR drive to perform the necessary background operations, making it an ideal fit for this application.
From a sequential performance perspective, our tests confirm that our WD Red DMSMR drives are on par with our existing CMR drives. Third-party testing also validates the performance of WD Red DMSMR drives compared with other drives under general hard drive benchmarks used in an NAS environment.
In a RAID rebuild scenario using a typical Synology or QNAP (non-ZFS) platform, WD Red DMSMR drives perform as well as CMR drives or show slightly longer RAID rebuild times, depending on the condition of the drive and extent of rebuild required. While test results can vary from one methodology and test bed to the next, we acknowledge that in some cases DMSMR, for the idle-time reasons covered earlier, can result in slower rebuild times.
For Users with Workload-intensive Applications and ZFS: CMR
The explosion of data seen today has spawned a spectrum of NAS uses cases, as well as increasingly demanding applications. One of those includes use of ZFS, an enterprise-grade file system. The increased amount of sustained random writes during ZFS resilvering (similar to a rebuild) causes a lack of idle time for DMSMR drives to execute internal data management tasks, resulting in significantly lower performance reported by users. While we work with iXsystems on DMSMR solutions for lower-workload ZFS customers, we currently recommend our CMR-based WD Red drives, including WD Red Pro and the forthcoming WD Red Plus.
In addition to taking customer and partner feedback seriously, we conduct in-house testing on the WD Red family of drives for compatibility, performance, endurance and other factors. These drives typically have been validated for compatibility on many platforms from NAS manufacturers such as Synology, QNAP, Asustor, Buffalo, Netgear and Thecus. The DMSMR drives met all of our test requirements, and we’re actively working with system makers like Synology to ensure use cases are validated for customers.
As a leader in HDD and flash technologies, we are committed to addressing the evolving needs of our customers and to offering the right technology for each implementation. This philosophy launched WD Red drives years ago, and they have been a leader in their field ever since. We continue engaging customers and partners and analyzing real-world data to offer a family of WD Red NAS drives – from HDDs to SSDs – that serve all workloads and applications.
April 22, 2020
The past week has been eventful, to say the least. As a team, it was important that we listened carefully and understood your feedback about our WD Red NAS drives, specifically how we communicated which recording technologies are used. Your concerns were heard loud and clear. Here is that list of our client internal HDDs available through the channel:
|WD Red||WD Red Pro||WD Blue||WD Black||WD Purple|
|3.5″||1TB or below||CMR||CMR||CMR||CMR||CMR|
|2TB – 6TB||SMR||CMR||SMR / CMR||CMR||CMR|
|8TB and above||CMR||CMR||–||–||CMR|
|2.5″||500GB or below||–||–||CMR||CMR||–|
Click here for SKUs to our client internal HDDs using SMR technology.
We’re committed to providing the information that can help make an informed buying decision for as many uses as possible. Thank you for letting us know how we can do better. We will update our marketing materials, as well as provide more information about SMR technology, including benchmarks and ideal use cases.
Again, we know you entrust your data to our products, and we don’t take that lightly. If you have purchased a drive, please call our customer care if you are experiencing performance or any other technical issues. We will have options for you. We are here to help.
More to come.
April 20, 2020
Recently, there has been a discussion regarding the recording technology used in some of our WD Red hard disk drives (HDDs). We regret any misunderstanding and want to take a few minutes to discuss the drives and provide some additional information.
WD Red HDDs are ideal for home and small businesses using NAS systems. They are great for sharing and backing up files using one to eight drive bays and for a workload rate of 180 TB a year. We’ve rigorously tested this type of use and have been validated by the major NAS providers.
We typically specify the designed-for use cases and performance parameters and don’t always talk about what’s under the hood. One of those innovations is Shingled Magnetic Recording (SMR) technology.
SMR is tested and proven technology that enables us to keep up with the growing volume of data for personal and business use. We are continuously innovating to advance it. SMR technology is implemented in different ways – drive-managed SMR (DMSMR), on the device itself, as in the case of our lower capacity (2TB – 6TB) WD Red HDDs, and host-managed SMR, which is used in high-capacity data center applications. Each implementation serves a different use case, ranging from personal computing to some of the largest data centers in the world.
DMSMR is designed to manage intelligent data placement within the drive, rather than relying on the host, thus enabling a seamless integration for end users. The data intensity of typical small business/home NAS workloads is intermittent, leaving sufficient idle time for DMSMR drives to perform background data management tasks as needed and continue an optimal performance experience for users.
WD Red HDDs have for many years reliably powered home and small business NAS systems around the world and have been consistently validated by major NAS manufacturers. Having built this reputation, we understand that, at times, our drives may be used in system workloads far exceeding their intended uses. Additionally, some of you have recently shared that in certain, more data intensive, continuous read/write use cases, the WD Red HDD-powered NAS systems are not performing as you would expect.
If you are encountering performance that is not what you expected, please consider our products designed for intensive workloads. These may include our WD Red Pro or WD Gold drives, or perhaps an Ultrastar drive. Our customer care team is ready to help and can also determine which product might be best for you.
We know you entrust your data to our products, and we don’t take that lightly. If you have purchased a WD Red drive, please call our customer care if you are experiencing performance or any other technical issues. We will have options for you. We are here to help.
Whether in your pocket, home, car, or the cloud, it's likely Western Digital is with you every step of the way.
June is definitely a month for pride. Yes, because of the Pride celebrations held worldwide. But even more, due to the significant LGBTQ+ milestones achieved during this month over the years. The Stonewall Uprising for LGBTQ+ rights in Manhattan happened in June 1969. In 2013, strides were made for same-sex marriage. In 2015, same-sex marriage was fully legalized in the U.S.
This June, in a historic decision, the U.S. Supreme Court ruled that the 1964 Civil Rights Act protects gay, lesbian, and transgender employees from discrimination based on sexual identity.
We’re thrilled because, at Western Digital, we think it’s important to show up for the community. We thrive because we are different. We believe no one should be discriminated against for being who they are. It’s our diversity that gives us our daring, our strength, our advantage—the edge to our innovation. That’s why we are honored to voice our support for the #EqualityAct.
Helping Pride Shine Everywhere
The #EqualityAct is proposed federal legislation that would amend existing civil rights laws to provide full protections for LGBTQ+ employees throughout the US. Western Digital supports this legislation because we feel strongly that everyone should be treated equally. Our current policies are already consistent with the proposed #EqualityAct, as we do not condone bias, discrimination, or inequality at our workplace, or anywhere else.
Although this month’s U.S. Supreme Court decision amends the Civil Rights Act to protect the LGBTQ+ community from employer discrimination, Congress still needs to pass the #EqualityAct to extend such protections beyond company walls.
Tech Fundraiser for LGBTQ+ Youth
This June, to celebrate Pride, we are excited to launch our first-ever limited-edition rainbow USB Cruzer Snap drive. For every one of these drives sold through the end of August, Western Digital will donate $6 (a minimum of $10,000 total) to The Trevor Project, which provides suicide prevention and crisis intervention for LGBTQ+ young people.
Brought to life by employees from Western Digital’s LGBTQ+ community (We.Equal) we hope this colorful drive allows users to #ShareYourPride year-round by saving and sharing equally vibrant, authentic, and proud moments. As one of the largest stewards of data in the world, we know memories are crucial and document history in the making.
#ShareYourPride and Win a Prize!
We at Western Digital envision a better world that is fueled by diversity, inclusion, and equity. We want to celebrate this with you. On Saturday, June 27, we’ll be giving the first 50 people who tweet #ShareYourPride, and include an image about how they celebrate pride, at @WesternDigital on Twitter. If you are among those selected we will contact you via Twitter to provide you with a code to redeem your drive. They’re designed with pride, for Pride!
Let’s come together to celebrate over fifty years of Pride. We are excited about the momentum gained for the #EqualityAct and look forward to many more milestones in the future.
Learn more about our tech fundraiser to support LGBTQ+ youth
Follow us to stay updated on our #ShareYourPride campaign
All things data with news and insights on systems and technology that help you capture, preserve, access and transform your data.
Contributors to this blog include Matias Bjørling, Jorge Campello De Souza, Dave Landsman, Damien Le Moal, and Ted Marena.
We at Western Digital are very concerned about how to architect data infrastructure solutions for zettabyte scale. The demands of applications from IoT, automotive, video creation and surveillance mean that data center systems of this capacity will be a requirement in the not too distant future.
We’ve been working on technologies that create greater efficiencies for massive datasets through the Zoned Storage initiative. Zoned Storage is a framework for intelligently placing data on a device, and is an open-source, standards-based initiative to enable data centers to scale efficiently for the zettabyte storage capacity era.
A set of standards make up Zoned Storage, ZBC (Zoned Block Commands) and ZAC (Zoned ATA Command Set) for SMR (Shingled Magnetic Recording) HDDs and ZNS (Zoned Namespaces) for NVMe™ SSDs. The unifying zoned block interface for both HDDs and SSDs enables software developers and data center architects to realize the promise of Zoned Storage (capacity, costs, and endurance).
In this blog we’d like to focus on ZNS because there are significant milestones which were recently achieved that are particularly of interest for data center architects and developers.
NVMe Specification Ratification – ZNS Is Official
The Zoned Namespace (ZNS) Command Set specification has been ratified by the NVM Express consortium. The specification is available for download under the Developers -> NVMe Specification section of the www.nvmexpress.org public web site, as an NVM Express 1.4 Ratified TP.
With an approved standard, ZNS-based NVMe SSDs are poised to become an integral part of the Zoned Storage device ecosystem, complementing SMR HDDs. By enabling the sequential zoned storage model, ZNS allows the host and the SSD to coordinate data placement onto the SSD, providing higher write endurance and improved I/O access latencies, while enabling technologies such as QLC NAND to proliferate.
Software Upstreaming – Software Development Goes Public
Zoned Storage initial support in Linux® was introduced with the kernel version 4.10. Later kernel versions extended Linux zoned block device interface with new features and added support to various kernel components such as device-mapper drivers. We’ve been working with the open source community to integrate ZNS support in Linux to ensure that NVMe ZNS devices are compatible with the Linux kernel zoned block device interface. The Linux kernel modifications for ZNS have recently been publicly released on the developer’s mailing lists. These changes are expected to be accepted for the next Linux kernel version.
Enabling ZNS support in the Linux kernel is the first step. Modifications to well-known user applications and tools, such as RocksDB, Ceph, and the Flexible IO Tester (fio) performance benchmark tool, together with the new libzbd user-space library, are also being released.
To see further details on the software support, visit https://zonedstorage.io
Adoption and Ecosystem
The ZNS ecosystem is growing rapidly. Of course, Western Digital has been committed to Zoned Storage and ZNS, but many other organizations are now adopting this new standard, including public and private cloud vendors, all-flash array vendors, solid-state device vendors, and test and validation tool suppliers.
Even though the ZNS Command Set specification introduces a new zoned storage block interface for SSDs, much of the software required to adopt the model is already mature thanks to the existing SMR HDD software ecosystem, which accelerates the adoption of ZNS SSDs. With a small set of changes to the software stack, users of host-managed SMR HDDs can deploy ZNS SSDs into their data centers, and new adopters can take advantage of the existing software ecosystem. Furthermore, they can utilize the existing tools to accelerate support in their applications.
The unifying zoned block interface for both HDDs and SSDs enables software developers to support a single interface, accelerating storage deployments, and ultimately taking advantage of the benefits of Zoned Storage (capacity, costs, and endurance).
Where Data Infrastructure is Headed
As data infrastructure is rapidly changing, the momentum is toward open, purpose-built, scalable solutions. NVMe is a technology that will be ubiquitous in data centers moving forward, and ZNS will be key in helping scale storage needs.
To learn more about ZNS and other next-generation storage solutions, join the Storage Solutions Meetup Group and attend the initial event on July 21st. Info here.
Forward-Looking Statements
Certain blog and other posts on this website may contain forward-looking statements, including statements relating to expectations for our product portfolio, the market for our products, product development efforts, and the capacities, capabilities and applications of our products. These forward-looking statements are subject to risks and uncertainties that could cause actual results to differ materially from those expressed in the forward-looking statements, including development challenges or delays, supply chain and logistics issues, changes in markets, demand, global economic conditions and other risks and uncertainties listed in Western Digital Corporation’s most recent quarterly and annual reports filed with the Securities and Exchange Commission, to which your attention is directed. Readers are cautioned not to place undue reliance on these forward-looking statements and we undertake no obligation to update these forward-looking statements to reflect subsequent events or circumstances.
The SD card foresaw the future. Here’s what it’s telling us now.
When Micky Holtzman and I were asked to work on a new solution combining SanDisk’s memory production with our knowledge of MMC and flash controller development, I had no idea how far this project would take us. It was 1999, and flash memory was expensive and hard-pressed to compete with storage technologies of the time like floppy disks (believe it or not, more than five billion were sold per year worldwide around that time). We worked in utter secrecy, traveling the world and meeting with our partners at Toshiba and Panasonic as if we were covert agents. SanDisk’s engineering and SD card-related strategy at that time was led by Yoram Cedar.
The team driving the concept of the SD card had a vision of what this new standard could power. Every aspect of the design was important – from what it could support, to how you insert it into a device, to how the communications worked. It wasn’t just about creating a technology, it was about how to turn an idea into a commercial product.
This was the turn of the century. The first mobile phones with a camera had just been introduced (albeit with 0.35-megapixels, and photos weren’t viewable on the device itself). And, while we could play “Snake”, things like digital music players, social networking, ubiquitous video chat capabilities – let alone, smartphones – didn’t even exist yet.
The idea for the SD card was about foreseeing the coming of small, handy electronic devices and the value that such small memory devices could enable: digital music players, tablets, high-resolution digital cameras, personal video cameras, smartphones, and more.
This poster from twenty years ago might be funny in how we thought smartphones would look, but it was truly visionary as to exactly where technology would head over the next two decades.
It was also about understanding the power of a standard. At that time, the memory card marketplace was a confusing mix of largely proprietary cards. A card for one device could not be used for another one. But we were able to change that.
Through the formation of the SD Association (SDA), celebrating its 20th anniversary this year, and its dedicated fast-growing number of members worldwide, the SD memory card became the de facto leading standard for the next generation of digital media.
What Does the SD Card Tell Us 20 Years Later?
Now, 20 years later, where is technology headed? Is the SD card still relevant for the next decade of technology? What’s next?
First, we hear it often, but the impact of the growth of data cannot be overstated. Billions of connected devices are expected to generate 79.4 zettabytes (ZB) of data in 2025. Furthermore, the capabilities of 5G networks promise to open up a new realm of possibilities, but they will also require overcoming some of the bottlenecks of supporting hardware. On the storage end, that means delivering even larger capacities and higher performance.
Twenty years ago we could not imagine the type of applications that would become omnipresent in our lives, like social media, or how many images we would each create and consume on a daily basis. But we knew something big would be coming through the evolution of highly condensed and affordable memory. The first SD card had 8MB of memory. Now, with the evolution of 3D NAND scaling, people can store up to 1TB of data on microSD™ memory cards – almost a 125,000x capacity increase – due to our memory technology scaling and packaging capabilities. Speed, too, has increased nearly 100 times.
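That capacity multiple is easy to verify. The quick check below assumes decimal storage units (1 MB = 10^6 bytes, 1 TB = 10^12 bytes), which is how card capacities are marketed:

```python
# Verify the ~125,000x capacity growth claim, using decimal units.
first_sd_card_bytes = 8 * 10**6    # the first SD card: 8 MB (1999)
microsd_today_bytes = 1 * 10**12   # a 1 TB microSD card today

growth = microsd_today_bytes // first_sd_card_bytes
print(growth)  # 125000
```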
Similarly, there are emerging technologies that will make great strides into our lives over the next few years. Among them are autonomous vehicles, Artificial Intelligence, VR/AR, high-resolution gaming, and multi-channel IoT devices.
All of these will require new, high-speed memory interfaces and multi-channel operations to come to life.
The SD Card Reinvents Itself
In anticipation of the next generation of mobile experiences, the SD card has undergone an interesting transformation. New standards, SD Express and microSD Express, were created last year, adding PCIe® technology and the NVMe™ protocol to the popular SD card.
The first result is monstrous performance. As you can see in the graph below, the latest iteration offers 40x the performance of UHS-I devices. This is where we get a glimpse into the future. Such performance capabilities allow these tiny SD memory cards to serve as full-fledged removable Solid State Drives (SSDs) while keeping backward compatibility with billions of existing SD card slots in the market. This also opens up new possibilities for smaller, lighter, mobile devices for next generation applications (like small drones that can capture multiple high-res camera feeds).
Raising the Tide
Data is getting bigger, and we need to move it faster. Wearables, AR, multiple camera feeds, drones, personal computing expansion, smart home devices, automotive, AI and IoT are just some of the areas we’ll see this new standard bring data to life. Just like the SD card foresaw what was coming, these new SD Express and microSD Express standards are raising the tide for higher performance possibilities, and they, too, will open up new use cases for internal and removable storage.
A little over five years ago, the President of the United States held a SanDisk SD Card as he honored former SanDisk CEO, Eli Harari, as a recipient of the National Medal of Technology and Innovation. I wonder what technology we’ll see on that stage in ten years’ time.
Watch the first microSD Express demo.
Learn about our embedded and removable storage devices.
Watch Eli Harari as he spoke to reporters after he was awarded the National Medal of Technology and Innovation for his work with SanDisk to develop flash memory storage technology: https://www.c-span.org/video/?322856-3/national-medal-science-technology-winners-stakeout
SD & microSD memory cards – The world’s first choice in memory cards – 20 years of innovation. https://www.sdcard.org/press/whatsnew/SD_microSDMemoryCardsTheWorldsFirstChoiceInMemoryCards_20YearsOfinnovation.pdf
SDA 20th Anniversary Infographic. https://www.sdcard.org/press/whatsnew/SDA20thAnniversary_Infographic.pdf
SD and related marks and logos are trademarks of SD-3C LLC. © 2019-2020 SD-3C LLC. All Rights Reserved.
PCIe® is a registered trademark of PCI-SIG®.
NVM Express™ and NVMe™ are trademarks of NVM Express, Inc.
Yosi Pinto is Senior Technologist in the Standards Group at Western Digital and is Chairman of the Board for the SD Association.
As NVMe standards, technology and adoption evolve, we continue to update this article and expand this series. Since we last updated it in 2018, there have been a few developments. Let’s get started.
NVMe™ (Non-Volatile Memory Express) is a new protocol for accessing high-speed storage media that brings many advantages compared to legacy protocols. But what is NVMe and why is it important for data-driven businesses?
As businesses contend with the perpetual growth of data, they need to rethink how data is captured, preserved, accessed and transformed. Performance, economics and endurance of data at scale is paramount. NVMe is having a great impact on businesses and what they can do with data, particularly Fast Data for real-time analytics and emerging technologies.
In this blog post I’ll explain what NVMe is and share a deep technical dive into how the storage architecture works. Upcoming blogs will cover the features and benefits it brings to businesses, use cases where it’s being deployed today, and how customers take advantage of Western Digital’s NVMe SSDs, platforms and fully featured flash storage systems for everything from IoT edge applications to personal gaming.
My work has been associated with data storage protocols, in some way or another, for more than a decade. I have worked on enterprise PCIe SSD product management and long-term storage technology strategy, watching the evolution of storage devices up close. I am incredibly excited about the transformation NVMe is bringing to data centers, and the unique capability of Western Digital to deliver innovation up and down the stack. NVMe is opening a new world of possibilities by letting you do more with data! Here’s why:
The Evolution of NVMe
The first flash-based SSDs leveraged legacy SATA/SAS physical interfaces, protocols, and form factors to minimize changes in existing hard disk drive (HDD)-based enterprise server and storage systems. However, none of these interfaces and protocols were designed for high-speed storage media (i.e., NAND and/or persistent memory). Because of its interface speed, the performance of the new storage media, and proximity to the CPU, PCI Express (PCIe) was the next logical storage interface.
PCIe slots connect directly to the CPU, providing memory-like access, and can run a very efficient software stack. However, early PCIe SSDs had neither industry standards nor enterprise features. They leveraged proprietary firmware, which was particularly challenging for system scaling for several reasons, including: a) running and maintaining device firmware, b) firmware/device incompatibilities with different system software, c) not always making the best use of available lanes and CPU proximity, and d) a lack of value-add features for enterprise workloads. The NVMe specification emerged primarily because of these challenges.
What is NVMe?
NVMe is a high-performance, NUMA (Non-Uniform Memory Access) optimized, and highly scalable storage protocol that connects the host to the memory subsystem. The protocol is relatively new, feature-rich, and designed from the ground up for non-volatile memory media (NAND and persistent memory) directly connected to the CPU via the PCIe interface (see diagram #1). The protocol is built on high-speed PCIe lanes: a PCIe Gen 3.0 link can offer more than twice the transfer speed of the SATA interface.
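As a rough illustration of that bandwidth claim, consider the raw link numbers. The figures below are commonly cited approximations (PCIe Gen 3.0 delivers roughly 985 MB/s per lane after 128b/130b encoding, and SATA III about 600 MB/s), not measurements; real-world throughput depends on the controller and workload:

```python
# Back-of-the-envelope comparison of raw link bandwidth.
# Figures are commonly cited approximations, not measurements.

SATA3_MBPS = 600           # SATA III effective bandwidth, ~600 MB/s
PCIE_GEN3_LANE_MBPS = 985  # PCIe Gen 3.0 per lane, after 128b/130b encoding

for lanes in (1, 2, 4):
    speedup = lanes * PCIE_GEN3_LANE_MBPS / SATA3_MBPS
    print(f"PCIe Gen3 x{lanes}: ~{speedup:.1f}x SATA III")
```

Even a two-lane link clears the 2x mark, and a typical x4 NVMe SSD link offers well over six times the raw bandwidth of SATA.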
The NVMe Value Proposition
The NVMe protocol capitalizes on parallel, low latency data paths to the underlying media, similar to high performance processor architectures. This offers significantly higher performance and lower latencies compared to legacy SAS and SATA protocols. This not only accelerates existing applications that require high performance, but it also enables new applications and capabilities for real-time workload processing in the data center and at the Edge.
Conventional protocols consume many CPU cycles to make data available to applications. These wasted compute cycles cost businesses real money. IT infrastructure budgets are not growing at the pace of data and are under tremendous pressure to maximize returns on infrastructure – both in storage and compute. Because NVMe can handle rigorous application workloads with a smaller infrastructure footprint, organizations can reduce total cost of ownership and accelerate top line business growth.
NVMe Architecture – Understanding I/O Queues
Let’s take a deeper dive into the NVMe architecture and how it achieves high performance and low latency. NVMe can support multiple I/O queues, up to 64K, with each queue having up to 64K entries. Legacy SAS and SATA can only support a single queue, with 254 and 32 entries respectively. The NVMe host software can create queues, up to the maximum allowed by the NVMe controller, according to system configuration and expected workload. NVMe supports scatter/gather I/Os, minimizing CPU overhead on data transfers, and even provides the capability to change their priority based on workload requirements.
The picture below (diagram #2) is a very simplified view of the communication between the Host and the NVMe controller. This architecture allows applications to start, execute, and finish multiple I/O requests simultaneously and use the underlying media in the most efficient way to maximize speed and minimize latencies.
How Do NVMe Commands Work?
The way this works is that the host writes commands into the I/O Submission Queues and rings the doorbell registers (the “I/O commands ready” signal); the NVMe controller then fetches the commands from the submission queues, executes them, posts entries to the I/O Completion Queues, and sends an interrupt to the host. The host processes the completion queue entries and clears the doorbell register (the “I/O commands completion” signal). See diagram #2. This translates into significantly lower overhead compared to the SAS and SATA protocols.
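The handshake above can be sketched as a highly simplified model of one submission/completion queue pair. All names here are illustrative; real NVMe queues are ring buffers with head and tail doorbell registers defined by the specification, and this sketch only captures the ordering of the steps, not the actual register layout:

```python
from collections import deque

# Highly simplified model of one NVMe submission/completion queue pair.
# Real queues are ring buffers with head/tail doorbell registers; this
# sketch only illustrates the command flow, not the spec's layout.

class QueuePair:
    def __init__(self):
        self.sq = deque()   # submission queue: host -> controller
        self.cq = deque()   # completion queue: controller -> host

class Controller:
    def process(self, qp):
        while qp.sq:
            cmd = qp.sq.popleft()       # 3. controller fetches and executes
            qp.cq.append(f"{cmd}: OK")  # 4. posts a completion entry
            # (a real controller would then raise an interrupt)

class Host:
    def submit(self, qp, command):
        qp.sq.append(command)   # 1. host writes the command into the SQ
        controller.process(qp)  # 2. doorbell: controller sees new entries

    def reap_completions(self, qp):
        done = []
        while qp.cq:            # 5. host consumes CQ entries and clears
            done.append(qp.cq.popleft())  # the completion doorbell
        return done

controller = Controller()
host = Host()
qp = QueuePair()
host.submit(qp, "READ lba=0")
host.submit(qp, "WRITE lba=64")
print(host.reap_completions(qp))
```

Because each core (or application thread) can own its own queue pair like this one, commands flow to the device without the lock contention a single shared queue would impose.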
Why NVMe Gets the Most Performance from Multicore Processors
As I mentioned above, NVMe is a NUMA-optimized protocol. This allows for multiple CPU cores to share the ownership of queues, their priority, as well as arbitration mechanisms and atomicity of the commands. As such, NVMe SSDs can scatter/ gather commands and process them out of turn to offer far higher IOPS and lower data latencies.
NVMe Form Factor and Standards
The NVMe specification is a collection of standards developed and managed by the NVM Express consortium. It is currently the industry standard for PCIe solid-state drives across all form factors, including the standard 2.5” U.2 form factor, the internally mounted M.2, Add-In Cards (AIC), and various EDSFF form factors.
There are many interesting developments happening around added features to the standard, such as multiple queues, combined I/Os, defined ownership and prioritization processes, I/O multipathing and virtualization, asynchronous device updates, and many other enterprise features that have not existed before. My next blog goes into depth about these features and how they are opening up new possibilities for data-driven businesses.
We’re seeing the standard used in more use cases. One example is Zoned Storage and ZNS SSDs. NVMe Zoned Namespaces (ZNS) is a technical proposal under consideration by the NVM Express organization. It came about to contend with massive data management in large-scale infrastructure deployments by moving intelligent data placement from the drive to the host. To do so, it divides the LBA range of a namespace into zones that must be written sequentially and explicitly reset before being rewritten. The specification introduces a new type of NVMe drive that provides several benefits over traditional NVMe SSDs, such as:
- Higher performance through write amplification reduction
- Higher capacities through lower over-provisioning
- Lower costs due to reduced SSD controller DRAM footprint
- Improved latencies
Another interesting use case is the SD™ and microSD™ Express card, which marries the SD and microSD Card with PCIe and NVMe interfaces – see here. This is an example of the capabilities of the next generation of high-performance mobile computing.
Lastly, the NVMe protocol is not limited to simply connecting flash drives; it may also be used as a networking protocol, known as NVMe over Fabrics. This networking protocol enables a high-performance storage networking fabric with a common framework across a variety of transports.
Why is NVMe Important for your Business?
Enterprise systems are generally data starved. The exponential rise in data and demands from new applications can bog down SSDs. Even high-performance SSDs connected to legacy storage protocols can experience lower performance, higher latencies, and poor quality of service when confronted with some of the new challenges of Fast Data. NVMe’s unique features help to avoid the bottlenecks for everything from traditional scale-up database applications to emerging Edge computing architectures and scale to meet new data demands.
Designed for high-performance, non-volatile storage media, NVMe is the protocol that stands out in highly demanding and compute-intensive enterprise, cloud and edge data ecosystems.
I hope this blog has helped you understand what NVMe is, and why it’s so important. You can continue reading my next blog that walks through some interesting NVMe features for edge and cloud data centers. Or, see our range of NVMe SSDs with low latency and maximum throughput.
Rohit has more than 10 years of compute & storage industry experience in various capacities of increasing cross functional responsibilities.
“A lot of people think that living in a refugee camp is temporary. They think you are there for two or three years and leave. I lived there for 22 years. And I had to think, ‘What can I do for myself and my community?’”
So began the story of Lual Mayen, an award-winning developer and gaming studio founder with an incredible backstory. Lual was born in South Sudan, but a growing civil war led his parents to take him and his siblings and flee to Uganda. He was then raised in a refugee camp in northern Uganda, where he spent over two decades of his life.
In living with the lingering presence of violence, though, Lual found his passion and purpose. He decided to create immersive video games to promote peace and empathy – both in his country and around the world. And Lual saw technology as the bridge that would help him accomplish his dream. But, how could he learn to be a video game developer in a refugee camp with no public computers or Internet access? I sat down with Lual to talk about his gaming for good journey, and its unlikely connection to Western Digital.
Saving for Three Years for His First Computer
Before Lual could develop his groundbreaking video game, he had to learn the rules of game development. And before that, he needed a computer. His biggest supporter was his mother, and she started setting aside a portion of her earnings to help her son buy his first computer. Paycheck after paycheck, year after year. Three years later, she had saved the $300 needed for Lual to purchase a laptop.
Lual sees this gift as a moment that changed his life, describing, “It can’t go anywhere because it’s what started my route, my definition. No matter what I get today, that computer really defined what I’m going to be doing for the future and for the people in the world.”
Learning to Be a Video Game Developer
His next challenge was learning how to build his video game. To be a developer, Lual knew he needed to teach himself coding, graphic design, and other technical skills. The solution to his big problem was actually as small as a finger – a USB drive. During one of his 3-hour walks to a local city, Lual realized that he could load video game development lessons on a thumb drive, and bring them back to the camp to study. This would solve his lack of Internet connection by transporting data to his personal computer.
“So, I used to have my friend help me a lot. We’d sometimes go to the city and put [coding] tutorials on a SanDisk® USB drive. We could store like 20 tutorials and I’d put them on my computer. Then, we’d just listen to them. That helped me a lot to train myself and prepare my journey [as a developer].”
Yesterday, we were honored to meet Lual Mayen, winner of the Global Gaming Citizen Award and head of @junubgames. He created a game called "Salaam," which focuses on conflict resolution and compassion. We need more games like this. Thanks for sharing your powerful and amazing journey with us!
A post shared by WD_BLACK (@wd_black)
Launching Salaam to Support Real-Life Refugees
Now, as CEO of Junub Games, Lual is preparing for the launch of his studio’s video game, Salaam, this summer. The third-person runner game follows the story of a refugee fleeing civil war in their home country. It is unique in that players can buy in-game items that donate supplies to real-world refugee camps. Salaam has already attracted worldwide media coverage and near-universal praise. It continues Lual’s mission to help people understand what’s going on in the world and make a social impact on the global community through gaming for good. And it all started with a computer and a dream.
Learn More about Gaming for Good:
Michael is the Global Digital Marketing Lead for Gaming, maintaining the authenticity of WD_BLACK and the voice for the gaming community.
With the rise of Industry 4.0 and Industrial IoT (IIoT) applications, storage is being approached in a new light. In the past, storage may have been acquired based on capacity needs or price alone. Yet today’s autonomous applications and use cases call for other considerations. As developers think about the unique environments of their industrial applications, they need to include storage early in the process so it can be optimized for the IIoT ecosystem. This is an important change in thinking for IIoT design.
The infographic below shares some of the considerations you should take when thinking of storage for IIoT use cases. Whether it’s a robotic arm in a factory or a drone monitoring agricultural fields, these machines operate under very different conditions, and will require the right storage solution to optimally handle the deluge of data they will generate, process, and consume.
The industrial data landscape is changing rapidly. As more industrial applications are trained to apply AI and ML algorithms to get real-time actionable insights, the need for versatile and durable storage solutions will continue to increase. To keep up, solution architects should have storage as a central focus when designing industrial applications with proactive planning.
The bottom line is that not all storage devices are created equal. If you want to learn more about the variety of storage solutions and how to better plan your IIoT storage strategy, join me or stream my webinar Indispensable – Understanding Storage for the Industrial Internet of Things.
Download a PDF of our infographic here.
Learn more about our IIoT solutions here.
Yaniv Iarovici is Western Digital’s Marketing Director of IoT and Edge.