Our goal for the new “Speeds, Feeds and Needs” blog series is to make it a destination for anyone interested in enterprise storage. Whether you are a CIO or a system designer, we want to address topics of interest to you in a fashion that makes them valuable to you. The series is designed for audience members who have a keen interest in taking a deeper dive into the more technical aspects of storage.
Before delving into our first topic in this series – latency – let me first say that while we will cover technical topics, we will try to do so in a way that anyone can digest and gain value from. In other words, this won’t be a series of engineering papers!
To put it simply, latency is the amount of time it takes to complete a given task. The most familiar example is application response time: when you are using an application and request information to be processed or provided, latency is the term for the amount of time it takes for the application to fetch the data and give you a response. Several popular hyper-scale companies have publicly stated that latency matters. Amazon has reported that every 100ms of latency cost it 1% in sales. Likewise, Google found that an extra 0.5 seconds in search page generation time dropped traffic by 20%. No wonder service level agreements for application owners often explicitly spell out latency requirements for acceptable end-user performance. Storage plays a critical role because the information being processed and requested is typically persisted on some form of non-volatile storage – this is true for both hard drives and SSDs – and the access time can differ for a variety of reasons, a couple of which we will discuss below.
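To make the definition concrete, here is a minimal Python sketch of how you might time a single request end to end. The `fake_request` function and its 10 ms sleep are stand-ins for a real application call, not anything from a specific product:

```python
import time

def measure_latency(operation):
    """Time a single operation and return its result and elapsed seconds."""
    start = time.perf_counter()
    result = operation()
    elapsed = time.perf_counter() - start
    return result, elapsed

# Simulated "application request": the sleep stands in for a storage round trip.
def fake_request():
    time.sleep(0.01)  # pretend the backend takes about 10 ms
    return "response"

result, elapsed = measure_latency(fake_request)
print(f"request took {elapsed * 1000:.1f} ms")
```

The elapsed time measured this way is exactly the "latency" the rest of this post is about: everything between asking for data and getting it back.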
Now back to our planned post…
Our recent string of announcements, including the ULLtraDIMM, CloudSpeed, Lightning and Optimus drives, has led a lot of customers, partners and other passionate data center aficionados to come to us seeking to understand when they should deploy each solution, and why. In most cases, people tend to think they only need to look at performance and capacity. That view comes from how storage has been looked at historically – hard drives were all fairly close to one another when it came to latency. In the case of hard drives, the IO time is a combination of ‘rotational latency’ (the time a disk platter takes to rotate to the right sector), ‘seek time’ (the time to move the disk arm to the right cylinder) and, finally, the ‘data transfer time’ to actually read or write the data. Because the rotational latency and seek times are quite significant, applications, and in turn operating systems, have focused on minimizing them, often opting for sequential IO when possible. This is not the case for SSDs. Built on non-volatile memory with no moving parts, SSDs eliminate rotational latency and seek time and, consequently, improve random IO performance many times over.
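The hard drive arithmetic above can be sketched as a quick back-of-the-envelope calculation. The drive figures below (7,200 RPM, a 4 ms average seek, a 0.1 ms transfer) are illustrative assumptions, not measurements of any particular product:

```python
def avg_hdd_access_ms(rpm, avg_seek_ms, transfer_ms):
    """Average random-IO time: rotational latency + seek time + transfer time."""
    rotation_ms = 60_000.0 / rpm           # one full platter rotation, in ms
    rotational_latency_ms = rotation_ms / 2  # on average, wait half a rotation
    return rotational_latency_ms + avg_seek_ms + transfer_ms

# Hypothetical 7,200 RPM drive with a 4 ms average seek
# and a 0.1 ms transfer time for a small block.
total = avg_hdd_access_ms(rpm=7200, avg_seek_ms=4.0, transfer_ms=0.1)
print(f"{total:.2f} ms per random IO")  # roughly 8.27 ms
```

Note that the mechanical terms (rotation and seek) dominate the total, which is exactly why hard-drive-era software worked so hard to keep IO sequential.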
Today’s Latency Levels
When it comes to SSDs, however, latency can still become a factor, varying significantly based on the interface and where the flash sits in the infrastructure. For instance, flash-based devices using the SAS and SATA interfaces have an order of magnitude higher latency than a PCIe drive does, coming in around 500-600 microseconds versus 50-70 microseconds respectively. For some applications, having this much lag within the system isn’t a problem because real-time access to large data sets isn’t required to achieve the necessary output for the business. However, for virtualized environments or cloud computing, that lag time can quickly bring things to a crawl, impacting the end users’ experience. In those scenarios, PCIe works better. That being said, even 50-70 microsecond latency can be too much, as with High-Frequency Trading (HFT) or Big Data analytics, where even milliseconds of lag can mean millions of dollars in lost revenue. Our ULLtraDIMM, a DDR3-based flash device that achieves an astonishing 5 microsecond write latency, is much more appropriate for these types of applications.
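One way to see why those latency tiers matter is a simple upper-bound model: with a single outstanding IO, a device can complete at most one operation per latency interval. The sketch below reuses the rough figures quoted in this post; it is a ceiling for queue depth 1, not a benchmark of real drives:

```python
def max_iops_single_queue(latency_us):
    """Upper bound on IOPS for one outstanding IO at the given latency (microseconds)."""
    return 1_000_000 / latency_us

# Representative latencies from the discussion above (illustrative, not measured).
for name, latency_us in [("SAS/SATA SSD", 550), ("PCIe SSD", 60), ("ULLtraDIMM write", 5)]:
    iops = max_iops_single_queue(latency_us)
    print(f"{name:>16}: {iops:>9,.0f} IOPS at queue depth 1")
```

Real devices recover throughput by keeping many IOs in flight at once, but for latency-sensitive work such as HFT, each individual request still has to wait out that per-operation delay.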
Another item that comes into play due to the interface is latency consistency. SAS and SATA devices must pass through multiple gates to complete an input/output operation, most notably the I/O hub. Because there are only so many lanes, the I/O hub introduces contention, causing data to build up and make its way through more slowly. This leads to latency peaks and valleys. PCIe devices avoid the I/O hub, but have a PCIe hub in the data path instead, which can lead to some spikes as well. Devices connecting via DDR3 remove both of these points of contention, communicating directly over the memory bus. The result is a much flatter, more consistent latency profile.
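Latency consistency is usually judged by comparing the average against a tail percentile such as p99. This small Python sketch simulates two hypothetical devices with made-up numbers: a "spiky" one that occasionally stalls behind a contended hub, and a "flat" one, with the same average latency:

```python
import random
import statistics

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    k = max(0, min(len(ordered) - 1, int(round(pct / 100 * len(ordered))) - 1))
    return ordered[k]

random.seed(42)
# Hypothetical traces in microseconds: same mean, very different tails.
# The spiky device stalls for an extra 900 us on about 2% of operations.
spiky = [100 + (900 if random.random() < 0.02 else 0) for _ in range(10_000)]
flat = [118 for _ in range(10_000)]

for name, trace in [("spiky", spiky), ("flat", flat)]:
    print(f"{name}: mean={statistics.mean(trace):.0f} us, p99={percentile(trace, 99)} us")
```

The averages look identical, but the spiky device's p99 is several times worse, and for an application waiting on a chain of IOs, it is the tail that users feel.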
Matching SSDs with Application Performance
To sum up, latency is often overlooked when deploying SSDs in your environment, yet the type of SSD you use can have a tremendous impact on the application performance you actually see. It is important to first understand which applications you are looking to enhance with flash and how quickly and consistently they need data. Knowing this will not only help you narrow your search to a smaller group of SSDs, but also ensure you achieve the performance improvement you expect.
I hope you enjoyed the first post of our new “Speeds, Feeds and Needs” series. We will be delving into many more topics in the future, so if you are interested in learning the ins and outs of flash and storage technologies to make more informed decisions, be sure to check back often.
If you have any feedback on how the “Speeds, Feeds and Needs” series can help meet your needs and answer your questions, tweet me at @HemantGaidhani and join the conversation with @SanDiskDataCtr!
Hemant has extensive experience in product management and marketing, software development and performance engineering. At Western Digital, he is instrumental in developing best practices and reference architectures for deploying SSDs in multi-tier enterprise applications, including Tier-1 business-critical applications, virtualization, and big data technologies such as Hadoop and NoSQL databases. Prior to Western Digital and his role at SanDisk, he worked at leading high-technology companies such as VMware, EMC, Informix and Commerce One, and has presented at several industry conferences, such as VMworld, EMCWorld and InterOp. Hemant is co-author of the book "Virtualizing Microsoft Tier 1 Applications with VMware vSphere 4" and numerous other technical collateral. He received a Bachelor of Science in Electrical Engineering from B.I.T.S., Pilani, India, and an M.B.A. from Santa Clara University.