OpenStack, the open source cloud orchestration platform, is increasingly being adopted to run private clouds. 451 Research predicts OpenStack revenue will reach $3.3B by 2018, and large-scale enterprises such as Comcast, Bloomberg and Best Buy already use it in some capacity. At the same time, Ceph, an open source distributed, scale-out storage system that provides unified storage (block, object and file), has become the de facto storage backend for many OpenStack deployments:

OpenStack Block Storage (Cinder) driver usage – all deployments (line graph) and large production clouds of over 1,000 cores (pie graph), courtesy of OpenStack user survey, October 2015

The dominance of Ceph is no accident. Both Ceph and OpenStack share the same open source roots, providing a great deal of vendor independence and flexibility. Ceph's versatility serves the wide array of use cases deployed in OpenStack, whether Cinder block storage, Swift object storage, the Glance image store or Manila file storage. All of them are served by a single storage system, which makes it easy to manage and easy to deploy.

InfiniFlash System – Custom fit for Ceph and OpenStack

Early last year SanDisk® introduced a new class of all-flash storage systems – the award-winning InfiniFlash system – and pioneered a new industry category termed by IDC as “Big Data Flash”.

This next-generation storage platform offers 5x the density, 50x the performance and 4x the reliability of traditional HDD-based arrays, while consuming 80 percent less power, all with breakthrough economics: starting at less than $1 per gigabyte (GB) before compression or de-duplication technologies.

Because software-defined storage systems must run on a wide range of hardware, they rarely arrive well configured for any specific hardware platform. Ceph has numerous tunables that can make huge differences in performance, in some cases a 5x to 6x improvement when configured correctly for a particular platform. Getting this configuration right can be arduous and time-consuming. But customers can be assured that with the InfiniFlash system, our auto-tuning tools deliver well-configured, well-tuned software right out of the box.
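As an illustration of the kind of tunables involved, a minimal ceph.conf fragment for an all-flash deployment might look like the sketch below. The values are illustrative assumptions only, not InfiniFlash's shipped configuration; the right numbers depend on your hardware and workload:

```ini
# Illustrative all-flash tuning fragment (example values, not guidance)
[osd]
# More sharded op queues and worker threads let flash OSDs exploit
# parallelism; the defaults are sized for spinning disks
osd_op_num_shards = 8
osd_op_num_threads_per_shard = 2
# Deeper queues keep fast devices busy under small-block load
filestore_queue_max_ops = 5000
filestore_op_threads = 8
```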

InfiniFlash and Ceph

Let me walk you through the benefits of InfiniFlash, custom-built for Ceph and OpenStack.

Decoupling Elements

Separating out (disaggregating) storage, compute and networking enables an architecture perfectly suited to the versatility of Ceph. By design, InfiniFlash does not put CPU cores inside the storage unit for computation; storage compute is disaggregated from storage capacity, so you can choose exactly the amount of storage compute your workloads require, no more, no less. High-performance small-block Cinder workloads need more CPU cores, while bandwidth-driven large-object Swift workloads can get by with fewer. In addition, as users scale the cluster, they can scale storage compute and storage capacity independently, making for a perfectly balanced cluster and delivering a greater cost advantage for large-scale cluster deployments.
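The sizing logic behind this can be sketched as a back-of-the-envelope calculation. The cores-per-OSD ratios below are illustrative assumptions I am using to make the point, not SanDisk sizing guidance:

```python
# Back-of-the-envelope sizing for a disaggregated Ceph cluster.
# The ratios are illustrative assumptions, not vendor guidance.
CORES_PER_OSD = {
    "cinder_small_block": 4,   # IOPS-heavy: more CPU per OSD
    "swift_large_object": 1,   # bandwidth-heavy: little CPU per OSD
}

def storage_compute_cores(workload: str, osd_count: int) -> int:
    """Cores of storage compute to pair with a given OSD count."""
    return CORES_PER_OSD[workload] * osd_count

# Same storage capacity, very different compute requirements,
# so the two are scaled independently.
print(storage_compute_cores("cinder_small_block", 64))  # 256
print(storage_compute_cores("swift_large_object", 64))  # 64
```

Because compute and capacity are separate, adding capacity never forces you to buy CPU you do not need, and vice versa.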

InfiniFlash All-Flash Storage Cluster

Ceph Performance – Over a Million IOPS …

Early users of Ceph may have concluded that, for all its versatility, Ceph was not a solution for applications that require high performance. Some argued that in order to serve multiple protocols, Ceph had to layer multiple abstractions, which sacrificed performance. But that has changed. SanDisk has worked for over two years, along with the rest of the community, to optimize Ceph performance specifically for deployment on all-flash storage systems.

Over the past two years, many performance optimizations have been made to the open source code, such as improved parallel processing and finer-grained locking. As a result, Ceph can now achieve read IO performance that surpasses many storage systems. A single InfiniFlash unit coupled with Red Hat Ceph Storage software can deliver over a million random read IOPS at the 4K block size most commonly used for Cinder block storage.

Furthermore, we have observed that Ceph on InfiniFlash scales almost linearly with the addition of each node. With two InfiniFlash nodes, we achieve over 1.8 million IOPS for the same 4K Cinder block workload. Scaling performance along with capacity is an essential ingredient of any good OpenStack storage system.

Random read workload on a single unit InfiniFlash (512TB)
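To reproduce this kind of measurement on your own cluster, 4K random read performance against an RBD (Cinder) volume is commonly measured with fio's rbd engine. The pool and image names below are placeholders, and the queue depths are just a starting point:

```shell
# Measure 4K random read IOPS directly against an RBD image.
# Pool/image names are placeholders; requires fio built with rbd support.
fio --name=rand-read-4k \
    --ioengine=rbd --pool=volumes --rbdname=testvol \
    --rw=randread --bs=4k --iodepth=32 --numjobs=4 \
    --direct=1 --runtime=60 --time_based --group_reporting
```

Sweep `--iodepth` and `--numjobs` upward until IOPS stops climbing to find the cluster's saturation point.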

Lower Costs Than Ever Before

SanDisk is one of only a handful of vendors in the world with its own flash fab, and with it come the advantages of a fully vertically integrated solution and stack. With InfiniFlash we are able to bring to market an all-flash storage system at a breakthrough cost of less than $1/GB (raw capacity), before any benefits of de-duplication or compression are applied. Add to that the OPEX savings that come with flash: lower power and cooling, reduced maintenance, lower failure rates and so on.

But the real savings are realized from InfiniFlash’s innovative platform design that is well suited for OpenStack Ceph deployments:

  • Lowering the cost of servers used in the cluster: InfiniFlash's high density results in fewer Ceph OSDs per unit of capacity, and thereby requires much less CPU than sparser SSD or HDD configurations.
  • Superior reliability requires less hardware: With 10x higher reliability (1.5 million hours MTBF)* than a typical HDD node, you no longer need the three full copies typically kept in HDD deployments. Many customers achieve higher availability with two copies on an InfiniFlash system than with a three-copy HDD deployment. And recovery/rebuild times are 7x faster, which further reduces the need for additional copies.
  • Erasure coding becomes possible for active object workloads: InfiniFlash makes storage-efficient technologies like erasure coding practical for active object workloads, not just for the passive archives they have traditionally been limited to on HDD nodes. This cuts storage consumption by almost a third without impacting performance. The days of keeping three or even two copies of object data are gone.
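As a sketch of what this looks like in practice, the standard Ceph CLI can create a two-copy replicated pool for block storage and an erasure-coded pool for object data. The pool names, placement-group counts and k/m values below are illustrative choices, not a recommendation:

```shell
# Two-copy replicated pool for Cinder block storage on flash
# (pool names and PG counts are illustrative)
ceph osd pool create cinder-volumes 1024 1024 replicated
ceph osd pool set cinder-volumes size 2

# Erasure-coded pool for active object data: k=4 data chunks plus
# m=2 coding chunks store each object at 1.5x raw capacity,
# versus 3x with three-way replication
ceph osd erasure-code-profile set ec-4-2 k=4 m=2
ceph osd pool create swift-objects 1024 1024 erasure ec-4-2
```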

New IT Demands Require New Thinking

With the new economics of InfiniFlash, flash is no longer just for top-tier workloads but a cost-efficient solution across all storage tiers, uniquely suited to OpenStack Ceph deployments. As IT keeps pace with business demands to deliver services, private and hybrid clouds continue to grow and gain adoption. These new cloud architectures place increasing demands on IT infrastructure, storage in particular: it must be highly scalable, high performance, cost effective and open. InfiniFlash with Ceph meets these demands with cost-effective performance.

We’re thrilled to work with Red Hat to set the bar even higher for next-generation, scale-out storage. Learn more about our partnership and how together, we help organizations accelerate the transition to modern IT infrastructures, on the Red Hat Storage blog.


* Annual Failure Rate (AFR) as compared to HDDs. InfiniFlash AFR based on internal testing. Results available upon request.

Venkat has over 20 years of experience in the IT industry with extensive expertise in Enterprise Storage systems and Enterprise Software.