Enterprise Storage: 6 Predictions for 2016
It’s tough predicting the future, but somehow people always ask me to do just that. Of course, the shorter the horizon, the easier the prediction gets. So it’s good I’m being asked for my personal take on enterprise storage and flash in 2016. Next year? I mean – how hard can that be?
In some ways, 2016 is the calm before the storm, in anticipation of 2017 and the biggest architectural changes in 20 or 30 years. That’s when we’ll see Rack Scale Architectures, disaggregation, in-rack fabrics, pooled storage, and huge NV main memory deployments all lumped together. But that’s another topic for another day. 2016 does have some big things happening anyway. Here are my top six that you should be aware of:
1. All-Flash Data Center?
Will 2016 be the year people stop buying 15K and 10K hard drives? The economic analysis points in that direction. Between server utilization, hardware and software license consolidation, and MTBF, you should already be deploying SSDs, not 15K hard drives. It’s just math. In many instances, it’s not just long-term TCO that makes flash more cost-efficient – the actual purchase price of the equipment is cheaper with SSDs. Having a better solution is just a bonus. Hard disk drives will be around for a long time, but 2016 is when they should shift to archival duty, and the role of flash as primary storage should be firmly established. But who knows if 10K and 15K hard drives will see their end – that’s up to you, the end user, though I know old habits die hard.
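To make the “it’s just math” point concrete, here is a toy spreadsheet-style comparison for an IOPS-bound workload. Every figure in it (drive IOPS, prices, wattage, electricity cost) is an illustrative assumption I chose for the sketch, not a quoted price or a number from this article:

```python
# Toy TCO sketch: SSDs vs. 15K HDDs sized for a fixed IOPS target.
# All constants below are illustrative assumptions, not real quotes.

IOPS_TARGET = 100_000          # random-IOPS requirement for the workload

HDD_IOPS, HDD_PRICE, HDD_WATTS = 200, 150, 10      # per 15K HDD (assumed)
SSD_IOPS, SSD_PRICE, SSD_WATTS = 50_000, 600, 6    # per SATA SSD (assumed)

KWH_PRICE, YEARS = 0.10, 5     # assumed $/kWh and ownership period

def fleet_cost(drive_iops, drive_price, drive_watts):
    """Drives needed to hit the IOPS target, plus purchase + power cost."""
    drives = -(-IOPS_TARGET // drive_iops)          # ceiling division
    power_cost = drives * drive_watts / 1000 * 24 * 365 * YEARS * KWH_PRICE
    return drives, drives * drive_price + power_cost

hdd_drives, hdd_cost = fleet_cost(HDD_IOPS, HDD_PRICE, HDD_WATTS)
ssd_drives, ssd_cost = fleet_cost(SSD_IOPS, SSD_PRICE, SSD_WATTS)
print(f"HDD: {hdd_drives} drives, ${hdd_cost:,.0f} over {YEARS} years")
print(f"SSD: {ssd_drives} drives, ${ssd_cost:,.0f} over {YEARS} years")
```

Even with generous assumptions for the hard drives, an IOPS-bound sizing exercise like this tends to favor flash on purchase price alone – the TCO savings come on top.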
2. Fabrics: Everything Connected
The biggest change in 2016 is probably cost-effective 100G Ethernet. Hey – wait a minute – that’s not even storage. Let me tell you, it will change storage. It just so happens that a single 25G lane (100G = 4x 25G lanes) has about the same bandwidth as a x4 PCIe Gen3 SSD’s interface (24G). And NVMe happens to speed up the IO stack, versus the standard SCSI / block stack, by roughly the same amount as the overhead Ethernet adds. That opens the door to RDMA-based NVMe over Ethernet (NVMeOE) with about the same performance as DAS. And there is legitimately enough bandwidth in 100G that you can efficiently move both storage and network traffic over one cable.
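A quick sanity check of that lane arithmetic, using raw line rates (real-world throughput lands lower once protocol overhead is subtracted, which is how effective figures like the 24G above come out below the raw numbers):

```python
# Back-of-envelope check of the lane math: one 25G Ethernet lane vs. a
# x4 PCIe Gen3 NVMe SSD interface. Raw line rates only.

ETH_LANE_GBPS = 25            # one lane of 100G Ethernet (100G = 4 x 25G)
PCIE_GEN3_LANE_GTPS = 8       # PCIe Gen3 per-lane transfer rate (GT/s)
PCIE_ENCODING = 128 / 130     # Gen3 uses 128b/130b line encoding

# Usable bandwidth of a x4 PCIe Gen3 interface, in Gb/s:
nvme_x4_gbps = 4 * PCIE_GEN3_LANE_GTPS * PCIE_ENCODING

print(f"x4 PCIe Gen3: {nvme_x4_gbps:.1f} Gb/s raw")
print(f"25G Ethernet lane: {ETH_LANE_GBPS} Gb/s raw")
# Same ballpark per lane, so a single 100G port can front roughly four
# NVMe SSDs without a severe bandwidth mismatch.
```

That rough parity per lane is the whole reason the fabric story works: four SSDs, four lanes, one cable.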
There is an amazing amount of work proceeding in hidden labs and startups, and it will hit fast. Earlier this year, Amazon Web Services (AWS) bought a stealth startup called Annapurna Labs that specializes in server networking smart NICs and NVMeOE.
3. Software Defined Storage (SDS)
If you have not already done so, 2016 may very well be the year you deploy SDS and datacenter orchestration layers like OpenStack, which seems to have hit critical mass. We’re seeing a lot of converged systems that rely on flash being deployed in 2016, with flash performance compensating for the overhead of better management and abstraction. And VMware’s recently released Virtual SAN 6.0 finally allows performance-critical apps to take advantage of flash. All of this sets the stage for even more flash-enabled server consolidation.
4. NVMe: A Building Year
NVMe will continue to make strides, but the reality is that until there are lots of drive bays to plug them into, it’s hard to deploy NVMe at scale. Next year we’ll see many more servers available with NVMe drive bays. In the meantime, SATA and SAS SSDs are selling in huge volume and have become a surprisingly resilient workhorse for the compute-efficient utility computing crowd (hyperscale, public and private cloud).
5. Memory: (Yes – the Load / Store Kind)
Maybe – just maybe – we’ll see “Flash as Memory” – that is, tens to hundreds of TBs per node on certain open source databases. Data structures will be accessed directly, on a cache-line basis, as very slow memory.
Before you choke on your coffee and say “how stupid is that?” – the value is in eliminating most of the IO stack. A heavily IO-bound application, like a database, can grind a server to a halt. I often hear about great utilization – 80% to 90% CPU utilization. The problem is that 75% to 85% of those cycles go to managing the IO; only about 10% goes to the actual database. Now if it were running in-memory, even on slow memory, it would stall a lot on accesses, but all of a sudden that same server could be spending 30% to 40% of the CPU on the actual database. That sounds a lot like 4x work per server to me. And once we move to actual pools of low-latency NV load/store main memory in late 2017 through 2018, I’m expecting we’ll see 50x more work from specific applications on a single server. 50x! That’s not a typo. And it’s worth pointing out that many high-value enterprise apps are being re-written right now to run in-memory. OK – back to your coffee…
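The “about 4x” figure above falls straight out of the utilization numbers in the paragraph; here is the same arithmetic spelled out, using the midpoints of the article’s illustrative ranges (these are the article’s example figures, not measured data):

```python
# Consolidation math from the paragraph above, using midpoint values.
cpu_util = 0.85              # "80% to 90% CPU utilization"
io_overhead = 0.75           # midpoint of "75% to 85%" spent managing IO

# Fraction of total cycles doing useful database work while IO-bound:
db_work_iobound = cpu_util - io_overhead       # ~0.10, matching "only 10%"

db_work_inmemory = 0.35      # midpoint of "30% to 40%" once IO stack is gone

speedup = db_work_inmemory / db_work_iobound
print(f"Useful database work per server: {speedup:.1f}x")
```

Roughly 3.5x on the midpoints – which is where the “sounds a lot like 4x” claim comes from.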
6. More Hyperscale, Less Enterprise, Sadly
About 60% of the flash in datacenters is going into “hyperscale” datacenters.[i] You know, the Googles, Amazons, Facebooks, and Microsoft Azures of the world… Only 40% of flash consumption is going into enterprise, mostly through OEMs. Hyperscale environments need to be extremely cost-efficient, yet they are more than happy to spend on flash SSD solutions. In fact, some of them are moving to 100% flash – no hard disks. Why do you think their services are priced so reasonably? Flash enables them to be more cost-effective, getting more work / $ out of their entire IT spend.
What’s more surprising is that in 2016 we expect to see hyperscale consume closer to 70% of the flash in data centers, with only 30% going into traditional enterprise. Come on guys – you can’t let them beat you! Seriously, traditional enterprise is simply not enjoying the advantages of flash and SSDs that it should – you’re wasting money. Again, it’s about spreadsheet economics – aggregate work/$ spent, work/server, server utilization, hardware and software license consolidation, simplified administration, improved uptime, SLAs and performance under failure, as well as dramatically better operations. You can be brave, change the trend, and build a better datacenter at the same time. So prove me wrong in 2016!
From all-flash data centers’ improved economics, to new high bandwidth fabrics, new types and uses of memories for re-written major applications, software defined storage, and growth in NVMe – maybe it will be a big year after all, even as we wait for 2017.
The interesting question is – are you going to adopt these technologies? That I can’t predict – it’s up to you.
[i] Based on 2014/2015 SanDisk® and Forward Insights Data