Two Lies and a Truth: Server-Side Caching

Recently I did a webinar about server-side caching using flash. We tried to have a little fun with it by naming it after the game “Two Lies and a Truth”, a common icebreaker used at meetings where people need to get to know each other quickly. You tell two lies about yourself and one truthful statement, and people try to figure out which is true. It can be fun! It also seems to apply to flash caching, because there is quite a bit of confusion floating around about flash in general and about caching in particular.

So let’s play the home version!

  • Write-back caching is always faster than write-through (AKA read-only) caching
  • Write-back caching puts your data at risk
  • Application performance with caching depends on several factors. Most importantly, it depends on the characteristics of your data set, such as data block size and data locality, along with your back-end storage performance, the size of your cache, the performance of your flash device, and the amount of RAM (the toy model after this list shows how a few of these interact).
  • It Depends
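
That third answer is the truthful one, and a toy read-latency model shows why so many factors are tangled together. The device numbers below are made-up assumptions rather than benchmarks, and the hit rate stands in for the combined effect of cache size and data locality:

```python
# Toy model: effective read latency with a flash cache in front of
# slower back-end storage. All numbers are illustrative assumptions.

FLASH_READ_US = 100      # assumed SSD read latency (microseconds)
BACKEND_READ_US = 8000   # assumed back-end storage read latency

def effective_read_latency(hit_rate: float) -> float:
    """Average read latency for a given cache hit rate."""
    return hit_rate * FLASH_READ_US + (1.0 - hit_rate) * BACKEND_READ_US

# Hit rate rises with cache size and data locality -- and that is
# where "it depends" lives: the same cache can look great or useless.
for hit_rate in (0.20, 0.60, 0.95):
    print(f"hit rate {hit_rate:.0%}: {effective_read_latency(hit_rate):,.0f} us")
```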

OK, as IT quizzes go, that’s not very hard. Any answer with “It depends” will always be true. And any answer with “always” or “never” will always be false. Including this one. Damn tautologies.

So why are they wrong?

Shouldn’t write-back caching always be faster?

Not really. As in any other performance situation, if you speed something up that isn’t a bottleneck, you don’t gain anything. Kind of like speeding up the caboose on a train: you’re not going any faster than the engine up front. If your application isn’t being delayed waiting for writes – for example, if it’s a web server that doesn’t do many writes – then speeding up writes won’t help you. And write caching has overhead of its own, so it is theoretically possible for a write-back cache to slow down a read-intensive workload.
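
A quick back-of-envelope calculation makes the point. The percentages below are purely hypothetical; the bound itself is just Amdahl’s law applied to the write path:

```python
# Amdahl's-law-style bound: speeding up writes helps only in
# proportion to the time a workload actually spends writing.
# The fractions below are illustrative assumptions, not measurements.

def overall_speedup(write_fraction: float, write_speedup: float) -> float:
    """Whole-workload speedup when only the write path gets faster."""
    return 1.0 / ((1.0 - write_fraction) + write_fraction / write_speedup)

# Read-heavy web server: say 5% of I/O time is spent on writes.
print(overall_speedup(0.05, 10.0))           # ~1.05x with 10x faster writes
print(overall_speedup(0.05, float("inf")))   # ~1.05x even with "free" writes

# Write-intensive database: say 60% of I/O time is spent on writes.
print(overall_speedup(0.60, 10.0))           # ~2.2x -- write-back pays off
```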

Know thyself. Know thy workload. Don’t speed up things that aren’t slowing you down.

OK, but how about that data at risk thing? If I’m caching writes, doesn’t that mean that at some point that write data is ONLY on the SSD? What happens if the SSD dies?

Well yes, SSDs can die too. But no more often than hard drives do. In fact, with no moving parts, they can be far more reliable than their spinning counterparts. And we’ve been comfortable with dying hard drives for a long time. Why are we comfortable with dying hard drives? Because we RAID them. And you’ll do the same thing with the SSDs you use for write-back caching. If one SSD in the mirrored pair dies, the data is still safe on the other one. You’ll need to stop caching and flush your dirty data (commit the cached data to permanent storage) to ensure it stays safe. But it’s there. A properly managed write-back caching environment puts your data at no more risk than normal operations.
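
To make that recovery path concrete, here is a minimal sketch of a write-back cache mirrored across two SSDs. The class and device names are hypothetical, and real caching software handles far more edge cases (write ordering, crashes mid-flush, and so on):

```python
# Sketch: write-back cache mirrored across two SSDs. If one SSD dies,
# caching stops and the surviving copy's dirty data is flushed
# (committed) to permanent storage. Names are hypothetical.

class Device:
    """Trivial in-memory stand-in for an SSD or back-end volume."""
    def __init__(self, name: str):
        self.name, self.blocks = name, {}

    def write(self, block: int, data: bytes) -> None:
        self.blocks[block] = data

class MirroredWriteCache:
    def __init__(self, ssd_a: Device, ssd_b: Device, backend: Device):
        self.ssds = [ssd_a, ssd_b]
        self.backend = backend
        self.dirty = {}              # blocks not yet on the backend
        self.caching_enabled = True

    def write(self, block: int, data: bytes) -> None:
        if not self.caching_enabled:
            self.backend.write(block, data)   # degraded mode: write through
            return
        for ssd in self.ssds:                 # mirror to both SSDs first
            ssd.write(block, data)
        self.dirty[block] = data              # acknowledge before backend I/O

    def on_ssd_failure(self, dead_ssd: Device) -> None:
        """One SSD died: stop caching, flush dirty data from the mirror."""
        self.ssds.remove(dead_ssd)
        self.caching_enabled = False
        for block, data in self.dirty.items():
            self.backend.write(block, data)   # commit cached data
        self.dirty.clear()

cache = MirroredWriteCache(Device("ssd0"), Device("ssd1"), Device("backend"))
cache.write(7, b"hello")
cache.on_ssd_failure(cache.ssds[0])  # data survives: mirror copy is flushed
```

The key property in this sketch is that a write is acknowledged only after it sits on both SSDs, so a single device failure never leaves dirty data with just one copy.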

So what’s the point? The point is that if your application IS write-intensive, and your storage isn’t processing those writes fast enough, then write-back caching might be a really good solution for you. But protect your data using RAID for redundancy.

Upcoming Webinar

And don’t miss out on August 20th as Rich Peterson discusses removing VMware storage bottlenecks with server-tier flash – register for his webinar here. You’ll learn how new technologies enable solid-state storage to be deployed in the server, as a complement to existing storage, delivering flash-memory performance at a far lower cost than a storage upgrade.

Take the opportunity to learn more about caching and how to implement it by streaming my last webinar. You’ll see some surprising benefits when write-back caching is done right.

