The cloud hosting model has completely transformed the IT industry, yet not every workload is ideally suited for the public cloud, or will see cost savings there. Here’s what you need to know when examining private cloud vs. public cloud for your data center workloads.

The Benefits of the Public Cloud

Let’s start with why the public cloud took the IT world by storm in the first place. It’s difficult to argue with the benefits of public cloud computing. Over the past decade, the model has made it possible to build out your entire IT infrastructure without doing any of the hard work yourself. You don’t have to buy servers or network gear. You don’t have to install operating systems or run cables. You don’t even need a data center. All you need is a valid credit card number, and with a few clicks, your system is up and running.

The time spent developing new applications is dramatically reduced as well. There is a wide array of APIs and microservices that developers can take advantage of, which speeds up development and allows you to add powerful features with minimal effort. This translates to reduced costs, shorter time to market, and increased productivity overall.

Private Cloud vs. Public Cloud: Between Value and Cost-Savings

The public cloud is generally sold as a cost-savings technology, but many companies have learned that it doesn’t always work out that way in practice. In a recent survey of IT decision makers, one of the biggest concerns with public cloud deployments was keeping costs under control. Once you’ve factored in all of the different services, management costs, and metered usage fees, cloud-hosted solutions are often far more expensive than originally anticipated, and harder to oversee.

When people talk about the cost savings of the public cloud, there is an important caveat they tend to forget: the public cloud can save you a lot of money, but only when it’s applied to the right problems.

The real value of the public cloud has never been about cost savings, at least not directly. The real value is its agility.

The ability to deploy two hundred servers on demand (or spin up a million cores, as we did) with a few mouse clicks is an extremely powerful capability. It allows you to respond quickly to changing market conditions, without spending a fortune to refit your data center for every emerging change.

However, it’s important to keep in mind that not every business unit operates this way.  There are many systems which remain relatively static, and the extra agility may not be as valuable for permanently installed components.  When you put together your cloud strategy, it’s important to understand where this model adds value and where it does not.

When to Avoid the Public Cloud

Contrary to popular belief, not every workload is ideally suited for the public cloud. High performance computing is a good example: run enough of certain workloads and the cloud can shift from cost effective to cost prohibitive. Cloud-based compute resources usually take the form of containers or virtual machines, where system resources such as CPU, memory, network, and storage are shared among multiple users. Even the most powerful compute instances are weak compared to what you can get on bare metal. Some hosting providers offer high-performance tiers on bare metal with fast processors and terabytes of memory, but the annual hosting fees for those systems can easily exceed six figures per machine! On a temporary basis this can still be more cost effective than building out a full data center and having it sit idle most of the time, but for ongoing projects it is not cost effective at all.

Lastly, petabyte-scale storage is not always a great candidate for cloud hosting either. If your data holdings are in the terabyte range, the cloud can still be very affordable. But when you’re measuring your storage requirements in petabytes, public cloud hosting is often the most expensive solution. If you’re dealing with big data, petabytes of storage, and massive file transfers, the monthly hosting and usage fees can burn through your IT budget fast.
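To make the scale effect concrete, here is an illustrative back-of-the-envelope calculation. The per-gigabyte rate is a hypothetical placeholder, not any provider’s actual pricing, and it ignores egress and request fees, which only widen the gap.

```python
# Back-of-the-envelope cloud storage costs at different scales.
# The rate below is a hypothetical placeholder, not real pricing,
# and egress/request fees are ignored.

def monthly_storage_cost(capacity_tb, price_per_gb_month=0.02):
    """Monthly storage bill in dollars for a given capacity in TB."""
    return capacity_tb * 1_000 * price_per_gb_month

# A 50 TB repository is a modest line item...
print(f"50 TB: ${monthly_storage_cost(50):,.0f}/month")         # $1,000/month
# ...but 2 PB at the same rate runs close to half a million a year.
print(f"2 PB:  ${monthly_storage_cost(2_000) * 12:,.0f}/year")  # $480,000/year
```

The linear per-gigabyte pricing that feels cheap at terabyte scale is exactly what becomes punishing at petabyte scale.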

Private Cloud vs. Public Cloud: Infrastructure Costs

Understanding infrastructure costs is key to understanding potential cost savings in the cloud or on-premises. It’s true that hosting your own hardware can be expensive. You have to pay for hardware, power, cooling, facilities, and engineers to maintain everything. Cloud hosting solutions benefit from the economies of scale and the extreme efficiencies that only hyperscale data centers can provide. But remember, your hosting provider still has to pay operational and capital expenses as well, and they’re not giving them away for free. They’re baked into your monthly hosting charges and resold to you at a markup. For smaller systems, the cost equation usually works out in your favor. But for larger, permanently installed systems, it may make better financial sense to host them on-premises, while still using the public cloud for bursting, specific services, or complementary applications.
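The trade-off described above is essentially a break-even calculation. The sketch below uses entirely made-up placeholder figures; the point is the shape of the comparison, not the numbers.

```python
# Hypothetical break-even sketch: buying hardware for a permanently
# installed system vs. renting equivalent capacity in the public cloud.
# All dollar figures are illustrative placeholders, not real quotes.

onprem_capex = 250_000        # one-time hardware purchase
onprem_opex_monthly = 5_000   # power, cooling, share of staff time
cloud_monthly = 18_000        # equivalent instances, storage, egress

# Find the first month where cumulative on-prem spend drops below
# cumulative cloud spend.
month = 1
while onprem_capex + onprem_opex_monthly * month >= cloud_monthly * month:
    month += 1

print(f"On-premises becomes cheaper after month {month}")  # month 20
```

For a short-lived project the cloud wins outright; for a system that runs for years, the up-front capital cost is amortized away and the recurring cloud bill keeps accruing.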

Most companies choose a hybrid cloud deployment for all of these reasons.

Use Case: What a Successful Hybrid Cloud Looks Like

Hybrid cloud isn’t necessarily about separating workloads between private cloud vs. public cloud. Let’s take a closer look at how a hybrid cloud works in a particular application use case where you can take best advantage of each platform.

Suppose you have an application that requires a petabyte or more of storage capacity.  As I mentioned, data repositories this large are generally not a good candidate for cloud hosting due to the cost.  However, there are some very attractive APIs and services in the public cloud that you could use to build your user interface.

In a hybrid cloud deployment, you can have the best of both worlds by hosting your data locally, while the front-end services that provide access to the data are hosted in the cloud.

Remember that uploads to the cloud are generally exempt from bandwidth charges.  This means you can upload your data, process it, retrieve the result, then delete it to avoid the storage fees.  This approach gives you access to the full array of APIs and cloud services while allowing you to maintain control of your data and keep costs to a minimum. It saved our company millions of dollars (you can hear our CIO speak about our hybrid analytics infrastructure).
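The upload-process-delete pattern described above can be sketched as follows. The client here is any S3-style client (for example, boto3’s); the bucket name, object keys, and the elided processing step are all hypothetical placeholders.

```python
# Sketch of the upload-process-delete pattern: stage data in the cloud
# only as long as it takes to process it. `client` is any S3-style
# client; names and keys below are hypothetical placeholders.

def process_in_cloud(client, local_path, bucket="example-staging-bucket"):
    key = "staging/dataset.bin"
    result_key = "staging/result.bin"

    # 1. Upload: data ingress is typically exempt from bandwidth charges.
    client.upload_file(local_path, bucket, key)

    # 2. Process: trigger whatever cloud service consumes the object
    #    (a batch job, a serverless function, etc.) -- elided here.

    # 3. Retrieve the result...
    client.download_file(bucket, result_key, "result.bin")

    # 4. ...then delete the staged objects so storage fees stop accruing.
    client.delete_object(Bucket=bucket, Key=key)
    client.delete_object(Bucket=bucket, Key=result_key)
```

Because the data lives in the cloud only transiently, you pay for compute and APIs while avoiding the standing petabyte-scale storage bill.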

This type of deployment makes it easier to use a multi-cloud approach as well.  By hosting the data locally, you can migrate your compute resources from one hosting provider to another, without worrying about migrating the data.  The result is an extremely agile multi-cloud environment, where you have access to the widest array of cloud services possible, while cutting costs and protecting yourself from vendor lock-in all at the same time.

Many Ways to Approach a Private Cloud

There are many ways to approach an on-premises cloud, and our own deployment and recommendation is the ActiveScale™ object storage system. ActiveScale’s S3-compatible interface makes it easy to integrate your cloud-based applications with locally hosted storage. It provides petabytes of capacity and up to nineteen nines of data durability with extreme cost efficiency — most often a far more economical choice than the cloud and, in some cases, even tape!

If you’re looking to understand hybrid cloud deployments, how to save on costs and improve efficiencies, check out these helpful resources:

• Solution brief: Drive Competitive Advantage With Private Cloud at the Edge

• Video: ESG Perspective on ActiveScale Object Storage System

• Case study: Driving Intelligence in the Smart Factory with Object Storage


Mike is a Senior Technologist for Western Digital and specializes in performance tuning and storage optimization for big data customers.