Software may have eaten the world, but thanks to DevOps’ focus on scalability and the need for continuous deployment, containers are taking a bite of that software dominance. Containers are a way of packaging applications, or even individual services, so that they are fast and easy for developers to deploy, while maximizing performance and utilization per server for administrators. That’s a good thing for businesses that need to scale their services at Internet speed, and it’s why containers have become so popular.
Containers are a great way of running things like web servers, microservices, and other applications that need to scale dynamically. But one kind of application hasn’t really taken off in containers: the databases that store all those applications’ information (transaction histories, user accounts, warehouse inventories, and more). These databases have two essential requirements: permanent, highly available storage and extremely high I/O performance. Containers, for all their strengths, often can’t meet those requirements on their own.
Containers and Databases: Advantages
Containers operate differently from traditional virtualization technologies. Instead of isolating applications in virtual machines that each run a complete operating system, containers share a single operating system kernel and isolate applications from one another in their own user-space environments. The difference may sound subtle, but its impact is not: containers let you run significantly more applications per server than virtualization does, making better use of the hardware. Container hosts also have only a single operating system image to keep patched and upgraded, reducing the operational work of keeping things safe and secure. Finally, the applications running in those containers are built to scale and deploy easily, allowing for real-time response to changing business needs.
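To make that model concrete, here is a minimal sketch using the Docker SDK for Python (docker-py). It assumes Docker and the SDK are installed on the host; the image name, container names, and resource limits are purely illustrative:

    # Several isolated containers sharing one host kernel, each with its own
    # resource limits -- in contrast to booting a full VM per application.
    import docker

    client = docker.from_env()

    # Launch a few lightweight, isolated services on the same host OS.
    for name in ("web-1", "web-2", "web-3"):
        client.containers.run(
            "nginx:alpine",          # illustrative image
            name=name,
            detach=True,
            mem_limit="256m",        # per-container memory cap
            cpu_quota=50000,         # roughly half a CPU per container
        )

    # All three containers share the host kernel; only one OS image to patch.
    for c in client.containers.list():
        print(c.name, c.status)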
Containers and Databases: Limitations
One of the most significant limitations of containers is that, by default, they have no permanent storage: a container’s storage is temporary, or “ephemeral,” and any data written inside it disappears when the container stops or is rescheduled. Ephemeral storage is part of what makes containerized applications easy to scale and migrate, but it also makes them hard to use for “stateful” applications like databases, which must never lose their data.
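The sketch below illustrates the difference, again assuming Docker and the docker-py SDK; the image, container names, volume name, and password are placeholders. One database container runs on ephemeral storage, while the other is backed by a named volume that survives restarts and removal:

    import docker

    client = docker.from_env()

    # Ephemeral: data written to /var/lib/postgresql/data is lost when the
    # container is removed or rescheduled.
    client.containers.run("postgres:15", name="db-ephemeral", detach=True,
                          environment={"POSTGRES_PASSWORD": "example"})

    # Persistent: a named volume outlives the container, which is what a
    # stateful database actually needs.
    client.volumes.create(name="pgdata")
    client.containers.run(
        "postgres:15",
        name="db-persistent",
        detach=True,
        environment={"POSTGRES_PASSWORD": "example"},
        volumes={"pgdata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
    )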
Containers also have a reputation for lower performance on I/O-intensive applications, due to the way they interact with the server’s file system and their own ephemeral storage. Database I/O operations need low latency to deliver good performance to applications, but when a database runs in a container, each read and write must pass through additional layers of code, and every layer adds delay before the data actually reaches storage.
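One way to see this effect is to time the small synchronous writes that a database’s transaction log produces. The Python sketch below is a rough illustration, not a formal benchmark; running it on the host and then inside a container writing to ephemeral storage shows the extra per-write latency those layers introduce:

    import os, time

    def fsync_write_latency(path, writes=1000, size=4096):
        """Return average latency in ms for small fsync'd writes to `path`."""
        buf = os.urandom(size)
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
        start = time.perf_counter()
        for _ in range(writes):
            os.write(fd, buf)
            os.fsync(fd)        # force the write through every layer to disk
        elapsed = time.perf_counter() - start
        os.close(fd)
        return elapsed / writes * 1000

    print("avg fsync latency: %.3f ms" % fsync_write_latency("testfile.bin"))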
Containers and Databases: A Powerful Solution
Thankfully, there is a proven way to run high-performance databases in containers. Container-Native Storage (CNS) runs on the same hardware that runs the containers and combines the servers’ hard drives and SSDs into storage that is persistent and highly available to containerized databases.
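In a Kubernetes or OpenShift environment, a containerized database typically consumes this kind of storage by claiming a volume from a storage class. The sketch below uses the official Kubernetes Python client; the storage class name “glusterfs-storage,” the claim name, and the size are assumptions for illustration, not a prescribed configuration:

    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    # Request persistent, replicated storage for a database from a
    # CNS-provided storage class (name assumed for this example).
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="db-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="glusterfs-storage",
            resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
        ),
    )

    # The database pod then mounts the claim "db-data"; the data outlives any
    # individual container or node.
    core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)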
Making this storage not only persistent but also fast is where the combination of Red Hat CNS and Western Digital SSDs comes into play. Red Hat’s optimized technology makes it easy to deploy a highly available, scalable storage system for containers. Western Digital’s family of high-performance, enterprise-class SSDs provides the low latency and high bandwidth that storage system needs to support these demanding database applications.
Learn more
Get the full story on how the combination of Western Digital storage devices and Red Hat Container-Native Storage can run your high-performance databases in containers: Download the Principled Technologies Study or view our webinar.