Object storage has been around for a few years now, and some people think all object storage is the same. Not so. Many of the original design criteria are shared: high scalability, moderate performance, low cost, and ease of management. How those capabilities are implemented is another story. Here are five ways ActiveScale™ is better object storage.
The roots of HGST’s object storage go back to 2008, and those design points carry through to today’s ActiveScale object storage. Five technologies define how ActiveScale achieves its scale, performance, cost, and management goals:
- Dynamic data placement
- Metadata that scales
- Cloud-based storage analytics
- Vertical innovation and integration
- Eternal storage architecture
Dynamic data placement
Dynamic data placement is a flexible way of breaking an object into chunks and protecting those chunks of data so as to deliver the best performance, availability, and durability. It works well not only when everything is healthy, but also during system maintenance and problems. Failures are where a system is really tested, because all systems fail, whether it’s a disk drive, a connection, or a card. When a disk drive fails in a static system, every drive that supports the affected object – say 18 HDDs – becomes unavailable, since a fixed configuration needs each element to be operational. By contrast, if a drive fails in an ActiveScale system, only that one drive is unavailable and the system continues using the remaining good drives. Even if an entire chassis fails, data can still be written with full protection to the remaining available chassis.
You also need to know what happens to your data while you are repairing or maintaining your system. In a static design, the typical impact of failure or maintenance is degraded performance.
With dynamic data placement, data continues to be written to the available resources, so operations not only continue, they continue unimpeded.
The same applies when you add capacity, whether in a scale-up or scale-out configuration. Some systems must rebalance because their static layout can’t adapt, and they operate in degraded mode while doing so. Not so for ActiveScale: additional capacity can be added without degraded performance. That’s better object storage.
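The idea behind dynamic placement can be sketched in a few lines. This is a hypothetical illustration, not ActiveScale’s actual algorithm: it simply chooses a distinct healthy drive for each erasure-coded chunk, so a failed drive never blocks new writes.

```python
import random

def place_chunks(object_id, num_chunks, drives, failed):
    """Dynamically place erasure-coded chunks on healthy drives only.

    Hypothetical sketch: a real system would also balance load, spread
    chunks across enclosures, and record the placement in metadata.
    """
    healthy = [d for d in drives if d not in failed]
    if len(healthy) < num_chunks:
        raise RuntimeError("not enough healthy drives for full protection")
    # Each object gets its own selection of distinct healthy drives
    chosen = random.sample(healthy, num_chunks)
    return {f"{object_id}/chunk{i}": drive for i, drive in enumerate(chosen)}

drives = [f"hdd{i}" for i in range(24)]
placement = place_chunks("obj-42", 18, drives, failed={"hdd3", "hdd7"})
# Writes continue at full protection: no chunk lands on a failed drive
assert not {"hdd3", "hdd7"} & set(placement.values())
```

Because the placement is decided per object at write time, a static mapping from object to drives never exists, so there is nothing to rebalance when drives fail or capacity is added.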
Metadata that scales
Another way that object storage differs from traditional storage is its use of metadata to keep track of objects and their chunks of data, instead of a hierarchical structure like file and block storage. Metadata that scales avoids static data placement and the scaling issues it imposes.
What’s the problem with static data placement? It lets the system avoid using metadata for erasure coding, since the addresses of the data chunks can be calculated precisely. However, it undermines one of the key advantages of erasure coding in object storage: high scalability. The most common object storage interface, Amazon’s S3, requires metadata, so systems that don’t need it for addressing implement it for compatibility, and often end up not quite compatible. You might also end up with two versions of metadata to keep current, which can be a challenge in a rapidly growing system.
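The contrast between the two approaches can be sketched as follows. This is an illustrative toy, not any vendor’s implementation: static placement computes a chunk’s location from its key, so growing the drive count changes the computed addresses of existing data, while metadata-based placement simply records where each chunk actually lives.

```python
import zlib

def static_drive_for(key, num_drives):
    """Static placement: location is computed, not recorded, so no
    metadata lookup is needed -- but if num_drives changes, the
    computed address of most existing chunks changes with it."""
    return zlib.crc32(key.encode()) % num_drives

keys = ["obj-1/chunk0", "obj-1/chunk1", "obj-2/chunk0", "obj-2/chunk1"]
before = {k: static_drive_for(k, 18) for k in keys}
after = {k: static_drive_for(k, 19) for k in keys}   # one drive added
moved = [k for k in keys if before[k] != after[k]]   # typically most keys

# Metadata-based placement records the actual location instead, so
# adding drives changes nothing for data already written:
metadata = {"obj-1/chunk0": "hdd4", "obj-1/chunk1": "hdd9"}

def lookup(chunk_id):
    return metadata[chunk_id]
```

The trade-off named in the text falls out of the sketch: the static scheme needs no metadata store, but pays for it with forced rebalancing at scale; the metadata scheme pays a lookup but leaves existing placements untouched as the system grows.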
Cloud-based storage analytics
Managing petabyte-scale storage can also be a challenge. Cloud-based storage analytics let you keep track of all the data and assets in your object store more efficiently. Many traditional storage management approaches manage each rack of storage separately. That works fine for smaller configurations, but at petabyte scale you need a new toolset to get the job done effectively and efficiently. ActiveScale cloud management is a cloud-based capability included with every ActiveScale system. It manages not just one rack of storage but every rack in the namespace, even when the namespace is geo-dispersed across multiple racks, and indeed it manages all of your ActiveScale namespaces. You get predictive warnings based on trend analysis of your systems, a deeper understanding of your users and storage so you can better manage your service-level agreements, and the ability to compare current conditions to, say, last quarter or last year, to decide whether a trend is emerging or a one-time incident should be treated differently. That’s smarter object storage.
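To see what a predictive warning based on trend analysis might look like, here is a minimal sketch (a hypothetical calculation, not ActiveScale’s analytics engine): fit a simple linear trend to daily used-capacity samples and extrapolate to when the system fills up.

```python
def days_until_full(samples_tb, capacity_tb):
    """Hypothetical predictive-warning sketch: least-squares slope of
    daily used-capacity samples, extrapolated to the capacity limit."""
    n = len(samples_tb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples_tb) / n
    slope = sum((x - mean_x) * (y - mean_y)
                for x, y in zip(xs, samples_tb)) / \
            sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None  # usage flat or shrinking: nothing to warn about
    return (capacity_tb - samples_tb[-1]) / slope

# Growing ~1 TB/day toward a 100 TB system: full in about 6 days
print(days_until_full([90, 91, 92, 93, 94], 100))  # -> 6.0
```

Comparing slopes computed over different windows (this quarter versus last quarter, say) is one simple way to distinguish an emerging trend from a one-time spike.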
Vertical innovation and integration
Vertical integration is another benefit of our deep portfolio of storage components. Because we understand the workloads and device characteristics, we can make informed decisions about the best components for the job. We also apply that understanding when we design the enclosures, so the vibration, cooling, and mechanical design make the most of the underlying storage technology. This is particularly important if you are considering a software-defined object storage solution: many people underestimate the time and effort needed to design, build, and maintain the hardware in such a system. Software-defined storage on commodity hardware has already been created for ActiveScale; there’s no reason to reinvent it!
Eternal storage architecture
Finally, the nature of how we manage our data is changing: we don’t want to throw anything away, and now there is an economical way to achieve that goal. ActiveScale is designed as an eternal storage architecture to protect your data for a very long time, with capabilities such as graceful decommissioning of old storage, seamless upgrades, and extreme scalability. When building an object storage architecture, you should be thinking about how to build the system for the long term, as ActiveScale has done.
Better Object Storage
There’s object storage, and there’s better object storage. If you’d like to find out more about ActiveScale systems and object storage, check us out at http://www.hgst.com/products/systems
New to object storage?
Learn why object storage is an alternative storage solution and its role in cost-effectively delivering data at scale. Read my latest article Using Object-based Storage to Replace File-Based NAS Architectures on Data Center Knowledge.