What is Composable Disaggregated Infrastructure?

The term disaggregation is not new in data centers, particularly for hyperscale organizations who have adopted disaggregation to improve hardware efficiency at scale. Composable Disaggregated Infrastructure takes the concept one step further with a focus on hardware flexibility and responsiveness powered by software, to make IT infrastructure significantly easier to automate.

In this blog I’ll share:

  • Current data center challenges
  • What is Composable Disaggregated Infrastructure
  • How Composable Disaggregated Infrastructure works
  • The role of fabrics
  • An example solution

Current IT and Data Center Challenges

At its core, the modern data center suffers from poor staff efficiency. IT professionals waste their time on manual processes and routine tasks that could be automated. Any change to the underlying technology to meet business demands pulls staff away from other work while they address the problem manually. This lack of automation leads to overprovisioning of technology resources, such as storage, so that IT doesn’t need to touch the equipment as often.

Two recent IDC studies of enterprise data center managers (August 2018, March 2017) revealed this about their organizations:

  • Efficiency: Median staff efficiency in enterprise IT is just 55%, due to time spent on routine operations tasks for lack of automation and the number of steps (many performed manually) needed to complete them.[1]
  • Rigidity: Median technology efficiency in enterprise IT is just 50%, due to silos of constant underutilization that are required to ensure capacity and performance SLAs are met.[1]

Overprovisioning is commonplace for redundancy and resiliency, and it creates significant idle hours during which the infrastructure is not servicing any applications.

The result is decreased confidence in the ability of their IT infrastructure to support the business. In fact, only 40% felt that their IT infrastructure was able to comply with the stated service-level agreements (SLAs).[1]

Regardless, IT still needs to deliver increasingly fast support for rapidly changing business requirements, all while mitigating risk to the business. As is commonly heard, “data centers must evolve,” not just to reduce CAPEX and OPEX but, equally important, to deliver the agility needed to support and drive business strategy and the bottom line.

What is Composable Disaggregated Infrastructure?

The term disaggregation is not new in data centers, and different vendor groups have been chasing the concept for some time. The idea behind Composable Disaggregated Infrastructure (CDI) is to leave behind pre-integrated silos of compute, networking and storage and replace them with something that is more flexible, more responsive and significantly easier to automate.

The disaggregation occurs at the hardware level. By creating pools of network, storage and compute resources, those resources can be easily, and even automatically, provisioned (and deprovisioned) to applications through software. This allows more control over the real-time allocation of resources, so each application receives optimized levels of processing, storage and networking, scaled independently of one another.

How Composable Disaggregated Infrastructure Works

During a recent AllTech Media virtual event, Narayan Venkat, vice president of Marketing for Data Center Systems, tackled composable infrastructure as an emerging innovation. He shared how disaggregated hardware can be composed, or taken apart, on the fly with Composable Disaggregated Infrastructure. The goal of CDI is to deliver greater productivity and agility, improved utilization and faster provisioning, while at the same time improving availability and performance.

When a new workload is ready to move into production, instead of purchasing new servers, new networking and a new storage system, the IT operations team provisions the resources dynamically to the workload from an available pool of resources.

Think of CDI as having all the components of the data center on a digital shelf. The automation software then automatically assembles those components dynamically into a complete solution for the workload. As the workload goes further into production, more resources are added dynamically. IT does not need to provision them upfront. Finally, if the workload reaches end of life, the technology components are put back on the digital shelf – to be made available for the next workload that requires them.
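As a rough illustration of that digital-shelf idea, the sketch below models a shared pool of resources that composition software could assemble for a workload, grow as demand increases and return when the workload is retired. The shelf contents, workload name and unit sizes are made up for illustration; this is not a real product API.

```python
# Minimal "digital shelf" lifecycle sketch. All names and capacities here
# are hypothetical; a real CDI stack would do this through its own API.

shelf = {"compute_cores": 512, "flash_tb": 300, "network_gbps": 400}
composed = {}  # resources currently assembled per workload

def compose(workload, **request):
    """Pull the requested resources off the shelf for a new workload."""
    for resource, amount in request.items():
        if shelf.get(resource, 0) < amount:
            raise RuntimeError(f"not enough {resource} left on the shelf")
    for resource, amount in request.items():
        shelf[resource] -= amount
    composed[workload] = dict(request)

def grow(workload, resource, amount):
    """Add capacity dynamically as the workload ramps up in production."""
    if shelf.get(resource, 0) < amount:
        raise RuntimeError(f"not enough {resource} left on the shelf")
    shelf[resource] -= amount
    composed[workload][resource] = composed[workload].get(resource, 0) + amount

def decompose(workload):
    """Return everything to the shelf when the workload reaches end of life."""
    for resource, amount in composed.pop(workload).items():
        shelf[resource] += amount

compose("analytics", compute_cores=32, flash_tb=20, network_gbps=25)
grow("analytics", "flash_tb", 10)   # growth without buying new hardware
decompose("analytics")              # resources go back on the shelf
```

The point is the shape of the workflow: allocate, scale and release become software operations against shared pools rather than hardware purchases.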

The Role of Fabrics

Making CDI work requires a flexible yet high-performance network with very low latency. NVMe™-over-Fabrics (NVMe-oF™) has emerged as an ideal network for CDI because it delivers both high performance and low latency while remaining an industry standard. It can also run on both traditional Fibre Channel switches and IP switches.

Server hardware is already relatively composable thanks to hypervisor and container technology. The remaining component, the storage itself, should be a shareable storage system based on either flash for high performance or disk for high capacity. Both options should be accessible over the same network and controlled by the same software.
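To make the fabric side a little more concrete, here is a minimal sketch of how a Linux host might discover and attach a fabric-attached NVMe device using the standard nvme-cli tool over NVMe/TCP. The address, port and subsystem NQN are placeholders, not values from any particular product.

```python
import subprocess

# Sketch only: discover and attach an NVMe-oF subsystem from a Linux host
# with nvme-cli installed (typically run as root). The address, port and
# NQN below are placeholders, not real OpenFlex values.

TRANSPORT = "tcp"            # NVMe/TCP; RDMA and Fibre Channel are also options
TARGET_ADDR = "192.0.2.10"   # IP address of the fabric-attached storage device
TARGET_PORT = "4420"         # common NVMe-oF service port
SUBSYS_NQN = "nqn.2023-01.com.example:flash-pool-1"  # hypothetical subsystem NQN

# Ask the target which NVMe subsystems it exposes over the fabric.
subprocess.run(
    ["nvme", "discover", "-t", TRANSPORT, "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True,
)

# Attach one subsystem; it then shows up on the host as a local /dev/nvmeXnY
# block device that composition software can hand to a workload.
subprocess.run(
    ["nvme", "connect", "-t", TRANSPORT, "-a", TARGET_ADDR,
     "-s", TARGET_PORT, "-n", SUBSYS_NQN],
    check=True,
)
```

Because the attach is just software, the same device can later be detached (for example with `nvme disconnect -n <nqn>`) and handed to a different host without anyone touching a cable.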

If you’re not familiar with NVMe-oF, take a moment to read this NVMe-oF explainer blog to understand the various options.

Composable Disaggregated Infrastructure – OpenFlex™ Architecture

Our approach to Composable Disaggregated Infrastructure is the OpenFlex architecture which includes our solution for disaggregating the storage component as well as an Open Composable API.

Our OpenFlex architecture is initially available for flash storage: the OpenFlex F3000 Series Fabric Device is an NVMe-oF-attached flash storage device, ranging in capacity from 15TB to 30TB.[2]

Up to 10 of these F3000 devices can be placed in the OpenFlex E3000 Fabric Enclosure, enabling flash storage to scale by adding blades to the enclosure or by adding more enclosures. Each blade connects independently to the fabric and is controllable by software. These resources are allocated to and released from applications dynamically, and the process can be automated so that IT staff don’t need to be physically involved in every move, add or change.

Our Open Composability API is a RESTful API that builds upon existing industry standards such as Redfish® and Swordfish™ and is intended to orchestrate all data center elements, including compute, flash, disk, network, accelerators and disaggregated memory. The idea is to create a flexible architecture that is not only relevant for today’s data center environments but is also extensible to support emerging data center elements. Composability should, and likely will, be an industry-wide effort, making this transition as easy as possible for organizations. That’s why the Open Composability API will be made publicly available to enable and drive vendor-neutral solutions.
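The published Open Composability API schema isn’t covered here, so the sketch below simply shows the general shape of a Redfish-style RESTful exchange: read the service root to see what the composer exposes, then POST a desired binding of storage to a compute node. The endpoint, resource paths and payload are assumptions for illustration only.

```python
import requests

# Illustrative Redfish-style REST exchange. The host, resource paths and
# payload are hypothetical, not the published Open Composability API.

BASE = "https://composer.example.com/redfish/v1"   # hypothetical endpoint
AUTH = ("admin", "password")                        # use real credentials or tokens

# Management endpoints often use self-signed certificates; verify TLS
# properly in production instead of disabling verification as done here.
root = requests.get(f"{BASE}/", auth=AUTH, verify=False).json()
print(root.get("Name"), list(root.keys()))

# Hypothetical composition request: bind fabric-attached flash capacity
# to a compute node by POSTing the desired binding to a collection.
binding = {
    "Host": f"{BASE}/Systems/compute-07",
    "Storage": f"{BASE}/Storage/flash-pool-1",
    "CapacityGiB": 2048,
}
resp = requests.post(f"{BASE}/CompositionService/Bindings",
                     json=binding, auth=AUTH, verify=False)
resp.raise_for_status()
print("Composed:", resp.status_code)
```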

A Change in the Data Center Requires a Change in Thinking

Change has always been the one constant in the data center, and the traditional answer has been to over-provision resources so that the infrastructure can absorb whatever changes occur. The challenge today is that change happens much more frequently, and the organization expects those changes to be implemented immediately. IT can no longer afford to over-provision its way through change management. CDI enables an organization to respond to changes almost instantly while reducing costs, helping IT align better with the business and drive real-world results.

Learn More About CDI and OpenFlex

Join me for an upcoming webinar about the architecture of the OpenFlex product. I will talk about how composability and NVMe-oF address the dynamic nature of today’s IT application environment, the challenges in using server-based NVMe devices and how OpenFlex solutions accelerate workloads and improve IT asset ROI.

[1] IDC, “Quantifying Datacenter Inefficiency: Making the Case for Composable Infrastructure,” Ashish Nadkarni, March 2017

[2] One TB equals one trillion bytes when referring to storage capacity. Accessible capacity will vary from the stated capacity due to operating environment.
