In the past several years our industry has seen the emergence of the “Software-Defined” concept. Where previously resource management, policy, and data management were embedded within hardware products, both Software-Defined Networking and Software-Defined Storage advocated for management policies to be separated from the hardware components themselves, enabling software to coordinate access to a variety of hardware capabilities in a manner that is optimal for the applications and the data center itself. As data centers scale to hundreds of thousands of machines and multiple geographies, software-defined technologies help users and administrators harness available resources holistically and dynamically at massive scale.
While the software-defined approach was being developed for network and storage technologies, the underlying memory technologies in the data center have been undergoing a transformation of their own. Flash has already transformed the data center, improving application performance and reducing infrastructure costs through greater server consolidation. As flash pushes further into lower cost (and in some cases lower endurance), the flash tier itself is bifurcating as different flash products based on MLC (Multi-Level Cell) and TLC (Triple-Level Cell) are driven towards write-heavy or read-heavy usages and architected into forms of hybrid storage for mixed usages. The role of DRAM is also changing, with DRAM/Flash hybrids being used to offset requirements for more expensive DRAM. Finally, with NVDIMMs and the promise of future technologies such as ReRAM or Phase Change Memory, we are seeing excitement build for a new class of memory: persistent memory, which combines the persistence capabilities of storage with access performance similar to memory.
Given this richness of media technologies, we now have the ability to create systems and data center solutions which combine a variety of memory types to accelerate applications, reduce power, improve server consolidation, and more. We believe these trends will drive a new set of software abstractions for these systems which will emerge as software-defined memory – a software driven approach to optimizing memory of all types in the data center.
We at SanDisk® have developed a suite of software technologies that demonstrate the power of software-defined memory. SanDisk’s software-defined memory includes our Non-Volatile Memory File System (NVMFS) and our Auto-Commit Memory (ACM) software and hardware for byte-addressable persistent memory. Together, NVMFS and ACM tier multiple memory sources, from Flash to Persistent Memory, providing both transparent acceleration for legacy applications and optimized integration interfaces for applications written to take advantage of them. We will be walking visitors through these software-defined memory technologies and concepts at Oracle OpenWorld this week, and look forward to future advances in the industry to drive this category of memory-optimized solutions.
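The ACM interfaces themselves are not detailed in this post, but the byte-addressable persistence model behind them can be sketched generically. The snippet below is a minimal illustration, not SanDisk's API: a regular file and `mmap` stand in for a persistent-memory region, and the `pmem.bin` path and 4 KB size are placeholders. It shows the two-step pattern an optimized application follows: an ordinary memory store into the mapped region, then an explicit commit point that makes the write durable.

```python
import mmap
import os

# Placeholders standing in for a persistent-memory region.
PATH = "pmem.bin"
SIZE = 4096

fd = os.open(PATH, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)

with mmap.mmap(fd, SIZE) as pm:
    # Byte-addressable store: a plain memory write, no read()/write() syscalls.
    pm[0:9] = b"committed"

    # Explicit durability point (msync under the hood). On real persistent
    # memory, CPU cache-flush instructions would play this role instead.
    pm.flush()

    data = bytes(pm[0:9])

os.close(fd)
os.unlink(PATH)
print(data.decode())  # -> committed
```

The key design point this illustrates is that persistence becomes a memory-semantics operation: the application commits data by flushing a mapped range, rather than by pushing it through a block-storage I/O stack.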
Nisha Talagala is a Fellow at SanDisk, where she works on innovation in non-volatile memory technologies and applications and leads the Advanced Technology Group. Nisha has more than 10 years of expertise in software development, distributed systems, storage and I/O solutions, and non-volatile memory. She has worked as technology lead for server flash at Intel, where she led server platform non-volatile memory technology development, storage-memory convergence, and partnerships. Prior to Intel, Nisha was the CTO of Gear6, where she designed and built clustered computing caches for high-performance I/O environments. Nisha also served at Sun Microsystems, where she developed storage and I/O solutions and worked on file systems. Nisha earned her PhD at UC Berkeley, where she did research on clusters and distributed storage. Nisha holds more than 30 patents in distributed systems, networking, storage, performance, and non-volatile memory.