VMware Swap to Host Cache Experiment Using SanDisk SSDs
In the coming series of blog posts, I will examine various other scenarios where SSDs can benefit VDI and virtualized environment workloads, such as Swap to Host Cache, VDI boot storms and VDI admin operations. These blog posts will be followed by detailed white papers providing helpful guidelines and technical considerations for deploying SSDs to achieve the greatest performance benefits and cost efficiencies in virtualized environments.
In this blog post I will demonstrate how SanDisk SSDs can significantly accelerate virtualization performance, particularly when VMware administrators are striving to increase VM density while maintaining application service level agreements (SLAs).
Over-committing memory (i.e., when the total memory allocated to the VMs running on a vSphere host exceeds the physical memory on that host) is a common practice in VMware environments. VMware provides several advanced memory management technologies, such as Transparent Page Sharing (TPS), ballooning, compression and memory swapping, to manage memory over-commitment. When memory swapping occurs, its impact is many-fold compared with a physical environment, because many virtual machines (VMs) are running on the host: applications in every VM degrade drastically, whereas on a physical host only one application would suffer.
Over-committing memory gives the VMware administrator an opportunity to increase VM density and reduce the cost per VM. However, if application SLAs are not met, the feature adds little value to the deployment strategy. The silver bullet would be to over-commit memory by adding as many VMs as possible, yet control application performance degradation so that the applications' SLAs are still met. This dual benefit of increased VM density and meeting application SLAs improves the Total Cost of Ownership (TCO) and Return on Investment (ROI) of a virtualized environment.
VMware vSphere’s “Swap to Host Cache” feature addresses exactly this need. Using SanDisk SSDs as the memory swapping area for these VMs ensures that swapping is fast enough not to severely impact application performance, which in turn helps increase VM density. Note that “Swap to Host Cache” is an optional feature that can only be configured on SSDs; it is not permitted on HDDs.
The diagram below depicts the difference between traditional swap to an HDD and Swap to Host Cache using an SSD.
We, at SanDisk, carried out an experiment to validate this feature, comparing the results with and without this optional feature configured.
We carried out this experiment by running the DVD Store SQL v2.1 workload, an online e-commerce load generator tool, inside VMs. We generated artificial memory pressure on an ESXi host by running another VM, named “memtest ISO”, which consumes all the memory assigned to it. This caused memory swapping on the host, and we then measured the impact on Operations per Minute (OPM) of the DVD Store SQL v2.1 workload (similar to transactions per minute for an OLTP workload) running inside the other VMs on the same host.
Further, we measured how the OPM number decreases as memory pressure keeps growing on the host. We saw that if the “Swap to Host Cache” option is not configured, application performance degradation is severe.
The graph below shows the impact on OPM when memory over-commitment is at 3.4x. This over-commitment ratio was calculated using the following formula:
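The formula image does not survive here, but the standard definition of the memory over-commitment ratio is the total memory configured across the powered-on VMs divided by the physical memory of the host. A minimal sketch of that calculation, using hypothetical VM sizes chosen only to illustrate a 3.4x ratio:

```python
# Hypothetical example: memory over-commitment ratio on a single host.
# Assumes the standard definition (total configured VM memory divided by
# physical host memory); the original formula image is not reproduced here.

def overcommit_ratio(vm_memory_gb, host_memory_gb):
    """Return total configured VM memory divided by physical host memory."""
    return sum(vm_memory_gb) / host_memory_gb

# Illustrative numbers only: four VMs on a 64 GB host.
vms = [64, 64, 48, 41.6]
ratio = overcommit_ratio(vms, 64)
print(f"Over-commitment: {ratio:.1f}x")  # Over-commitment: 3.4x
```

A ratio above 1.0x means the host's memory is over-committed; at 3.4x, the VMs are collectively configured with 3.4 times the host's physical memory.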
As memory pressure increased, “Swap to Host Cache” became critical in reducing the impact of over-commitment. The graph shows the OPM values with and without “Swap to Host Cache” configured. It can be seen that with Swap to Host Cache configured, application performance is 4x better than without it.
This shows that configuring VMware vSphere’s “Swap to Host Cache” with SanDisk SSDs can, to a great extent, control the application performance degradation caused by swapping, even under significant memory pressure.
Though this experiment was carried out on a single host, the results can easily be extrapolated to a clustered environment (many hosts). For the storage administrator, this means you can leverage memory over-commitment to increase VM density while still maintaining overall application performance at a given SLA level.
From a business perspective, the cost per VM can be significantly reduced by configuring this feature, resulting in both CapEx and OpEx savings.
To learn more about SanDisk solutions for virtualization and VDI workloads, visit our website. If you have any questions, you can reach me at biswapati.Bhattacharjee/at/sandiskoneblog.wpengine.com, or join the conversation on Twitter with @SanDiskDataCtr.
Biswapati brings over 12 years of experience in the IT industry. He has been involved with the virtualization industry for more than 8 years and has held different roles in quality engineering, performance benchmarking, pre-sales technical and solution architecting. At SanDisk, he is responsible for building solutions, reference architectures, deployment guides and best practices for enterprise applications (Tier I and II) using enterprise SSD, ULLtraDIMM and flash-based software products on virtualization platforms. Prior to SanDisk, Biswapati was contracting with VMware and was part of the VMware Alliances team. During this time he worked closely with many VMware ISV partners in the areas of desktop, server, BCDR and cloud solutions, on performance and functional validation of ISV products. Biswapati was a speaker at the VMworld 2013 session “Application performance in VDI environment.” He received a Bachelor of Engineering from the National Institute of Technology, India.