Virtual Desktop Infrastructure (VDI) is a way of accessing desktops that run remotely in a data center over a remoting protocol. There are many vendors in this space providing various solutions. Two important areas where companies differentiate their VDI solutions are networking and storage.
In this blog post I would like to talk about the evolution of VDI and share how flash plays an important role in VDI solutions.
I would categorize the evolution of VDI into three phases:
VDI 1.0

This was an early-stage, basic approach to VDI that was not broadly adopted by enterprises. Companies were still getting familiar with the technology and, as a result, kept VDI to non-critical applications. Adoption was mostly for call-center applications, virtualizing one application per desktop. At this stage the footprint and configuration of the desktops were fairly small, so running a few desktops (virtual machines) in a data center did not consume many resources (compute, storage, and network).
These deployments placed no heavy demands on storage I/O, throughput, or latency, so spinning-media storage was generally good enough to serve user needs and deliver an acceptable experience.
VDI 1.0 was the first attempt to apply breakthrough virtualization technology to desktop computing, although the average desktop VM costs were similar to those of server workload VMs.
VDI 2.0

This is the current generation of VDI, which started about 2-3 years ago and will likely continue for a few more years. VDI 2.0 should also become the baseline for next-generation VDI; I will share some of my thoughts about that in the next section.
As enterprises evaluated VDI 1.0 and realized the benefits of security, accessibility, flexibility and manageability of VDI over physical desktops, the adoption of VDI became more mainstream and this trend will continue.
With VDI 2.0, organizations brought more types of users and many more applications into VDI. However, this created problems at the infrastructure layer: boot storms, desktop patching, fast deployment, and user experience all became critical success factors. The desktop footprint is bigger than in the VDI 1.0 generation, yet it is still not exceptionally large from an individual user's point of view.
From a storage perspective, thousands of IOPS became a de facto requirement for these desktops, and magnetic media could not cope with the new I/O demands. There were attempts to improve performance by building SANs out of hundreds of spinning disks, but such solutions are neither cost-effective nor efficient, because VDI generates a different type of I/O. All-flash arrays have been adopted in this space with considerable success, but cost is still a concern.
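To see why hundreds of spinning disks struggle here, a back-of-envelope sizing sketch helps. All the figures below (desktop count, per-desktop IOPS, boot-storm burst factor, per-drive IOPS) are illustrative assumptions for this sketch, not vendor specifications:

```python
# Back-of-envelope VDI storage sizing: spinning disks vs. flash.
# Every number here is an illustrative assumption, not a vendor spec.

DESKTOPS = 1000              # assumed deployment size
IOPS_PER_DESKTOP = 25        # assumed steady-state IOPS per virtual desktop
BOOT_STORM_MULTIPLIER = 10   # assumed burst factor during a boot storm

HDD_IOPS = 180               # rough random-I/O ceiling for a 15K RPM drive
SSD_IOPS = 40_000            # conservative figure for an enterprise SSD

def drives_needed(total_iops: int, iops_per_drive: int) -> int:
    """Drives required to serve total_iops (ceiling division, ignoring RAID overhead)."""
    return -(-total_iops // iops_per_drive)

steady = DESKTOPS * IOPS_PER_DESKTOP       # 25,000 IOPS in steady state
burst = steady * BOOT_STORM_MULTIPLIER     # 250,000 IOPS during a boot storm

print(f"Steady state: {drives_needed(steady, HDD_IOPS)} HDDs vs "
      f"{drives_needed(steady, SSD_IOPS)} SSD(s)")
print(f"Boot storm:   {drives_needed(burst, HDD_IOPS)} HDDs vs "
      f"{drives_needed(burst, SSD_IOPS)} SSDs")
```

Under these assumptions, steady state alone takes well over a hundred spinning disks, and a boot storm pushes that past a thousand, while a handful of SSDs covers both cases. The exact ratios vary, but the shape of the problem is why flash changed the economics.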
New architectures helped organizations adopt hyper-converged solutions (bringing storage and compute together) in which flash storage is a default element that addresses the storage performance needs. Some solutions use flash for caching, while in others the entire storage stack is designed around different types of flash chosen to fit the application's needs (e.g., VMware All-Flash Virtual SAN). Enterprises are now adopting either all-flash or hybrid flash deployments using this hyper-converged approach, and I see this trend continuing into VDI 3.0.
To summarize, VDI 2.0 expanded the scope of desktop types while delivering an acceptable end-user experience. Using innovative infrastructure approaches, enterprises have been able to keep the average cost per desktop lower than before. I have written several posts on different use cases for flash and VDI 2.0, which you can read on this blog. Here are a few recommended topics to explore: VDI workloads on VMware Virtual SAN, VDI Bootstorm testing using ULLtraDIMM, VDI Pool Creation using ULLtraDIMM, and VDI Desktop Recompose testing.
VDI 3.0

As VDI 2.0 continues to be deployed in enterprises and becomes more mainstream, evaluation and proof-of-concept (POC) work on virtualizing high-end workstations has also begun. VDI 2.0 and VDI 3.0 have a lot in common, and many VDI 3.0 capabilities are being backported to VDI 2.0, improving the user experience there as well. Flash plays a key role in shaping this development.
There are two elements I see as unique to VDI 3.0: all-flash storage deployment for desktop capacity and performance, and the addition of "graphics" alongside the existing compute, network, and storage dimensions. With the inclusion of graphics, storage becomes even more critical from a deployment perspective.
These two elements not only help VDI 2.0 users but also expand the opportunity to include high-end desktops such as engineering or design workstations. Two years ago, virtualizing such monster workstations was beyond imagination; today it is becoming reality, and it is already possible to do this to a great extent using flash. Without flash storage, such implementations would not be feasible.
Furthermore, storage data services like deduplication, thin provisioning, and compression help keep average desktop costs very competitive even in all-flash deployments.
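The effect of these data services on cost is easy to illustrate with a small calculation. The ratios and the dollar figure below are hypothetical assumptions for the sketch (largely identical desktop images tend to deduplicate well, but actual ratios vary by workload):

```python
# Illustrative effect of data services on usable all-flash capacity and cost.
# The ratios and price below are assumptions, not measured or quoted values.

raw_tb = 10.0               # raw all-flash capacity purchased
dedupe_ratio = 4.0          # assumed dedupe for largely identical desktop images
compression_ratio = 1.5     # assumed additional compression on top of dedupe
cost_per_raw_tb = 2000.0    # hypothetical $/TB for raw flash

# Effective (usable) capacity after data reduction:
effective_tb = raw_tb * dedupe_ratio * compression_ratio

# Effective cost per usable TB shrinks by the combined reduction factor:
cost_per_effective_tb = cost_per_raw_tb / (dedupe_ratio * compression_ratio)

print(f"Effective capacity: {effective_tb:.0f} TB")            # 60 TB
print(f"Effective cost:     ${cost_per_effective_tb:.0f}/TB")  # $333/TB
```

Under these assumed ratios, a 6x combined reduction makes the effective cost per usable terabyte of flash a sixth of the raw price, which is how all-flash deployments stay competitive with spinning disk on a per-desktop basis.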
VDI 3.0 promises acceptable performance and competitive costs for even the most challenging desktop use cases.
VDI is going to be deployed more and more to help companies gain cost and management efficiencies, and flash storage will be necessary for its success. At SanDisk®, we have many hardware and software solutions addressing VDI 2.0 and VDI 3.0 needs and we’re thrilled to be part of this wonderful ongoing journey.
Share Your Feedback
I have shared some of my thoughts and would like to hear your opinions as well! Let me know about your VDI experience, and your thoughts on its evolution, in the comments below. You can also reach me at email@example.com
Biswapati brings over a decade of experience in the IT industry and has been involved in the virtualization industry for more than 8 years.