Monday, January 31, 2011

10 Technologies for 2011: 6. Virtualization



Virtualization is a large subject.

Imagine buying a computer that could run Linux, Mac OS, Windows, and Solaris 10 all at the same time. Or one where deploying a new server takes just a few clicks of a mouse. Or taking ten aging machines that are reaching end of life and moving their software onto a single powerful server, while still retaining the different operating systems and support libraries they run on.
Yes, these things are possible. Indeed, these capabilities are transforming corporate data centers all over the world.
Although virtualization is in the news a lot these days, the underlying concept of simulating physical machines in software so that different users and systems can share the same hardware is not new at all.
In fact, as early as 1972, IBM shipped a fully functional virtual machine facility as part of an update to the System/370 mainframe line. These virtual machines were indistinguishable from physical machines to the systems and users running within them. The physical CPUs of the System/370 could be multiplexed among multiple virtual machines, each running its own operating system and behaving as if it were in control of the underlying hardware.
True virtualization requires certain enhanced CPU instructions that were not available in x86 Intel-architecture machines until 2005. The lack of these hardware instructions on the dominant corporate server and desktop architecture did not prevent vendors such as VMware and Parallels from building partial solutions that virtualized the hardware in software and made virtualization practical and affordable. Since the enhanced x86 instructions became available, vendors have gradually adopted them to implement "full" virtualization with help from the hardware.
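As an aside, on Linux these hardware extensions show up as CPU feature flags: "vmx" for Intel VT-x and "svm" for AMD-V, listed in /proc/cpuinfo. A minimal sketch of the check, using a hypothetical flags string (on a real machine you would read the line from /proc/cpuinfo instead):

```python
# Hypothetical flags line, as it might appear in /proc/cpuinfo.
# On a real Linux machine you would read it with:
#   flags_line = next(l for l in open("/proc/cpuinfo") if l.startswith("flags"))
flags_line = "flags : fpu vme de pse tsc msr pae vmx ssse3 sse4_1"

flags = set(flags_line.split(":", 1)[1].split())

if "vmx" in flags:
    print("Intel VT-x available")
elif "svm" in flags:
    print("AMD-V available")
else:
    print("no hardware virtualization flag found")
```

If neither flag appears, a hypervisor has to fall back on software techniques such as binary translation, which is essentially what VMware and Parallels did before 2005.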
VMware, now a division of EMC Corporation, is one of the leading vendors in this space. Other vendors of note in the corporate arena include Microsoft, with its Virtual Server and Hyper-V products; Hyper-V was introduced as a key technology in Windows Server 2008. In the open-source domain, Xen has been at the forefront of the virtualization effort. Xen's primary mechanism requires modifying the guest operating system and so was initially limited to Linux and Linux-derived systems. This has changed with more recent releases, which use the hardware extensions to support unmodified guests such as Microsoft Windows.
Along with server-based virtualization, virtualization on ordinary PCs and Macs is becoming more and more common as hardware grows more powerful. Good examples are VMware Fusion and Parallels Desktop for the Mac. These products allow a Mac user to run a Windows desktop from within the normal Mac OS environment, even allowing data to flow between Mac OS and Windows programs, e.g. cut and paste between them. In the Microsoft world, MED-V delivers a virtual PC that can run an older OS such as Windows XP for legacy compatibility while the user runs Windows 7 as the primary OS.

These forms of server virtualization, where several guest operating systems run under the control of a "hypervisor" or host operating system, have become quite common in the data center and in people's homes. For server virtualization in the data center, attention has shifted to lowering the cost of operating the virtual servers (driven chiefly by the cost of software licenses) and to increasingly capable migration of guest OS instances. Some of the more advanced virtualization platforms, such as EMC (VMware) ESX-based systems and Microsoft Hyper-V, allow live migration of guest OS instances: as instances are moved between physical machines, they keep running and maintain their network connections. After a few key pilot implementations, it becomes very apparent that the TCO of virtual machines is determined chiefly by the ease with which they can be managed and by the density of virtual instances per physical machine. 2011 is the year more CIOs and CTOs will focus on reducing the TCO of their existing platforms and scaling their environments out to support all that they need. Any debate about the need for virtualization in the data center is now, for all intents and purposes, dead.
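The claim that density drives TCO can be made concrete with some back-of-the-envelope arithmetic. All of the figures below are hypothetical, but the shape of the calculation holds: the fixed per-host costs (hardware, licenses, operations) are amortized over however many guests the host can carry.

```python
# Back-of-the-envelope TCO per VM. All dollar figures are hypothetical,
# not actual vendor pricing.
def cost_per_vm(hw_cost, license_cost, ops_cost_per_year, years, vms_per_host):
    """Total cost of one physical host spread over the VMs it carries."""
    total = hw_cost + license_cost + ops_cost_per_year * years
    return total / vms_per_host

# The same host amortized over 5 guests versus 20 guests over 3 years.
low_density  = cost_per_vm(10_000, 5_000, 2_000, 3, 5)
high_density = cost_per_vm(10_000, 5_000, 2_000, 3, 20)
print(low_density, high_density)  # → 4200.0 1050.0
```

Quadrupling the density cuts the per-VM cost fourfold, which is why consolidation ratios and ease of management dominate the TCO conversation.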

Other forms of virtualization have emerged and are nibbling at the edges of the data center, namely application virtualization and desktop virtualization. In application virtualization, particularly as manifested in Microsoft's App-V platform (based on the acquisition of SoftGrid), applications are packaged on a server and streamed to clients, where a small-footprint runtime executes them in an isolated virtual environment rather than a conventional local install. Because relatively little is installed on the client side, the desktop machine does not have to be on the same scale as workstations that run applications entirely locally. Another example of such technology is XenDesktop from Citrix, which uses a lightweight "Citrix Receiver" client that in turn connects to a Citrix XenApp server. Since the Citrix Receiver is very lightweight, it can run on a variety of hardware, including Macs, PCs, iPhones, and iPads.

Finally, true desktop virtualization literally runs the desktop environment on big-iron servers in the data center and presents only the UI on client machines. With powerful multi-core servers acting as the engines, this form of virtualization promises to significantly reduce the desktop footprint in the enterprise. Instead of upgrading thousands of desktops every few years, with the large maintenance costs such efforts entail, virtual desktops can be upgraded directly in the data center while users simply see the upgraded functionality through the UI. Although this last approach is very exciting in its promise, these are early days, and problems such as difficulty playing multimedia and resolving printer driver issues remain obstacles to widespread adoption.

So, in conclusion: for server virtualization, 2011 will be a year of scaling out and reducing the TCO of existing platforms (including tough fights between EMC's VMware and Microsoft's potentially lower-cost but newer, less mature alternatives), while application virtualization is likely to be tested with some key deployments here and there. Desktop virtualization on a large scale is still a ways off. Early adopters will get their feet wet in 2011, while their more cautious colleagues will want to learn from these early experiences.





