Monday, January 31, 2011

10 Technologies for 2011 : 6. Virtualization

Virtualization


Virtualization is a large subject.

Imagine buying a computer that could run Linux, Mac OS, Windows and Solaris 10 all at the same time. Or one where deploying a new server takes just a few clicks of a mouse. Or taking 10 different aging machines that are reaching their end of life, and moving the software on them to a single powerful server while still retaining the different flavors of operating systems and support libraries they run on.
Yes, these things are possible. Indeed, these capabilities are transforming corporate Data Centers all over the world.
Although Virtualization is in the news a lot these days, the concept of simulating physical machines through the use of software for the purpose of multiplexing among different types of users and systems is not new at all.
                In fact, as early as 1972, IBM had a fully functional virtual machine as part of an update to the IBM 370 series mainframes. These virtual machines appeared indistinguishable from physical machines for systems and users that ran within them. The physical CPUs of the IBM 370 could be multiplexed among multiple virtual machines, each one running its own operating system and functioning as if it was in control of the underlying hardware.
True Virtualization requires certain enhanced CPU instructions that were not available in x86 Intel architecture based machines until 2005. The lack of these hardware instructions on the dominant corporate server and desktop architecture did not prevent vendors such as VMware and Parallels from building partial solutions that virtualized hardware and made it practical and affordable. Since the enhanced x86 instructions became available, vendors have gradually adopted them to implement "full" virtualization with help from the hardware.
VMware, now a division of EMC Corporation, is one of the leading vendors in this space. Other vendors of note in the corporate arena include Microsoft, with its Virtual Server and Hyper-V products; Hyper-V was introduced as a key technology in Microsoft Windows Server 2008. In the open source domain, Xen has been at the forefront of the virtualization effort. Xen's primary virtualization mechanism requires modification of the operating systems it hosts and was hence limited to Linux and Linux-derived systems. This, however, has changed with more recent releases that support Microsoft Windows as well.
Along with server-based virtualization technologies, virtualization on ordinary PCs and Macs is becoming more and more common as hardware becomes more powerful. Good examples are VMware Fusion and Parallels Desktop for the Mac. These products allow the Apple Mac user to run a Windows desktop from within the normal Mac OS environment, even allowing data communication between Mac OS and Windows programs, e.g. cut and paste between them. In the Microsoft world, MED-V delivers a Virtual PC that can run an older OS such as Windows XP for legacy compatibility while the user runs Windows 7 as the primary OS.

These forms of server virtualization, where several guest operating systems run under the control of a "hypervisor" or host operating system, have become quite common in the data center and in people's homes. For server-based virtualization in the Data Center, attention has shifted to lowering the cost of operating the virtual servers (driven chiefly by the cost of software licenses) and to increasingly capable guest OS instance migration. Some of the more advanced virtualization platforms available, such as VMware ESX based systems and Microsoft Hyper-V, allow for live migration of guest OS instances. This means that as these instances are moved between different physical machines, they keep running and maintain their network connections. After a few key pilot implementations, it becomes very apparent that the TCO of virtual machines is determined chiefly by the ease with which they can be managed and the density of virtual instances per physical machine. 2011 is the year more CIOs and CTOs will focus on reducing the TCO of their existing platforms and scaling their environments out to support all that they need. Any debate about the need for virtualization in the Data Center is now, for all intents and purposes, dead.
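
As a back-of-the-envelope illustration of how density drives TCO, here is a minimal Python sketch. All the figures (hardware, license and management costs) are hypothetical placeholders of my own, not vendor pricing:

```python
def cost_per_vm(hw_cost, license_cost, mgmt_cost_per_vm, vms_per_host):
    """Rough annual cost of one virtual machine on a physical host.

    hw_cost          -- annualized hardware cost of the host
    license_cost     -- annual hypervisor/management license for the host
    mgmt_cost_per_vm -- annual admin effort per VM (lowered by good tooling)
    vms_per_host     -- density: how many VM instances the host carries
    """
    fixed = hw_cost + license_cost
    return fixed / vms_per_host + mgmt_cost_per_vm

# Hypothetical numbers: doubling density roughly halves the fixed share.
low_density = cost_per_vm(8000, 4000, 500, 10)   # -> 1700.0 per VM
high_density = cost_per_vm(8000, 4000, 500, 20)  # -> 1100.0 per VM
```

The fixed costs of the host amortize across instances, which is why density and manageability, not raw hardware price, end up dominating the TCO conversation.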

Other forms of virtualization have emerged and are nibbling at the edges of the Data Center, namely Application Virtualization and Desktop Virtualization. In Application Virtualization, particularly as manifested in Microsoft's App-V platform (based on the acquisition of SoftGrid), applications run on the server with a small-footprint "interpreter" on the client. Because relatively little work is done on the client side, the desktop machine does not have to be at the same scale as a workstation that runs the applications entirely locally. Another example of such technology is XenDesktop from Citrix, which uses similar techniques to run a lightweight "Citrix Receiver" that in turn connects to a Citrix XenApp server. Since the Citrix Receiver is very lightweight, it can run on a variety of hardware including Macs, PCs, iPhones and iPads.

Finally, true Desktop Virtualization literally runs the desktop environment on big iron servers in the data center and only presents the UI on client machines. With powerful multi-core servers acting as the engines, this form of virtualization promises to significantly reduce the desktop footprint in the enterprise. Instead of upgrading thousands of desktops every few years, with the large maintenance costs associated with such efforts, these virtual desktops could be upgraded directly in the Data Center while users simply see the upgraded functionality through the UI. Although this last approach is very exciting in its promise, these are early days, and problems such as difficulty playing multimedia and unresolved printer driver issues remain obstacles to widespread adoption.

So, in conclusion: for Server Virtualization, 2011 will be a year of scaling out and reducing TCO on existing virtualization platforms (including tough fights between EMC's VMware and Microsoft's potentially lower cost but newer, less mature alternatives), while Application Virtualization is likely to be tested out with some key deployments here and there. Desktop Virtualization on a large scale is still a ways off. Early adopters will get their feet wet in 2011, while their more cautious colleagues will want to learn from these early experiences.

Friday, January 28, 2011

10 Technologies for 2011 : 5. Unified Communications

Unified Communications


On your way to work you first check your corporate email, then your cell phone's voice messages, then perhaps your personal email. When you get to your office you proceed to check your corporate voice messages, then scan any incoming fax messages, before signing into your instant messaging service. That's six different sources of information for you and six different ways people may try to contact you. Every now and then you forget about one of these channels, only to discover to your horror that a critical message, private or corporate, from your boss or an important business contact got lost between requests from social networks and your spouse trying to reach you in the middle of a busy day.

It does not have to be this way! Email, fax, voice and offline instant messages can all be delivered into one universal inbox and retrieved from anywhere with Unified Communications. This rapidly developing area of technology seeks to transcode all of these kinds of messages into one digital form. So you can get your corporate voicemail in your email, transcoded into a format like MP3 or WAV that can be played back on most smartphones.
An incoming fax can be converted to a PDF file and delivered to your email client as an attachment. Extending this further out, you could be using a video client on your desktop, cell phone or conference room, or using a corporate instant messaging platform to create an on-the-fly audio conference.
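
To make the "voicemail in your email" idea concrete, here is a small Python sketch that packages a transcoded voicemail as a WAV attachment on an ordinary email message, using only the standard library. The addresses, caller ID and audio bytes are invented for illustration; a real UC platform would do this inside the messaging server:

```python
from email.message import EmailMessage

def voicemail_to_email(sender, recipient, wav_bytes, caller_id):
    """Wrap a transcoded voicemail (WAV bytes) in an email message."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = f"Voicemail from {caller_id}"
    msg.set_content(f"You have a new voicemail from {caller_id}.")
    # Attach the audio so any mail client (or smartphone) can play it back.
    msg.add_attachment(wav_bytes, maintype="audio", subtype="wav",
                       filename="voicemail.wav")
    return msg

# Hypothetical usage: the bytes would come from the PBX's transcoder.
msg = voicemail_to_email("ucs@example.com", "me@example.com",
                         b"RIFF....WAVE", "555-0100")
```

The same pattern applies to fax-to-PDF delivery: transcode once, then ride the existing email transport into the universal inbox.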

Besides the advantage of having one unified inbox, Unified Communications also creates powerful presence capabilities. Since the servers that house your universal inbox have so many different channels feeding into them, they know which channels you are active on at any given time, and can notify any sender of the best channel for reaching you. Even better, you can specify which channels make the most sense for you at any time. Perhaps when you drive you do not like to get instant message notifications. Perhaps you would rather people send you email when you are in a meeting than call you.
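
The presence idea above boils down to a per-user preference table consulted at delivery time. A minimal sketch, with contexts and channel names that are entirely invented for illustration:

```python
# Hypothetical per-user routing rules: context -> preferred channels, in order.
PREFERENCES = {
    "driving":    ["voice"],               # no IM pop-ups behind the wheel
    "in_meeting": ["email", "im"],         # quiet channels only
    "at_desk":    ["im", "voice", "email"],
}

def best_channel(context, channels_active):
    """Pick the best way to reach a user given their presence state."""
    for channel in PREFERENCES.get(context, ["email"]):
        if channel in channels_active:
            return channel
    return "email"  # the universal inbox is always the fallback

print(best_channel("in_meeting", {"im", "voice"}))  # -> im
```

A real UC platform derives `channels_active` from the live feeds into the universal inbox; the sketch just shows how presence plus preferences yields a routing decision.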

Once again Cisco and Microsoft are frenemies in the corporate Unified Communications space. They have a good interoperability story (because they have to: each company has a strong installed base). However, each is approaching the space from a different direction: Cisco from a hardware vantage point (its core and edge routers are everywhere), steadily adding software capabilities; Microsoft from the ubiquity of MS Office, SharePoint, Outlook and Office Communicator (now Lync).

Making Unified Communications happen is not a one-quarter activity. It may very well be a multi-year roadmap with clearly articulated goals. Perhaps the first two quarters are dedicated to getting corporate voice mail and email into a single inbox. Future quarters could be dedicated to extending presence to instant messaging on all platforms in use, and so on.

Sunday, January 23, 2011

10 Technologies for 2011: 4. Private Clouds

Private Clouds
As the hype over Cloud Computing settles down and the Cloud becomes just one tool in the toolset available to a CIO or a CTO, specific flavors of the Cloud are gaining favor. While originally the term Cloud Computing meant using CPU cycles in the cloud (an externally owned, off-premise network, usually depicted by a cloud in network diagrams) to accomplish compute-intensive tasks, the term was co-opted by marketing departments and broadened to include everything from Software as a Service to plain old application hosting.


As IT management internalized what Cloud Computing meant for their companies, they began to question which of their IT departments' capabilities were truly core competencies and which were not. Most began to realize that, for most small to medium sized companies, maintaining their own Data Center was not one of them.

The economics of Data Centers, with their heavy initial investment and expensive care and feeding, greatly benefit from scale. The larger the Data Center operation, as indicated by internal or external customer demand, the easier it is to amortize the capital and operating expenses needed to run it. You may need 3 people to run even the smallest of Data Centers, due to availability needs and specialized skills; yet 5 people may be able to run one that is 10 times as big, thanks to automation that becomes viable at the larger scale. So large Cloud Computing providers such as Amazon, Microsoft, Google and increasingly Rackspace can get really good at running Data Centers reliably, securely (more on that later) and in a way that allows customers to get environments stood up in a matter of hours, sometimes even minutes.
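
The staffing arithmetic above can be made explicit. The headcounts are the illustrative figures from the paragraph; the rest is simple division:

```python
def staff_per_unit(staff, capacity_units):
    """Operations headcount needed per unit of data center capacity."""
    return staff / capacity_units

small = staff_per_unit(3, 1)    # smallest viable DC: 3 people per unit
large = staff_per_unit(5, 10)   # 10x the capacity with only 5 people
print(small / large)            # -> 6.0: scale cuts per-unit staffing 6x
```

That six-fold difference in per-unit operating cost is the structural advantage a large cloud provider has over an in-house data center at small or medium scale.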

Very large corporations, one can argue, have the scale necessary to run Data Centers themselves. But I think CIOs and CTOs at small and medium companies will increasingly come to the realization that a specialized firm can do this better and potentially cheaper.

Most of the attention within IT has been focused on applications that are exposed to the outside world: web applications, content applications and the like. But as IT management embraces the idea that, provided security and reliability needs can be met, external providers are just as good as if not better than internal ones, they realize that corporate applications such as Finance, HR and LOB (line-of-business) systems could potentially run on third-party Private Clouds. These are sometimes referred to as "VPN (Virtual Private Network)" clouds: networks hosted by Cloud providers that, through specialized networking hardware and software, become part of a company's internal network. Some of the traffic is routed over the provider's network but appears to be local to the company.

Today for a startup that is trying to make the most of the dollars it has, using a combination of public and private Cloud infrastructure makes sense. 

Tuesday, January 18, 2011

10 Technologies for 2011: 3. Application Integration Appliances

3. Application Integration Appliances


There was a time when integrating Enterprise Applications was an esoteric art, requiring a team whose skills spanned business analysis, deep technical expertise in the specific formats required by EDI (Electronic Data Interchange), deep programming skills in a no-nonsense language like C, C++ or Java, a guru or two in XML and a lot of luck. Oh, and money. Or you could buy extremely expensive integration platforms that spawned a whole ecosystem of administrators, functional experts, third-party consultants and huge licensing fees.

Gradually over the past few years the realization has been growing that integrating the most common Enterprise Applications in standard ways does not require a whole lot of magic. If only a simple platform could be created that reused the large and mature XML and Web Services body of knowledge, allowing the most common integration patterns to be quickly realized without the need for complex programming. If that were true, we could reserve the ordeal of truly complex integrations for those few instances where the returns justify the effort. For run-of-the-mill integrations, we could just reuse patterns and configure instead of writing code. Or so goes the vision.

The good news, then, is that the time for such Integration Appliances has arrived: systems from Cast Iron, Meddius and Boomi are some examples of the integration-as-a-service approach. Cast Iron and Boomi offer varying blends of on-premise and in-the-cloud integration (with Cast Iron currently strongest on premise and Boomi strongest in the Cloud). These systems come with pre-packaged "connectors" that can talk out of the box to vanilla implementations of various ERPs, CRMs, financial packages, DBMSs, file systems and so on. What you do not get out of the box, you can build by reusing or extending existing templates.
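
The "connector" idea is essentially the classic adapter pattern: each connector normalizes one system's records into a common shape, so an integration becomes configuration rather than code. A minimal sketch, where the system names and field mappings are all hypothetical:

```python
# Hypothetical field mappings: how each source system names common fields.
MAPPINGS = {
    "crm":     {"cust_name": "name", "cust_email": "email"},
    "finance": {"account_holder": "name", "contact": "email"},
}

def connect(system, record):
    """Normalize one record from a named system into the common format."""
    mapping = MAPPINGS[system]
    return {common: record[source] for source, common in mapping.items()}

# Two very different systems, one canonical shape out the other end.
a = connect("crm", {"cust_name": "Ada", "cust_email": "ada@example.com"})
b = connect("finance", {"account_holder": "Ada", "contact": "ada@example.com"})
assert a == b
```

Commercial appliances layer transport, scheduling and error handling on top, but the core value is exactly this: shipping the `MAPPINGS` table pre-built for vanilla ERP and CRM implementations.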

Make no mistake: there is no substitute for the expertise, big iron and money required to implement complex integration that has to be highly scalable, near real time and mission critical for large corporations. I do not think these integration appliances are replacements for that segment of EAI. However, a surprisingly large portion of the EAI landscape is silently getting commoditized without any fanfare (well, unless you count IBM's acquisition of Cast Iron and Dell's acquisition of Boomi).

2011 is the year I think that such appliances will break into the IT mainstream toolset.

Sunday, January 16, 2011

10 Technologies for 2011: 2. Mobile Device Management Software

2. Mobile Device Management Software
As the trickle of new mobile devices allowed to access Enterprise data sources has swelled into a flood in the past few years, the management of these devices has become a major challenge for those responsible for provisioning, auditing, usage tracking and security. It used to be that the software for managing these devices fell neatly into two camps: BlackBerry Enterprise Server (BES) from RIM and ActiveSync from Microsoft. While RIM took care of its own devices, the rest of the world was typically managed through ActiveSync. Microsoft provided a set of policies that could be enforced to varying degrees on different mobile devices.

As the amount of data that can be stored on these mobile devices goes up, keeping that data secure requires enforcing specific policies: that all stored data is encrypted, that a password is required on the device, that the device can be remotely wiped in the event of loss, and so on.

Apple licensed ActiveSync technologies from Microsoft, and this allowed iPhone users to access resources such as Microsoft Exchange without too much trouble. In fact, iPhones self-enrolled in ActiveSync in more user-friendly ways than Microsoft's own Windows Mobile devices. The advent of Android has changed all of that. With a veritable explosion of devices on multiple carriers driven by Google's Android, tracking the specific capabilities of individual devices became an impossible task.

To start with, there are many flavors of Android, from 1.6 to 2.3, already available, with many more in the wings. Tracking and enforcing policies across these flavors is no small task. For example, a particular information resource in the Enterprise may require that any mobile device authorized to access it support remote wipe. ActiveSync may dictate that a device that does not support this policy cannot connect. However, this depends on the device telling the truth.
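
That enforcement model, gating the connection on what the device claims to support, can be sketched in a few lines. The policy names below are illustrative placeholders, not the actual ActiveSync policy identifiers:

```python
# Hypothetical policy set an Enterprise information resource might require.
REQUIRED = {"remote_wipe", "storage_encryption", "device_password"}

def may_connect(reported_capabilities):
    """Admit a device only if it *claims* to support every required policy.

    This is the weakness described above: the check trusts the device's
    self-report, so a client that lies about its capabilities passes.
    """
    return REQUIRED.issubset(reported_capabilities)

honest = may_connect({"remote_wipe", "device_password"})   # rejected: no encryption
lying = may_connect({"remote_wipe", "storage_encryption",
                     "device_password"})                    # admitted, true or not
```

The server never verifies the claim, which is exactly the gap that bypass software and permissive OEM builds exploit.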

It is unclear whether the various stock Android flavors and OEM variations actually report the true capabilities of the devices they run on. To make matters more complicated, Android as an operating system is famously open and allows users to change settings more than other platforms do. Finally, there are known ways to bypass these restrictions by means of software that ensures the expected responses are sent back to ActiveSync regardless of the true posture of the platform. As a result, ActiveSync in its current form has a hard time managing these devices.

Into this brave new world of rapidly shifting capabilities, policies and devices, third-party mobile device management platforms such as Good Technology have stepped in, with specific defenses against the subterfuge of software bypasses and promises to keep pace with the relentless progression of devices in the Enterprise. We can expect Microsoft to also step up its game with newer versions of ActiveSync and potentially a stable of third-party plugins.

10 Technologies for 2011 : 1. Mobile Devices that blur the line between Smartphones and Computers

As 2011 gets underway in earnest and the CES show is behind us, it's time to look at 10 technologies that are going to dominate 2011 for CIOs and CTOs.

1. Mobile Devices that blur the line between Smartphones and Computers
The launch of the iPhone in 2007 changed the world of enterprise computing forever. Although Microsoft and Palm had competed for mind and market share for more than a decade with various versions of PocketPC and Palm OS based devices, they never found an audience beyond the heavy power users who treated them as highly specialized PDAs. The iPhone changed all that. Launched in the full glare of the media, propelled by polished presentations at Macworld and Apple's WWDC by none other than Steve Jobs, the smartphone suddenly became "cool" and a status symbol at work.

The launch of the App Store, with support for third-party apps, made the phone genuinely useful for casual users. The email and calendar sync functions the phone brought with it drove users to question the need for two devices: one for personal use and another for corporate use (typically a BlackBerry). More often than not, the BlackBerry and its physical keypad were given up in favor of the iPhone's capacitive touchscreen, the personal communication device and the business calendaring and email device merging into one.

Now, more than three years later, we have seen an accelerating trend: more and more powerful smartphones that are in effect little computers in themselves. The iPad has taken up a niche between a phone and a full-fledged laptop, and it's thriving in that spot. The Motorola ATRIX introduced at CES carries this trend even further: a single Android based device that can be your phone when you travel and, when you reach your office, can be docked and used to drive a full video display with a full keyboard.

This trend of crossover devices exhibiting qualities of both smartphones and computers will only accelerate with the arrival of the iPad 2, the RIM PlayBook and the various HP webOS based devices to be announced in Q1 of this year.