Most of us are by now fairly familiar with the concept of virtualisation on the desktop, where a virtualisation application allows a secondary guest operating system (OS) to run within the host OS.
Software and hardware have improved dramatically, to the extent that Microsoft even includes a complete virtual installation of Windows XP with some higher-end editions of Windows 7 to ease compatibility issues. This has begun to move virtualisation into mainstream consciousness rather than leaving it confined to the tech-savvy community.
The ability to run virtual guest OSs can be very useful, allowing several different operating systems to run in parallel on the same hardware rather than requiring separate PCs. An engineer, for example, may have a main Windows PC but also a handful of custom applications that only run on Linux.
Instead of running two PCs, or dual booting, they can switch seamlessly between Windows and Linux as though these were just separate applications. The guest OSs are typically just files that can be loaded by the virtual machine application, and it is usually fairly trivial to move them to another PC, provided the same virtualisation application is installed there too.
This also illustrates a key point in how desktop virtualisation has been implemented so far – the virtualisation capability is provided by a hypervisor application that only becomes available once the main operating system, installed natively on the hardware, has booted. This is commonly termed a Type 2 – or hosted – hypervisor. Because the hypervisor is really just an application, the virtualisation capabilities it enables have not, so far, fundamentally changed the way that PCs are used and managed.
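To illustrate the "just an application" point, here is a minimal sketch that drives a hosted hypervisor from the host OS like any other program. It assumes Oracle VirtualBox and its VBoxManage command line tool are installed, and that a guest VM named "linux-guest" (a hypothetical name) has already been created.

```python
# A minimal sketch of driving a Type 2 (hosted) hypervisor as an ordinary
# application on the host OS. Assumes Oracle VirtualBox is installed and a
# guest VM named "linux-guest" (hypothetical) already exists.
import subprocess

def list_guests() -> str:
    """Ask the hosted hypervisor which guest images it knows about."""
    result = subprocess.run(["VBoxManage", "list", "vms"],
                            capture_output=True, text=True, check=True)
    return result.stdout

def start_guest(name: str) -> None:
    """Boot a guest OS in the background, much like launching any other app."""
    subprocess.run(["VBoxManage", "startvm", name, "--type", "headless"],
                   check=True)

if __name__ == "__main__":
    print(list_guests())
    start_guest("linux-guest")  # hypothetical VM name
```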
Recent technology developments are likely to alter things. Type 1 – or bare-metal – hypervisors do not require a host operating system to function. They work at the lowest level, before any OS has been booted, and have become the mainstay of server virtualisation. Both Citrix and VMware have developed these low-level hypervisors for use on client PCs, whether desktop or notebook, taking advantage of the hardware virtualisation support built into many modern machines.
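As a rough way of checking that "hardware support built into many modern machines", the Linux-only sketch below looks for the CPU virtualisation extensions that bare-metal client hypervisors rely on; Intel VT-x shows up as the "vmx" flag and AMD-V as the "svm" flag in /proc/cpuinfo. Other operating systems expose this information differently.

```python
# A Linux-only sketch: check whether the CPU advertises the hardware
# virtualisation extensions (Intel VT-x = "vmx", AMD-V = "svm") that
# bare-metal client hypervisors depend on.
from pathlib import Path

def has_hw_virtualisation() -> bool:
    """True if any CPU core lists the vmx or svm feature flag."""
    for line in Path("/proc/cpuinfo").read_text().splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags or "svm" in flags:
                return True
    return False

if __name__ == "__main__":
    print("Hardware virtualisation support:", has_hw_virtualisation())
```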
By running a sophisticated Type 1 hypervisor, whole new approaches to provisioning, updating, backup & restore, and support can be considered. With low-level virtualisation in place, the requirement for the main host OS to be installed natively on each machine goes away. Freeing the operating system from the hardware allows, with appropriate licensing and tools, a central virtual image to be loaded and run rather than installed from scratch. This can greatly reduce the time required to set up new PCs and get them into users' hands, and it can get users back up and running quickly should they suffer a hardware or application failure.
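As a rough sketch of what loading a central image rather than installing from scratch might look like, the example below pulls a golden image from a central store and verifies its integrity before handing it to the client hypervisor. The URL, checksum and paths are hypothetical placeholders, and any real deployment tool would handle this far more robustly.

```python
# A sketch of image-based provisioning: fetch a centrally maintained golden
# image and verify it before handing it to the client hypervisor, instead of
# installing the OS from scratch. URL, checksum and paths are hypothetical.
import hashlib
import urllib.request
from pathlib import Path

GOLDEN_IMAGE_URL = "https://images.example.internal/corp-desktop-golden.vhd"  # hypothetical
EXPECTED_SHA256 = "<checksum published alongside the image>"  # placeholder

def provision(dest_dir: Path) -> Path:
    """Download the golden image and confirm it arrived intact."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    image_path = dest_dir / "corp-desktop-golden.vhd"
    urllib.request.urlretrieve(GOLDEN_IMAGE_URL, str(image_path))

    # Hash the image in chunks so large files do not need to fit in memory.
    sha = hashlib.sha256()
    with image_path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            sha.update(chunk)
    if sha.hexdigest() != EXPECTED_SHA256:
        raise ValueError("Downloaded image failed its integrity check")
    return image_path

if __name__ == "__main__":
    print(provision(Path("/var/lib/client-hypervisor/images")))  # hypothetical path
```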
Backup and recovery can be simplified too. Tools can take snapshots of the virtual OSs and allow them to be recovered should corruption or loss occur. With centrally stored virtual machine snapshot libraries, techniques such as de-duplication can greatly reduce the storage footprint. Restoring the OS becomes just as simple, with the image readily available from a separate partition, a portable image disk or over the Internet.
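The toy example below illustrates why de-duplication shrinks a central snapshot library: images are split into blocks, each block is identified by its hash, and a block that already exists in the store is never written twice. Fixed-size blocks and an in-memory store are simplifications; real products use far more sophisticated chunking and persistent storage.

```python
# A toy illustration of de-duplicating a snapshot library: blocks shared
# between snapshots are stored once and referenced by hash thereafter.
import hashlib
from pathlib import Path

BLOCK_SIZE = 1024 * 1024  # 1 MiB blocks keep the sketch simple

def store_snapshot(image: Path, block_store: dict) -> list:
    """Add an image to the store and return the block-hash 'recipe'
    needed to reconstruct it later."""
    recipe = []
    with image.open("rb") as f:
        while block := f.read(BLOCK_SIZE):
            digest = hashlib.sha256(block).hexdigest()
            if digest not in block_store:   # genuinely new data
                block_store[digest] = block
            recipe.append(digest)           # duplicates cost only a reference
    return recipe

def restore_snapshot(recipe: list, block_store: dict, dest: Path) -> None:
    """Rebuild an image from its recipe, e.g. after corruption or loss."""
    with dest.open("wb") as f:
        for digest in recipe:
            f.write(block_store[digest])
```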
With these techniques, it even becomes possible to use bare-metal virtualisation to give people much greater flexibility in how they work, with or without their PC. With the OS running as a virtual machine, snapshots can easily be taken and whole images, or just the recent changes, uploaded to the network. Should the PC be left behind, users may be able to log onto a different PC and have the image of their OS loaded onto a server and run via a thin client application – giving them their own personal PC served over the Internet. Or, if their PC needs to be replaced for some reason, the stored image can simply be downloaded to the new machine, allowing the employee to carry on with their personal system after a very short period of downtime.
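Building on the same block-hash idea, uploading just the recent changes can be as simple as comparing today's block hashes against yesterday's and sending only what the central store has not yet seen. Again, this is a sketch of the principle rather than any particular vendor's implementation.

```python
# A sketch of "uploading just the recent changes": hash today's snapshot
# block by block and transfer only the blocks that are new since last time.
import hashlib
from pathlib import Path

BLOCK_SIZE = 1024 * 1024

def block_hashes(image: Path) -> list:
    """Hash an image block by block."""
    hashes = []
    with image.open("rb") as f:
        while block := f.read(BLOCK_SIZE):
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes

def blocks_to_upload(previous: list, current: list) -> set:
    """Only blocks absent from the previous snapshot need to travel."""
    return set(current) - set(previous)
```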
These features all offer much promise for reducing the pain that many feel with OS installation, backup and support. As always, though, there is no such thing as a free lunch. Moving to a hypervisor-based deployment model may require significant investment, even if it offers the opportunity to simplify things. Management tools as well as operational processes will likely require a thorough rethink, and licensing may prove to be a headache, as many applications and OSs are today licensed on the basis of running on a single dedicated PC.
As the change will involve a different way to install and manage the OS, tying it to an OS refresh may help the business case, particularly if the refresh also includes new hardware. With Windows 7 gaining traction, its rollout is an opportunity to investigate the potential for deploying on top of bare-metal hypervisors.
If a sizeable proportion of existing PCs cannot support the new hypervisors, hardware investment may need to be brought forward, a prospect that may be unpalatable for many IT managers or CFOs in these cash-strapped times.
Content Contributors: Andrew Buss