A fundamental principle of virtualization is that it creates a layer of abstraction between a virtual machine and the physical hardware. As we have already discussed, this allows multiple virtual machines to run on a single physical machine, and can also enable a virtual machine to be moved quite straightforwardly from one physical machine to another.
Within the data centre this has a number of benefits, such as workload balancing (server too busy? ‘Simply’ move one or two VMs onto a less populated server), higher availability (virtual machines can be moved off a physical server so it can be replaced, upgraded or fixed) and so on.
But hang on – if it’s that straightforward to move virtual machines, what’s keeping them from moving outside the data centre altogether? One obvious scenario is to move a machine onto hardware run by another company.
Third parties such as CSC, IBM, EDS and Rackspace have run server environments for use by their clients for many years, using a number of names such as ‘hosting’, ‘service provision’ and so on. These companies have been joined more recently by companies such as Amazon, which prefer to label themselves ‘cloud providers’.
Indeed, the older hands at this game have found the lure of the cloud irresistible, and have been launching repackaged cloud services of their own. The current marketing bucket for all such services is ‘Infrastructure as a Service’. Without getting too much into the nuts and bolts of it all, the open question from a virtualization perspective is, if a machine is virtualized and therefore movable, what are the benefits and costs of running it in the cloud?
Any challenges are likely to be around managing the associated risks. There is something reassuring about keeping IT in-house, within the firewall, where it at least appears better protected and more under control. Taking a workload and handing it to any old Tom, Dick or Harry to manage can be fraught with danger, particularly if the data being processed is sensitive.
With this in mind, it’s still possible to imagine several likely scenarios, which boil down to the following factors:
- How practical it is to move a given workload in the first place, for example in terms of network bandwidth (see the rough calculation after this list)
- How much management and control is required – is the workload something that can ‘just run’?
- As mentioned, the sensitivity of the data and application involved
- Legal and compliance issues around geographic location of data
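To make the bandwidth point concrete, here is a back-of-the-envelope sketch. The image size and link speed below are illustrative assumptions, not measurements:

```python
# Rough check: how long would it take to copy a VM image to a cloud
# provider over a typical office uplink? Figures are assumptions.

image_size_gb = 500   # assumed size of the VM disk image
uplink_mbps = 100     # assumed usable upstream bandwidth

image_size_bits = image_size_gb * 8 * 10**9
transfer_seconds = image_size_bits / (uplink_mbps * 10**6)

print(f"Transfer time: {transfer_seconds / 3600:.1f} hours")
# ~11.1 hours -- before any compression, throttling or retries
```

At around half a day per transfer, ‘simply’ moving workloads in and out of the cloud starts to look rather less simple.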
So, number-crunching of non-confidential information (for example in analytics or research) might be a quick win, whereas that business-critical system on which pricing information is changed on a daily basis might need a little more thought before it is shifted off to a data centre goodness-knows-where.
An upside of the cloud is that it creates some possibilities that just didn’t exist before. Smaller companies, for example, told us how they are now able to create a disaster recovery ‘site’ which replicates their core systems as virtual machines, whereas before (with physical servers) the costs would have been prohibitive.
In companies of all sizes, as with virtualization itself, we are seeing earlier adoption of IaaS in development and test environments. The ability to create one or more sand-box replicas of a live environment, which can be built and deleted as necessary, is highly compelling. Similarly, scientists who need to run a set of compute-intensive algorithms can now do so, rather than just wishing the possibility existed.
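As an illustration of how lightweight ‘build and delete as necessary’ can be, here is a minimal sketch using Amazon’s EC2 service via the boto3 Python SDK. The AMI ID, instance type and region are placeholder assumptions, not a recommendation:

```python
import boto3

# Connect to EC2 in an assumed region
ec2 = boto3.client("ec2", region_name="eu-west-1")

# Spin up a throwaway test instance from a hypothetical machine
# image that replicates part of the live environment
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.medium",         # placeholder instance type
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Sandbox instance started: {instance_id}")

# ...run the tests, then tear the sandbox down again
ec2.terminate_instances(InstanceIds=[instance_id])
```

The point is less the specific API than the pattern: an entire test environment becomes something you script into and out of existence, rather than a rack of hardware you have to justify buying.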
These are still early days, and we are a long way from handing over our IT environments (virtualized or otherwise) to IaaS providers. Or are we? Common sense suggests that we are a long way off wholesale adoption of such an underdeveloped technology or concept as cloud. However, historical examples such as outsourcing teach us that organizations can sometimes throw common sense out of the window as they try to save a quick buck in the short term. Yes, we have seen it before.
IaaS is not wrong in principle – and indeed, there are plenty of examples of where it may well be able to save organizations a lot of cash while bringing flexibility and higher levels of service into the mix. For example, in the future, organizations struggling with managing their desktop estates (and who may well be looking at desktop virtualization) might indeed be better off handing their desktop management ills to a third party.
But there is still plenty to do before anything other than discrete, low-sensitivity workloads can be run in the cloud, not least in terms of architecture, security/legal and costing models. We’ll cover these off in the next article, as well as considering some of the due diligence aspects that can be taken into account during selection and procurement.