“Move along, nothing to see here,” said a self-confessed “old timer on the virtualisation front,” in response to the question of whether server virtualisation was ready for prime time.
Indeed, it is difficult to read anything about virtualisation without getting the impression that it is inevitable. But while some might already be some way down the virtual track, feedback garnered in the course of a virtualisation lab on The Register has made it pretty clear that others are still on the starting blocks. “We’ve taken a few steps,” says another respondent. “We just migrated one of our production servers from a physical environment to a virtual environment. It’s running on ‘dedicated’ hardware, but it’s still a step in this direction.”
While the level of adoption may vary, the verdict would appear to be that virtualisation technology is ready to do the job it was designed for. “Hell yeah it’s ready,” says Nate Amsden. “I still think it requires some intelligence on the part of the people deploying it – you can get really poor results if you do the wrong things (which are by no means obvious). But the same is true for pretty much any complex piece of software.”
So, where to start? Virtualisation comes with a suck-it-and-see mode, in that there is little to stop anybody running up a virtual machine and seeing what gives. From this point, as we have seen in previous research, the logical first step is server consolidation, which has a reasonably straightforward business case.
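To make the suck-it-and-see point concrete, here is a minimal sketch of the sort of throwaway experiment we mean, assuming a Linux host running KVM with the libvirt Python bindings installed; the guest name, disk image path and sizes below are purely hypothetical placeholders, and VMware, Hyper-V and the rest offer their own equivalents.

```python
import libvirt

# Hypothetical throwaway guest: the name, image path and sizes are placeholders.
DOMAIN_XML = """
<domain type='kvm'>
  <name>scratch-vm</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/scratch.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <graphics type='vnc'/>
  </devices>
</domain>
"""

# Connect to the local hypervisor and start a transient guest from the XML above;
# a transient guest simply disappears again once it is shut down.
conn = libvirt.open("qemu:///system")
dom = conn.createXML(DOMAIN_XML, 0)
print(f"started {dom.name()} (id {dom.ID()})")
conn.close()
```

Nothing here commits you to anything: shut the guest down and the hardware goes back to whatever it was doing before.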
Beyond consolidation, the question is how to scale things up, or indeed out, so that virtualisation becomes more the norm than the exception. Let’s be clear: for the moment, virtualisation is not going to be the answer to absolutely every workload – not according to your experiences, anyway. In the case of databases, for example, feedback suggests that a top-end database workload needs all the resources it can get, in which case virtualisation would be wasted on it. “If you don’t care about performance then you can use VMs, but what would be the point?” said one reader.
But on the plus side, the flexibility offered by virtualisation is seen as a major advantage, databases or no databases. Consider:
If I have a fully populated blade centre (7-14 servers depending on who we’re talking about), all in a single VM pool, and I then fill 33-50% of capacity with virtual machines dedicated to database server work, I can further fill out the other 50-66% of the VM pool with application/web/file server VMs, probably squeezing upward of 200% capacity out of the given rack space…
The role of management
Meanwhile, once you’ve made the transition, you’re going to have to manage what you’ve deployed. Virtualisation does appear (from your responses) to make things easier to move around, plan, keep available and so on. But it doesn’t take prisoners – you’re still going to need the technical smarts and suitable processes to deal with the virtualised environment. While Trevor Pott might have confessed to insufficient caffeine when he talked about Tetris, he did make a valid point about thinking architecturally:
It really all boils down to VM Tetris. As much as virtualisation enables ease of administration and management, you still have to understand the workload of all your VMs. You have to understand the capabilities of your hardware. You pack your VMs in with other VMs in such a way that they won’t impinge on one another, and you can do remarkable things. Some days you get a box, or an L, some days you get a squiggly jaggy thing you have no idea where to put.
By the way, if you think that metaphor’s a stretch, it’s worth sharing Jimmy Pop’s Fishmonger analogy:
Each fishmonger is a virtual machine, with a hypervisor controlling access to resources. If we allocate them efficiently, we can all get to the pub sooner as the work is done with fewer bottlenecks! And we can save money, instead of hiring 50 fishmongers when there is really only work for 10… (even if each fishmonger works on a specific type of fish and can’t handle the others – in this case, each would be a virtual fishmonger running inside 10 real fishmongers operating 24/7 with no sleep…)
Meanwhile, back in reality, one point that came up time and again was: remember that whatever you’re doing in the virtual world will ultimately depend on the physical world. This point manifested itself particularly in terms of getting the RAM levels right in advance, and minimising back-end bottlenecks – that is, between the virtualised servers and whatever resources they need to access.
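As a quick illustration of the RAM point, the following sketch compares the memory promised to running guests with what the host physically has. It assumes a KVM host with the libvirt Python bindings; the connection URI is an assumption, and other hypervisors expose similar counters through their own APIs.

```python
import libvirt

# Compare the RAM promised to running guests with what the host actually has.
conn = libvirt.openReadOnly("qemu:///system")   # local KVM host; URI is an assumption

host_mib = conn.getInfo()[1]                    # getInfo()[1] reports host memory in MiB
guest_mib = 0
for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
    # dom.info() -> [state, maxMem (KiB), current memory (KiB), vCPUs, cpuTime]
    guest_mib += dom.info()[2] // 1024

print(f"host RAM           : {host_mib} MiB")
print(f"allocated to guests: {guest_mib} MiB")
if guest_mib > host_mib:
    print("warning: guests are promised more RAM than the host physically has")

conn.close()
```

Overcommitting memory is not automatically a sin, but it should be a decision rather than a surprise.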
Management does appear to be an area of cost that hasn’t yet been fully bottomed out. Tools can be expensive and, as we have seen in previous research, management overheads can be greater than expected, particularly if you want to take advantage of all that dynamic goodness brought by virtualisation.
Load balancing VMs across your server estate is something that occupies a lot more time than I would have thought when I started using virtualisation. The more I work with it, the more I realise just how much easier this would all be if we could only afford all that stupendously expensive management software. We’ve managed to overcome this with some very strict procedures and a *lot* of scripts, but someone just starting out would not necessarily see a reduction in maintenance overhead. I think that if you have the real management tools to accompany virtualisation it can be a phenomenal time saver.
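For anyone starting down the scripting route the reader describes, even a crude survey like the sketch below, which polls a handful of hypervisor hosts and reports what has been allocated on each, takes a lot of the guesswork out of manual rebalancing. It assumes KVM hosts reachable over SSH with the libvirt Python bindings installed; the hostnames are placeholders.

```python
import libvirt

# Placeholder hostnames: substitute your own hypervisor hosts.
HOSTS = ["kvm-host1.example.com", "kvm-host2.example.com", "kvm-host3.example.com"]

for host in HOSTS:
    # Read-only connection over SSH to each host's libvirt daemon.
    conn = libvirt.openReadOnly(f"qemu+ssh://{host}/system")
    doms = conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE)
    vcpus = sum(d.info()[3] for d in doms)            # vCPUs handed out on this host
    mem_mib = sum(d.info()[2] for d in doms) // 1024  # RAM handed out, in MiB
    print(f"{host}: {len(doms)} running VMs, {vcpus} vCPUs, {mem_mib} MiB allocated")
    conn.close()
```

A fuller version would go on to live-migrate guests off the busiest host, but even a read-only report like this is a start.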
Unfortunately, not every IT shop is going to have the scripting skills of some of the readers (including Nate ‘20,000 lines of code’ Amsden). And as another reader said in answer to our article on management, money for tooling is not that easy to come by:
It requires that mythical substance known as ‘money’. This is something you cannot pry out of the hands of the copper-counters with a plasma rifle. No matter the business case, they seem unable to pay for any form of software based on the concept, “it saves [individual] time.”
To be fair, the virtualisation wave has happened so fast that it has taken some of the traditional management tools vendors time to catch up – particularly around managing both virtual and physical estates from the same console.
Says Nate:
At a couple of different events I’ve been to I’ve come across companies specializing in VM management, but it’s rare that they can extend that expertise to physical systems as well; when I ask them, their faces just go blank.
Management is about risks as well
From a production perspective, the last thing we should mention in this context is risk management – into which we can include security, disaster recovery, data protection and so on. Most respondents seem to think that virtualisation brings quite a lot to the party in terms of new options – for example, snapshots can be taken of entire machines for backup or DR purposes, and if a physical server fails, it is relatively straightforward to restart the virtual server somewhere else. “Personally, I trust my VM backups more than my tape ones,” said one enthusiastic correspondent – your own mileage may differ, particularly if you’ve suffered from VM proliferation and your backups haven’t kept up!
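To show how little ceremony is involved, here is a minimal sketch of taking a whole-machine snapshot, again assuming KVM and the libvirt Python bindings; the guest name ‘web01’ and the snapshot name are hypothetical, and other platforms offer equivalent calls.

```python
import libvirt

# Minimal snapshot description: the name and description are our own choices.
SNAPSHOT_XML = """
<domainsnapshot>
  <name>pre-backup</name>
  <description>Point-in-time snapshot taken before the nightly backup run</description>
</domainsnapshot>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("web01")                 # hypothetical guest name
snap = dom.snapshotCreateXML(SNAPSHOT_XML, 0)    # 0 = default snapshot behaviour
print(f"created snapshot '{snap.getName()}' of {dom.name()}")
conn.close()
```

Reverting is just as terse (dom.revertToSnapshot(snap)), which is precisely why respondents rate snapshots so highly for backup and DR rehearsals – and precisely why the proliferation caveat above matters.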
On a more sober note, however, perhaps we should close with the thought that virtualisation really is a two-edged sword – used incorrectly, it could cause as many problems as it solves. “Don’t keep all your eggs in one basket” was the advice from more than one reader. The trick appears to be balancing the inevitable basket-ness of consolidation-by-virtualisation with hard-earned common sense about the risks of relying too much on too few physical systems:
If you go Borg, then one virus in the main matrix blows it all. If you then say well I will start putting up Checkpoint Charlies, then you move away from consolidation and into autonomous systems anyhow.
It will always be tough to get this balance right: indeed, as we move forward with virtualisation we shall undoubtedly run into the ruts created by our own, very human traits of poor planning, inadequate investment and the vain hope that bad things only happen to other people. But be in no doubt that virtualisation is ready for prime time as a technology, even if its management ecosystem (and indeed skills base) is still evolving.