Security’s important, right? Well, so it may be – but when it comes to virtualisation, it’s hard to avoid the impression that it isn’t being treated as seriously as it should be. I don’t know about you, but when I read about the take-up of virtualisation, the feeling of foreboding is not unlike watching a five-year-old play with Daddy’s collection of samurai swords – nothing awful has happened yet, but one can’t help thinking it’s a matter of when, not if.
At the very least there is some consensus about the risks, from a technological perspective. We have three points of potential weakness – the hypervisor or virtualisation engine, the management console, and the VM itself. Hypervisors have come under some scrutiny as far as security is concerned: while no vendor claims they are unbreakable, they do appear to be tested to much the same level as one would expect of an operating system. And of course, the usual OS, middleware and application vulnerabilities still need to be addressed in VMs just as they do on physical systems.
Meanwhile, the eggs-in-one-basket factor has been mentioned by many respondents. If you have ten VMs running on a single hypervisor and that hypervisor is compromised, then (it could be argued) your problems are an order of magnitude greater than with a single OS on a single server. A similar issue arises with the management console: if it is subjected to a brute-force attack – that is, trying every password combination in the hacker’s dictionary – then every VM under its management could be rendered accessible.
But maybe the technological issues are being overstated. Vulnerabilities may emerge in either the hypervisor or the console, but the identify-inform-patch model applies to both, as it does for most operating systems in use today. Put another way, if you can’t trust the hypervisor to the same degree that you trust your OS, then perhaps it is your wider IT strategy that needs rethinking. But then of course, we have the VMs themselves.
On the surface, there should be no difference in security threat profile between virtual machines and physical machines. There is a fundamental difference, however – and it is more a process issue than a technology issue. This is where the problems really start.
Turning to process…
The fact is that VMs are fantastically simple to create. With virtualisation, the days when ‘getting’ a machine required a three-month procurement cycle (for the hardware, anyway) are theoretically gone. That slow, onerous approach did have its advantages: risks were more likely to be considered, and configuration and the like were more likely to be done by skilled staff in a controlled manner.
In the virtualised environment, however, it becomes more than tempting to create virtual machines without fully thinking things through. You know how it is – the boss is under pressure, the users are saying they want whatever it is by the end of the week, and it is oh-so-easy to bang the button and create that now-will-you-get-out-of-my-hair VM.
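To give a sense of just how low the barrier has become, here is a minimal sketch – assuming the libvirt Python bindings and a local QEMU/KVM host; the VM name and sizing are purely illustrative – of how few lines it takes to put a brand-new machine on the books:

```python
import libvirt

# Connect to the local hypervisor (assumes QEMU/KVM with the libvirt daemon running).
conn = libvirt.open("qemu:///system")

# A deliberately minimal, illustrative domain definition - no disk, no network,
# just enough to show how little thought is needed to register a new VM.
domain_xml = """
<domain type='qemu'>
  <name>get-out-of-my-hair-vm</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
</domain>
"""

# defineXML registers the VM with the hypervisor; dom.create() would start it.
dom = conn.defineXML(domain_xml)
print(f"Defined VM '{dom.name()}' - created in seconds, forgotten just as quickly?")

conn.close()
```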
It could be worse, perhaps – handing control to developers or researchers with a “there you go, fill your boots” attitude could be a recipe for security disaster. And VMs are as straightforward to move around as they are to create: VMware has had VMotion for a while, and Microsoft has just announced a similar capability in its latest Windows Server release.
Meanwhile, these and other vendors (such as Citrix and Sun Microsystems) are looking forward to the point at which VMs can be migrated, live or dormant, into the cloud. Herein potentially lies the biggest risk with virtualisation (that hedging may sound like a cop-out, but it isn’t yet provable, just a little scary): that a whole, albeit virtualised, computer, whatever its workload, can be run, well, anywhere.
Perhaps the real problem is that nobody knows what the real problems are. Virtualisation is thrusting itself into the limelight as a mainstream technology, but few organisations have all the skills they need to deliver virtualisation at a production scale. And this lack of knowledge will cause problems of its own, in terms of management best practice. As one respondent pointed out, for example:
“The points about ‘virtual server sprawl’ highlight that there are risks, not least of which is the potential for forgotten test applications lurking, unpatched and unattended, in the virtual environment.”
As security professionals know, many (if not most) breaches come down to exactly this kind of situation – technology left to fester beyond the point where it is still protected. Nor is it just about security: thinking about risk more broadly (as Dparker does) means we should also take into account availability, accidental data loss and inadvertent licence abuse once the IT environment becomes too complicated to manage.
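If nothing else, keeping an up-to-date inventory is a start. The sketch below – again assuming the libvirt Python bindings; the connection URI and the “stale” criterion are illustrative assumptions, not a prescription – simply lists every defined VM and flags those that exist but aren’t running, prime candidates for the forgotten, unpatched test machines described above:

```python
import libvirt

# Connect read-only to the local hypervisor (assumes QEMU/KVM).
conn = libvirt.openReadOnly("qemu:///system")

# listAllDomains() returns both running and merely defined (shut-off) domains.
for dom in conn.listAllDomains():
    if dom.isActive():
        print(f"{dom.name():30} running")
    else:
        print(f"{dom.name():30} defined but not running  <-- possible forgotten VM, check patch status")

conn.close()
```

A real estate audit would of course go further – cross-referencing against a CMDB, patch records and ownership – but even a crude list like this surfaces the sprawl problem quickly.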
So, what to do? For once Tony Blair could be right with his ‘education, education, education’ mantra. The challenge is that while the principles are sound, the practice of managing what promises to be a more dynamic IT environment than in the past remains immature and is understood only by a minority.
If we could offer any advice at all, it would quite simply be: get a clear (preferably documented) picture of what you are trying to achieve, and get yourselves trained up while there is still time. Otherwise, and at the risk of mixing metaphors even more, you could indeed be sleepwalking into a rat’s nest.