Jon Collins, originally published on Infosecurity
Given that IT security is all about understanding and mitigating risk, it inevitably has to keep up when new technologies come to the fore. In general, anything new follows the law of unintended consequences: the cry of “But you’re not supposed to use it like that!” is familiar to anyone in IT operations. Meanwhile, of course, the “bad guys” will be all over anything new like a rash, looking for ways to exploit its weaknesses or the holes it may expose.
Server virtualisation is one such ‘new technology’, which many organisations are still piloting before they embark on larger-scale consolidation projects. From a cost-cutting perspective it holds great promise – but, as is frequently the case, the security guys are not always getting a look-in. This could be a mistake, given some genuine risks that should not be left untackled.
Some of the risks simply concern the potential for flaws in the technology itself. For example, while the hypervisor may provide an additional, securable layer, the downside is that if the hypervisor (or indeed the management tool overseeing a number of hypervisors) is compromised through a brute-force attack, the virtual environment could be left wide open.
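Part of why brute force succeeds against management consoles is that unthrottled logins let an attacker guess indefinitely. The sketch below illustrates the basic countermeasure – an account lockout after repeated failures; the threshold, names and in-memory counter are illustrative assumptions, not any hypervisor vendor’s actual API or a recommendation.

```python
# Illustrative only: lockout after repeated failed logins, the usual
# mitigation against online brute-force guessing. Threshold and the
# in-memory counter are assumptions for the sketch, not real product API.
from collections import defaultdict

MAX_ATTEMPTS = 5
failed_attempts = defaultdict(int)  # user -> consecutive failures

def check_login(user: str, password: str, real_password: str) -> str:
    """Return "ok", "denied", or "locked" once the threshold is reached."""
    if failed_attempts[user] >= MAX_ATTEMPTS:
        return "locked"            # further guesses are simply refused
    if password == real_password:
        failed_attempts[user] = 0  # reset the counter on success
        return "ok"
    failed_attempts[user] += 1
    return "denied"
```

Without the lockout branch, every guess gets an honest answer, and the attacker’s only cost is time.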
Other risks are more operational. With virtualisation, workloads no longer have to be tied to specific physical machines, which brings an enormous operational benefit. However, that constraint has traditionally worked to security’s advantage, as workloads fixed to a known machine are easier to protect physically. It is not hard to imagine a scenario in which a virtual machine is relocated to a server that lacks the right level of physical protection – in principle, it could be moved off site completely without anybody noticing!
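One way to keep that mobility in check is a policy gate before any migration: the target host must meet the workload’s minimum physical-security level. The tier names and attributes below are hypothetical, sketched for illustration rather than drawn from any particular virtualisation platform.

```python
# Hypothetical pre-migration check: refuse to move a VM to a host whose
# physical security tier is lower than the workload requires. Tier names
# and numeric rankings are assumptions made up for this sketch.
SECURITY_TIERS = {"datacentre": 3, "branch-office": 2, "unknown": 0}

def migration_allowed(vm_required_tier: str, target_host_tier: str) -> bool:
    """Allow migration only if the target meets the VM's minimum tier."""
    required = SECURITY_TIERS.get(vm_required_tier, 0)
    offered = SECURITY_TIERS.get(target_host_tier, 0)
    return offered >= required
```

Under this policy, a payroll server tagged for the datacentre cannot silently drift to a branch office – or off site entirely – just because capacity happens to be available there.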
Virtualisation is not all bad – indeed, it can bring a number of security advantages over physical systems, not least that, as far as data is concerned, it adds an additional layer which can itself be secured. This is no automatic protection against a sustained attack, but it does reduce the chances of data leaks or ‘accidental’ prying, as long as the portability questions are taken into account.
Virtualisation also brings an additional degree of resilience. Virtual environments can be configured with fail-safe mechanisms, so that if a virtual machine goes down it can be restarted elsewhere (or indeed, two machines can run in parallel with replicated state). In addition, specific applications can be run in their own virtual machines, so that if one is compromised or goes down, it is less likely to bring others down with it. This is particularly useful for applications reputed to be more at risk.
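The fail-safe idea reduces to a simple loop: watch each machine, and if one stops responding, bring its workload up on a standby host. The class and method names below are stand-ins invented for this sketch – real management layers expose their own (and very different) interfaces.

```python
# Minimal sketch of a restart-elsewhere fail-safe. VirtualMachine and
# its is_alive() probe are hypothetical stand-ins, not a real API.
class VirtualMachine:
    def __init__(self, name: str, host: str, alive: bool = True):
        self.name, self.host, self.alive = name, host, alive

    def is_alive(self) -> bool:
        """Stand-in health probe; a real system would ping the guest."""
        return self.alive

def failover(vm: VirtualMachine, standby_host: str) -> bool:
    """Restart the VM on the standby host if it has gone down."""
    if not vm.is_alive():
        vm.host = standby_host  # relocate the workload
        vm.alive = True         # bring it back up
        return True             # failover was performed
    return False                # healthy; nothing to do
```

The parallel-running variant replaces the restart with continuous state replication, trading extra hardware for near-zero recovery time.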
These are early days for server virtualisation, so we should be careful not to assume it is a done deal from a security perspective. Skills and experience are still lacking, particularly around best practices such as change management, which in principle needs to operate at the speed of virtualisation rather than at that of the slowest-responding person in the process. Meanwhile, there is a paucity of virtualisation management tools that take security into account.
Given all of these factors, while organisations reap the cost-saving benefits that initial virtualisation exercises can bring, it becomes more important than ever to move forward with eyes wide open. Now is perhaps the time to let the security team do its work: an appropriate level of due diligence in the short term is key to ensuring that new risks are not introduced which could come back to bite in the future.