Getting there – the road to virtualisation

Despite the annual technology prediction-fest that pollutes the airwaves, it’s fair to say that nobody really knows ‘where it’s all going’. And that’s good, because the world would be a bit boring otherwise. What we do have is a reasonable understanding of the general direction in which IT is heading, with plenty of room for exceeding expectations or falling horribly short of them.

However, when it comes to individual trends or initiatives, we can afford to be a little more prescriptive. Take virtualisation, for example. If you applied a generic ‘four stages of maturity’ model to it, it might look a bit like this from a practical point of view:


Chaotic: You have just got started and it’s all very exciting; everyone is provisioning servers left, right and centre without telling anyone, and not spinning them down when they’ve finished. All of a sudden you’ve lost half your bandwidth, storage and so on. A production system falls over because security policies didn’t extend to ‘the virtual bit’, and now you’re being shouted at.

Controlled: The initial thrill has worn off and sense is prevailing. Ideas around formalising the exploitation of virtualisation technology are surfacing, and fewer and fewer ‘gotchas’ are surprising you. Pilot projects are starting to prove their worth and you want to scale up to a ‘real’ live system. Questions are being asked about management capabilities.

Managed: Everything is working pretty well. The concept has successfully transferred from a pilot/test environment into production, and you’re thinking about what else you can bring into the virtual environment. The only downside is the amount of manual work still being done, and the constant screen switching that comes with legacy management tools. You can work the virtual environment to your advantage, but it takes effort, and things may start breaking if you are asked to extend it to new applications or services.

Dynamic: You’re in the hallowed place: automated provisioning, failover, backup, recovery and storage. It’s a fluid, optimised environment and you can add and remove capacity at will, thanks to equally dynamic relationships with various service providers that you know and trust. Sometimes you think it’s all a bit too good to be true.

But is this a realistic way of looking at things? As industry analysts we’re often asked to comment on these sorts of approaches, which, to be fair, are usually aimed at broader topics. They don’t work in a practical sense at a macro level, because it’s normal for an organisation to be at one end of the scale for certain things, in the middle for others, and perhaps class-leading for others still. There are too many variables for such a model to make sense as anything other than a thinly disguised product list, and many of them are just that.

However, at a micro, or single-issue, level there may be some value in a scale that transcends a list of one-off project milestones. You could see it as a one-way life cycle, where the values on the scale, or the thresholds between bands, are specific to a medium- or longer-term goal, a desired capability, or a service-level performance target.
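
To make that idea a little more concrete, here is a minimal, hypothetical sketch in Python of what scoring individual capabilities against the four stages might look like, with a target stage set per capability. The capability names, current stages and targets are purely illustrative assumptions, not a recommendation.

    # Hypothetical sketch: place each capability on the chaotic-to-dynamic
    # scale and compare it with a target chosen for that capability.
    STAGES = ["chaotic", "controlled", "managed", "dynamic"]

    # Illustrative (current stage, target stage) pairs per capability.
    capabilities = {
        "server provisioning": ("controlled", "dynamic"),
        "backup and recovery": ("managed", "managed"),
        "capacity management": ("chaotic", "managed"),
    }

    for name, (current, target) in capabilities.items():
        gap = STAGES.index(target) - STAGES.index(current)
        status = "on target" if gap <= 0 else f"{gap} stage(s) behind target"
        print(f"{name}: {current} -> {target} ({status})")

The point of the sketch is simply that different capabilities within the same organisation can legitimately sit at different points on the scale, which is why a single macro-level rating tells you so little.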

One of the issues with models such as these is, of course, that nobody ever actually reaches the nirvana of Stage 4. This is a harsh reality check: if, indeed, most organisations hover somewhere between ‘controlled’ and ‘managed’, is there really any point in considering what it takes to be ‘dynamic’? It may be that virtualisation holds some kind of magic key to help here, but we doubt this is about technology alone.
