Towards dynamic systems management methodologies

I do take my hat off to the people who first put together the IT Infrastructure Library (ITIL), which (like the end of the cold war) celebrates its 20th anniversary this year. It’s one thing to learn, both on the job and with little support, the ‘golden rules’ and best practice principles of any discipline. It’s quite another to have both the gumption and skill to document them in a way that makes them usable by others. And I should know – the time I spent working in the methodology group at a large corporate was a fair illustration of how tough this can be.

So, when the books that made up what we now refer to as ITIL were first released, they must really have hit the nail on the head. First adopted by public organisations in the UK, they have since become one of the de facto standards for large-scale systems management. Their authors can rightly feel proud, as indeed can the authors of other best practice frameworks that have, through force of adoption, been proven to hit the spot.

However, there could be a fly in the ointment, and its name is dynamic IT – the latest term being applied to more automated approaches for managing the resources offered by our datacentres. I know, I know, this is one of those things that people have been banging on about for years – indeed, for at least as long as ITIL has been around, if not longer. So, what’s different this time around?

There are a number of answers, the first of which is virtualisation. While it is early days for this technology area (particularly around storage, desktops and non-x86 server environments), it does look set to become rather pervasive. As much as anything, the ‘game changer’ is the principle of virtualisation – the general idea of an abstraction layer between physical and logical IT does indeed open the door to more flexibility in how IT is delivered, as many of our recent studies have illustrated.

The second answer has to be the delivery of software functionality using a hosted model (software-as-a-service, or SaaS for short). No, we don’t believe that everything is going to move into the cloud. However, it is clear that for certain workloads, an organisation can get up and running with hosted applications far faster than it could have done by building them from scratch.

I’m not going to make any predictions, but if we are to believe at least some of the rhetoric about where technology is going right now, together with some early adopter experiences, the suggestion is that such things as virtualisation and SaaS might indeed give us the basis for more flexible allocation, delivery and management of IT. We are told how overheads will be slashed, allocation times will be cut to a fraction, and the amount of wasted resource will tend to zero.

We all know that reality is often a long way from the hype. If it is even partly true, however, the result could be that the way we constitute and deliver IT services becomes much slicker. IT could therefore become more responsive to change – that is, deal with more requests within the time available. In these cash-strapped times, this has to be seen as something worth batting for.

But as the adage goes, the blessing might also be a curse, which brings us back to the best practice frameworks such as ITIL and what is seen as its main competitor, COBIT. In the ‘old world’, systems development and deployment used to take years (and in some cases, still do) – and it is against this background that such frameworks were devised.

My concern is how well they will cope should the rate of change increase beyond a certain point. Let’s be honest – few organisations today can claim to have mastered best practice and arrived at an optimal level of maturity when it comes to systems management. Time and again when we ask, we find that ‘knowing what’s out there’ remains a huge challenge, as do disciplines around configuration management, fault management and the like. But in general, things function well enough – IT delivery is not broken.

The issue, however, is that as the rate of change goes up, our ability to stick to the standards will go down. Change management – that is, everything that ITIL, COBIT and so on help us with – carries a process overhead with every change. As the time taken to make a change decreases, if that overhead stays the same it becomes proportionally more of a burden – or worse, the process becomes more likely to be skipped altogether – increasing the risks to service delivery.
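To put some rough numbers on that point, here is a minimal sketch in Python – the figures are entirely illustrative, invented for the sake of argument rather than drawn from any study – showing how a fixed per-change process overhead grows as a share of the total effort once implementation times shrink:

# Illustrative only: invented figures showing how a fixed change-management
# overhead weighs more heavily as implementation times shrink.

def overhead_share(implementation_days: float, process_days: float) -> float:
    """Process overhead as a fraction of the total effort for one change."""
    return process_days / (implementation_days + process_days)

# 'Old world': 20 days to implement a change, plus 2 days of process.
print(f"Old world:  {overhead_share(20, 2):.0%} of the effort is process")

# 'Dynamic IT': virtualisation cuts implementation to 1 day; process unchanged.
print(f"Dynamic IT: {overhead_share(1, 2):.0%} of the effort is process")

The same two days of governance that accounted for roughly a tenth of the old change now accounts for around two thirds of the new one – unless, of course, the process itself gets lighter too.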

To be fair, methodologies aren’t standing still either – indeed, ITIL V3 now builds on the principle of the service management lifecycle. But my concern about the level of overhead remains: ITIL, for example, is still a monolithic set of practices (and yes, I know, nobody should be trying to implement all of them at once). There is a part of the framework called ITIL Lite, designed for smaller organisations, but to be clear, the gap is for an ‘ITIL Dynamic’ aimed at companies of all sizes. In methodological terms, the difference would be similar to that between DSDM and its offspring and SSADM in the software development world – fundamentally, it is the difference between top-down centralisation and bottom-up enablement.

Perhaps the pundits will be proved wrong, and we’ve still got a good decade or so before we really start getting good at IT service delivery. But if not, the question I have is this: how exactly should we be re-thinking systems management to deal with the impending dynamism? We could always wait for the inevitable crises that would result should the dynamic IT evangelists be proved right this time around. But perhaps it’s time for the best practice experts to once again put quills to a clean sheet of paper, and document how IT resources should be managed in the face of shrinking service lifetimes. If you know of any efforts in this area, I’d love to hear about them.

