Jon Collins, originally published on Computing

Roughly speaking, availability management is what your business does to ensure IT services stay up when expected and agreed – and continuity management is what you do if something goes wrong. If it goes horribly wrong, you have disaster recovery.

These terms share one underlying assumption: that things go wrong. Anyone who expects IT to just work is either new to the trade or in the wrong job. What’s harder to grapple with is deciding what to do about it.

The IT department’s remit is to deliver services at an agreed level of availability, among other requirements. This is usually expressed as a percentage. So 99 per cent availability might sound attractive, but it allows for 3.65 days of downtime a year, and attention soon turns to whether those days could fall in the middle of a reporting cycle.
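The arithmetic behind that figure is worth spelling out, because the difference between two and three nines is easy to underestimate. A minimal sketch (the function name and the simplifying assumption of a 365-day year are mine, not a standard):

```python
def downtime_per_year(availability_pct: float) -> float:
    """Annual downtime, in days, permitted by a given availability percentage.

    Assumes a 365-day year and ignores leap years and planned maintenance
    windows, which a real SLA would spell out separately.
    """
    return 365 * (1 - availability_pct / 100)


if __name__ == "__main__":
    for pct in (99.0, 99.9, 99.99):
        days = downtime_per_year(pct)
        print(f"{pct}% availability allows {days:.2f} days "
              f"({days * 24:.1f} hours) of downtime a year")
```

Running it shows why the nines matter: 99 per cent permits 3.65 days a year, while 99.9 per cent permits under nine hours.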

I know how easy it is to get this wrong. When I became an IT manager, I ran backups on a Monday morning, rendering most of IT inaccessible for a couple of hours. In hindsight, it was mad.

Thinking from the perspective of how systems have traditionally been built, availability need not be hard. In the silo model, where an application runs on a database, which in turn depends on an operating system running on a server, it is straightforward to ask yourself what availability characteristics the business is looking for, and design the system accordingly.

In practice, upfront costs can scupper best intentions when it comes to availability. A couple of years ago, a Freeform Dynamics report highlighted how “much of the exposure leading to high failure rates comes about because system availability is only considered towards the end of the project life cycle” so best practice isn’t necessarily common practice. The question is, what happens to availability if we start to take into account the real trends we see in IT?

Virtualisation brings its own availability tools. Because it is easier to manipulate a virtual machine than a real one, it is more straightforward to keep it available.

But virtualisation could be its own worst enemy when it comes to availability. Virtualisation makes it easy to create VMs – and judging by the lack of forethought that can go into system design, it’s not hard to imagine places where it looks like a sorcerer’s apprentice has taken over.

A second trend is less about the services, and more about how and where they are delivered. The move towards more distributed workforces, with an increasing reliance on smartphones and other such devices, means we might need to reset our expectations of what we mean by availability. The concept of office hours is becoming increasingly blurred, as is the concept of office equipment and even office applications.

What about hosted services? Believe the hype and you’d think availability would become a thing of the past as an issue, with existing kit being replaced with internet-based offerings.

But we are playing with fire here. When was the last time you checked the Ts and Cs on your hosted email, for example? The chances are that liability clauses will not be stacked in your favour, particularly if you depend on “free” (advertising-funded) services for blogging, email and document creation and sharing.

As one IT professional recently said to us, “People don’t want computers, they want what computers do.” Just how clear are you on the services IT should be providing to your organisation, wherever they are sourced? If you find this question difficult to answer, spend time working out what these services are, and exactly what is seen as acceptable when it comes to availability levels. With availability, hope is not a strategy.
