Tony Lock, originally published on The Register
It’s treating the root cause of rigidity that really matters
For most of the past 20 years, the provisioning and daily operations of IT systems have been mainly concerned with specifying the physical components – servers, storage and networking – required to deliver the expected service levels.
Because many workloads have little hard data upon which to base usage calculations, most application provisioning has been based on guesstimates of expected peak workloads and possible growth down the line.
This may not be good enough in the future. Today’s economics and rapidly changing business requirements mean that more demands are being placed on IT for enhancements to services or for completely new ones.
The old ways of working are clearly becoming less effective. Our research indicates that most IT organisations feel they are not very well equipped to deal with the level of change requests they receive.
Attention is therefore switching to more dynamic IT solutions. A recent poll suggests that while dynamic workload management is still in its early days, adoption is beginning to ramp up (Figure 1).
But a move away from fixed infrastructure to a more fluid system that caters for varying workloads means that several challenges must be addressed.
One of the most important is that the configuration of the system must be capable of meeting the requirements of any service level agreements (SLAs).
Getting these SLAs in place is a challenge in its own right, but sticking to them is a whole other ball game. Only a minority of organisations are happy with their ability to monitor the quality of the IT services they deliver.
Research over many years has indicated that those with better service level monitoring capabilities have users who are happier with the services that IT provides, yet measuring service levels has not enjoyed much attention.
Quite often the only service level reported was a simple indication of an application’s availability. But users now expect every service to be working all the time.
Information is increasingly being sought on application performance in terms that business users can relate to, such as response times, transactions processed per hour, numbers of customers served and so on.
In addition, as IT systems become more dynamic in their use of resources, IT departments will need to adopt end-to-end approaches to monitoring service quality. The aim is to allocate sufficient IT resources to meet users’ needs, but no more than that, lest utilisation rates fall and service costs rise unnecessarily.
Allocating IT resources requires more than just measuring service quality, though. IT must be able to make changes to systems and resource components quickly, reliably and repeatedly, often across a wide portfolio of devices and applications.
This is difficult if you simply rely on people to implement policies and change management processes. A new approach is needed, with management automation assisting wherever possible in responding and reconfiguring.
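One common shape for such automation is a reconciliation loop: a policy states the desired utilisation band, and the same rule runs every cycle to bring capacity back into line. The sketch below is a minimal, assumed example of this pattern – the policy values, node counts and function names are invented for illustration, not drawn from any specific management tool.

```python
# Illustrative policy: keep pool utilisation near 70%, within a +/-10% band,
# and never scale below 2 or above 16 nodes. All values are assumptions.
POLICY = {"target_utilisation": 0.70, "tolerance": 0.10,
          "min_nodes": 2, "max_nodes": 16}

def reconcile(current_nodes, measured_utilisation, policy=POLICY):
    """Return the node count the policy calls for, given measured utilisation.

    Because the same rule is applied every cycle, changes happen quickly,
    reliably and repeatably, rather than through ad-hoc human intervention.
    """
    target, tol = policy["target_utilisation"], policy["tolerance"]
    desired = current_nodes
    if measured_utilisation > target + tol:
        desired = current_nodes + 1   # under-provisioned: add capacity
    elif measured_utilisation < target - tol:
        desired = current_nodes - 1   # over-provisioned: reclaim capacity
    # Clamp to the limits the policy permits
    return max(policy["min_nodes"], min(policy["max_nodes"], desired))
```

For example, if a monitoring agent reports 85 per cent utilisation on a four-node pool, the rule recommends five nodes; at 50 per cent it recommends three. The human effort shifts from making each change to writing and reviewing the policy.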
The pressure is on IT departments to integrate the tools used to administer the various parts of the infrastructure, yet few have started to do this. More than 80 per cent of organisations that responded to our poll have separate tools for managing servers, storage and networking (Figure 2).
Few companies view this lack of integrated management as a root cause of issues with managing infrastructure change. Instead the problem is perceived to be that there are too many priorities to juggle, or that staff are overstretched.
Break the boundaries
These are classic symptoms of an underlying lack of investment in joined-up management. The move towards dynamic IT requires IT professionals to rethink their use of management tools, focusing on integrated solutions that span the IT infrastructure and use automation as far as possible.
At the same time, it is also important to remove management boundaries and silos to allow requested changes to be implemented more rapidly.