Jon Collins, originally published on The Register
IT may be complex, but from the perspective of the business, it is just a lot of technical gubbins that sits between the screen and the data.
Users access applications and systems with no real clue about what goes on behind the scenes, nor any desire to understand more than how to change a toner cartridge. We can rail at their foolishness (users do stupid things, to be sure), but it’s hard to argue with the expectation that IT should “just work” from a business perspective.
From the standpoint of IT management, however, things are not that simple. It doesn’t take much to cause problems, given the fragile web of interdependencies between software and hardware. An innocuous-looking operating system patch can render entire application stacks useless, and what users perceive to be small changes (“I know, why don’t we all use our video cameras?”) can have a significant impact on the infrastructure.
Complexity is part and parcel of the challenge for IT managers. Having researched this area extensively over the years, we have concluded that the law of diminishing returns applies at a fundamental level. That is, the more you try to manage, the harder it becomes to make incremental improvements.
From an application management standpoint, IT success is often based on delivering on the following, all of which have a tangible business impact:
Availability – “I log on and access the applications I need, end of story”
Performance – “The application doesn’t run like a snail when I’m doing my day job”
Scalability – “We all log on at the same time without having to go and make a cup of tea as the system boots up”
IT complexity raises a number of issues, not least the lack of visibility on how criteria such as these are impacted by changes to the infrastructure.
In the “let’s use video” example, an increased load on the network will deprive applications of bandwidth and increase their latency. Users know none of this – they simply wonder why it’s taking longer than usual to log in to a system, or to open a file. Few organisations have tools to monitor response times from a user perspective, or to diagnose the causes of problems when (or indeed before) they arise, so it is understandable that helpdesk calls about application performance are some of the most common we see.
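The gap is easy to illustrate: measuring what the user actually experiences means timing a transaction end to end, not watching server counters. A minimal sketch in Python of that idea – the operation, threshold and function names here are illustrative assumptions, not any vendor’s API:

```python
import time

def check_response_time(operation, threshold_seconds):
    """Time a user-facing operation end to end.

    Returns (elapsed_seconds, within_sla) so the caller can log the
    measurement and raise an alert when the threshold is breached.
    """
    start = time.perf_counter()
    operation()
    elapsed = time.perf_counter() - start
    return elapsed, elapsed <= threshold_seconds

# Hypothetical example: a "login" that takes ~50 ms, checked against
# a 2-second service-level target.
def simulated_login():
    time.sleep(0.05)

elapsed, ok = check_response_time(simulated_login, threshold_seconds=2.0)
print(f"login took {elapsed:.3f}s, within target: {ok}")
```

In practice a synthetic transaction like this would run on a schedule from wherever the users sit, so that a slow login shows up in monitoring before it shows up as a helpdesk call.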
Where tools are used, they can embody a “wishful thinking” approach to IT management – for example, offering seemingly useful features like auto-discovery while ignoring the fact that the picture you end up with can be far broader than you can manage with limited resources. Even if you do end up with a good picture of what the IT environment looks like, chances are it will be out of date within a few months.
Don’t get me wrong: as someone who used to wish he could afford them, I know there’s a lot to like about IT management tools. All the same, the tools can hit a glass ceiling of their own – not only do most organisations use just a subset of the functionality, but the overheads of using the tools can start to outweigh the benefits of automation. Many organisations have “legacy” management tools, acquired over the years to resolve specific issues, but such tools can lead to a fragmented view of the IT environment, making IT management more people-intensive – which undermines the point of using the tools in the first place.
Given the joint pressures of increasingly flexible working and the adoption of software-as-a-service for specific applications, IT looks set to continue down the path of complexity. The goalposts for IT management will continue to move. Precisely because this is the case, it is ever more important to focus on the basics of application delivery from the perspective of the business user.
Tools can help – it would be difficult to do without them in scenarios such as patch management, for example – but whatever changes occur in the future, the law of diminishing returns will continue to apply.