We know from numerous research studies that, while there are indeed ‘bad guys out there’, in most instances the causes of downtime are far more mundane – application failure, connectivity problems and the infamous ‘blue screen of death’.
We can’t blame Microsoft for everything (as I discovered a couple of days ago when I disconnected a USB device from a Mac and experienced for myself the ‘grey mist of doom’ that may be better known to OS X users).
But does any of it really matter? Vendors talk about ‘productivity improvements’ that can be linked to the latest iterations of their products, and indeed I’m sure I’m not alone in experiencing the (transient) joy of running an application on a better, faster machine. But it is very hard to tease apart the relationship between the equipment and services that IT delivers and the goals that business users actually achieve.
We know from last week that many issues can be put down (at least in part) to a lack of knowledge or experience on the part of the user, but it was also clear from the mini-poll that technology has its part to play. So how exactly should we measure IT service delivery, and is it possible to link it to business productivity? Is it really enough to lean on ain’t-broke-don’t-fix-it measures such as uptime, resolution times or support calls dealt with, or can we be more canny about such things?
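To make the question concrete, here is a minimal sketch (in Python, using invented incident figures purely for illustration, not data from any survey or the mini-poll) of how those traditional measures are typically derived.

```python
# Illustrative only: how the 'classic' service metrics are usually calculated.
# The incident records below are made up for the example.
incidents = [
    {"downtime_minutes": 42, "resolution_minutes": 90},
    {"downtime_minutes": 5, "resolution_minutes": 30},
    {"downtime_minutes": 120, "resolution_minutes": 240},
]

period_minutes = 30 * 24 * 60  # a 30-day reporting period

total_downtime = sum(i["downtime_minutes"] for i in incidents)
availability = 100 * (period_minutes - total_downtime) / period_minutes
mean_time_to_resolve = sum(i["resolution_minutes"] for i in incidents) / len(incidents)

print(f"Availability: {availability:.2f}%")                     # 99.61% for these figures
print(f"Mean time to resolve: {mean_time_to_resolve:.0f} min")  # 120 minutes
print(f"Support calls dealt with: {len(incidents)}")
```

The point is that such figures are easy to produce, which is precisely why they dominate service reporting; whether a 99.6 per cent number tells us anything about whether users actually got their jobs done is another matter.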
One thing’s for sure – a purely technical view is never going to be enough. Consider email, for example: few if any organisations today would accept anything but ‘always-on’ service from their email systems. But equally, most people we speak to are suffering from the deluge of communications that the electronic world has enabled. It is difficult to link IT and productivity without questioning whether IT (working well) is at times causing us to be less productive.
Perhaps technical measurements are irrelevant anyway. I have written elsewhere (viewsfromthebridge.wordpress.com) about the bathtub curve, which suggests that it’s far more important to keep a well-maintained, up-to-date pool of kit than to try to calculate how well (or badly) things are going. And equally, from the business perspective, good management, delivery focus and all that will be far more important than the endeavours of a few well-meaning individuals in a poorly run organisation. Productivity may indeed be far more complex than we have the wherewithal to measure.
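For readers unfamiliar with the bathtub curve, the sketch below is a common textbook-style approximation (not anything from the post referenced above): the failure rate of kit over its life is modelled as a declining ‘infant mortality’ term plus a roughly constant random-failure term plus a rising wear-out term. The Weibull parameters are assumptions chosen only to show the shape.

```python
# Illustrative bathtub curve: hazard (failure) rate versus equipment age in years.
# Parameters are invented for the sketch; real kit will differ.
def weibull_hazard(t, shape, scale):
    # Weibull hazard function: (k/s) * (t/s)^(k-1)
    return (shape / scale) * (t / scale) ** (shape - 1)

def bathtub_rate(t):
    infant_mortality = weibull_hazard(t, shape=0.5, scale=2.0)  # falls over time
    random_failures = 0.05                                      # roughly constant
    wear_out = weibull_hazard(t, shape=5.0, scale=6.0)          # rises steeply late on
    return infant_mortality + random_failures + wear_out

for years in (0.25, 1, 3, 5, 7):
    print(f"age {years:>4} yrs: failure rate approx {bathtub_rate(years):.2f} per year")
```

Run it and the rate dips in the middle years before climbing again, which is the curve’s practical message: refresh kit before it reaches the wear-out phase rather than agonising over after-the-fact statistics.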
As it is unlikely that we shall ever arrive at a situation where things ‘just work’, or even gain a clear picture of how exactly things stand across the organisation, perhaps we should be looking for a middle ground of good-enough-ness where most things are compatible, most kit is up to date and most users are generally happy. Or perhaps that’s a cop-out – and we should instead be striving for a clear view across all our desktop assets, enabling rapid turnaround when it comes to support calls.