An elderly lady attended a public lecture given by an astrophysicist on how the Earth goes around the Sun and how the Sun circles about with countless other stars in the Milky Way. During the question and answer session, the woman stood up and told the distinguished scientist that his lecture was nonsense, that the Earth is a flat disk supported on the back of an enormous tortoise.
The scientist tried to outwit the lady by asking, “Well, my dear, what supports the tortoise?” To which she replied, “You’re a very clever young man, but not clever enough. It’s turtles all the way down!”
A Brief History of Time, Stephen W. Hawking
For decades, modularity has been a familiar best practice in programming. When Ed Yourdon and Larry Constantine wrote their seminal work on structured design in 1975, they were building on themes already explored in languages such as Algol 68. At the core of structured design was the idea that software modules should maximise cohesion within themselves, whilst minimising coupling (that is, dependencies) between modules.
This principle was built upon by luminaries such as Grady Booch, who put his own spin on what made a good software ‘object’ back in the early days of object-orientation. The idea was to choose artefacts that were least likely to change. In business these tend to be the entities around which the organisation is constructed – policies and claims in insurance, for example, or patients and prescriptions in healthcare. Brad Cox, another early thinker and the deviser of the Objective-C programming language, suggested that software should work in a similar way to hardware. By talking about ‘software ICs’, he abstracted away the details of the internal workings, placing the emphasis instead on the interactions enabled by the interfaces that communicate with the external world.
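Cox’s ‘software IC’ idea – hide the internal workings, expose only a published interface – can be sketched in a few lines of code. This is a purely illustrative example; the names (PayrollService, run_payroll, InHousePayroll) are made up for the sketch, not drawn from any real system:

```python
from abc import ABC, abstractmethod

class PayrollService(ABC):
    """The 'pins' of the software IC: callers see only this interface."""
    @abstractmethod
    def run_payroll(self, employee_ids: list[str]) -> dict[str, float]:
        ...

class InHousePayroll(PayrollService):
    """One interchangeable implementation; its internals stay hidden."""
    def run_payroll(self, employee_ids: list[str]) -> dict[str, float]:
        # The internal workings could change entirely, or be swapped for an
        # outsourced provider, without affecting any caller.
        return {emp: 2500.0 for emp in employee_ids}

def month_end(payroll: PayrollService, staff: list[str]) -> dict[str, float]:
    # Coupled only to the interface, never to an implementation detail.
    return payroll.run_payroll(staff)
```

The point is the coupling: `month_end` depends on the interface alone, so implementations can be exchanged like hardware components on a board.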
These ideas formed the backbone of best practice in Object-Oriented programming, which begat component-based development, which begat service-oriented architecture (SOA), web services and so on. Throughout the evolution of application development, design and architecture, the core principle of focusing on the service being delivered, as opposed to the technical gubbins required to deliver it, has remained key.
A similar process has taken place throughout the history of IT operations. Right from the beginning of commercially available IT, many organisations were unable to afford computer facilities of their own, and relied on ‘service bureaus’ for their payroll and so on. Meanwhile, as IT has come to be considered in terms of the services being delivered rather than the facilities provided, disciplines such as IT Service Management (ITSM) have emerged to enable their end-to-end management. From both a development and an operational standpoint, good IT practice has always been, and probably always will be, based around understanding the services it provides. It’s important to recognise that the services are the same, whether they are being discussed in terms of how they are developed or how they are managed. Without getting into motherhood, the principle behind a service lifecycle, which considers a service from its inception right through to its ultimate removal, is sound.
There are a couple of clever twists on the service theme. The first is that many IT departments today are as much about sourcing things in from the outside as they are about doing things in-house. As we have seen from our research, it is the smarter organisations that know how to source services – a message corroborated through numerous conversations with IT decision makers. Supplier management is not just about contracts; it is also about knowing what you want in the first place.
This principle extends into the Software-as-a-Service (SaaS) and cloud-based (hosted) IT delivery models. There’s nothing fundamentally wrong with these approaches to IT delivery, but getting the best out of them requires thinking about the service required, and then identifying providers that can deliver on the need.
Aww, and there I was saying I wouldn’t go into motherhood – but it is fascinating to watch some providers suggesting that such models are good just because they are ‘cloud-based’.
A second twist is that the theme plays very well outside of the IT department – to the extent that, looking at how departments such as Finance, HR, Customer Services and so on deliver services, it rather looks as if IT needs to catch up a bit. Going up a level, business itself is all about providing attractive and useful products and services to customers at the right cost, by adding value to raw materials and services from suppliers.
It really is services all the way down – a service an organisation wants to provide to its customers will be dependent on a hierarchy of internal services, some of which can be automated or supported by automation. The trick is to recognise that every time the word ‘service’ is being used it means (or it should mean) the same thing. If IT success is dependent on having the right conversations, a shared core principle that extends across all domains offers a good starting point.
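The ‘services all the way down’ hierarchy can be pictured as a simple dependency tree. A minimal sketch, with hypothetical service names chosen purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    """A service and the internal services it depends on."""
    name: str
    depends_on: list["Service"] = field(default_factory=list)

def flatten(svc: Service) -> list[str]:
    """List a service plus every internal service it ultimately relies on."""
    names = [svc.name]
    for dep in svc.depends_on:
        names.extend(flatten(dep))
    return names

# A customer-facing service resting on a hierarchy of internal ones,
# some of which could be automated or outsourced.
hosting = Service("hosting", [Service("storage"), Service("network")])
online_store = Service("online store", [Service("billing"), hosting])
```

Walking the tree with `flatten(online_store)` surfaces the full chain of internal services behind the customer-facing one – the same word ‘service’ meaning the same thing at every level.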
Content Contributors: Jon Collins