A few years back I was involved in a project that turned out far more interesting than I expected. The plan was to write a training course about a software development methodology. As you can see, it started from a reasonably low point in terms of interest – but it quickly evolved into a much more worthwhile exercise.
The course in question documented a Sun Microsystems internal approach known as “3DM”, or 3-dimensional methodology. For those familiar with the Rational Unified Process (RUP), it aimed to extend that process to cover how applications should be deployed to meet service criteria such as scalability, availability and so on.
In fact, like all good approaches, 3DM was not based on theory. Rather, it distilled the best practices learned by consultants in the field about when and where to adopt clustering, load balancing, replication, failover and other such constructs.
It’s all good stuff, and the general lesson is that good practice is out there. This isn’t the place to document the whys and wherefores, not only because the truth is ‘out there’ but also because it tends to depend on the hardware and software involved.
But surely, says the outsider, IT is going to be more about virtualised machines running modular applications on industry-standard servers? Doesn’t that mean that IT gets simpler and simpler, reducing any dependency on good design?
Sadly, no. Despite suggestions from some quarters that IT is getting ever simpler, the need for the skills to build reliable, scalable systems is as pressing today as it ever was.
Good systems design always was, and still remains, a constant battle between the theoretically possible and the actually practical. Today’s IT systems can be impossibly complex, running layer upon layer of barely compatible software, linking together older and newer systems that were never meant to be linked. From the outside in, IT may look like a Ford Mondeo – standardised to the point of being impossibly dull. To the engineers working on the inside however, IT is more like a Morgan, with each individual part, each connection and configuration item custom made.
As a result we can bandy around terms like ‘failover’ regardless of their actual practicality. Failover is nominally about taking a single workload and getting it up and running on another server, but in reality there is often a complex web of dependencies between server, network and storage hardware that can be difficult to unravel, never mind replicate. Are both servers (source and target) identically configured? Do they share the same network connectivity with the storage? Is the storage itself configured correctly to support the application in question? And so on and so on.
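To make that concrete, here is a minimal sketch of the kind of parity check those questions imply, written in Python. The host attributes it compares (CPU, memory, VLANs, visible LUNs, patch level) and their values are entirely hypothetical stand-ins for the real web of dependencies – it illustrates the idea, and is not drawn from the 3DM material or any particular tool.

```python
# A sketch of a pre-failover parity check: compare the source and target
# hosts' recorded configuration and flag any mismatch that would stop the
# workload coming up cleanly on the target. Field names are illustrative.

def failover_parity_check(source: dict, target: dict) -> list[str]:
    """Return human-readable mismatches between two host configurations."""
    issues = []
    for key in sorted(set(source) | set(target)):
        src_val, tgt_val = source.get(key), target.get(key)
        if src_val != tgt_val:
            issues.append(f"{key}: source={src_val!r} target={tgt_val!r}")
    return issues

if __name__ == "__main__":
    # Hypothetical host descriptions: hardware, network and storage visibility.
    source_host = {
        "cpu_cores": 16,
        "ram_gb": 64,
        "vlans": ("prod-10", "storage-20"),
        "visible_luns": ("LUN-0042", "LUN-0043"),
        "os_patch_level": "SP2",
    }
    target_host = {
        "cpu_cores": 16,
        "ram_gb": 32,                   # less memory than the source
        "vlans": ("prod-10",),          # missing the storage VLAN
        "visible_luns": ("LUN-0042",),  # cannot see one of the LUNs
        "os_patch_level": "SP1",
    }

    for issue in failover_parity_check(source_host, target_host):
        print("MISMATCH:", issue)
```

In a real environment the checks would of course extend to firmware levels, multipathing, zoning, licensing and the rest – which is precisely why the unravelling is so hard to do after the event.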
Virtualisation may help answer some of these questions of course, but only if it is considered architecturally, which brings us to the nub of the matter. Part of the challenge is that we’re not building in good service by design – it’s just not being costed into the business cases for new systems and applications, as we’ve seen in several research studies. As a result, such things as failover have to be bolted onto systems and applications after the event, rather than being built in from the start.
As with many complex problems, the temptation might be to offload the complexity onto a third party – which is perhaps one reason why interest in hosted services is growing.
However, unless you have worked out a way to offload the more complex stuff, you might just be adding to the problem. During the outsourcing wave, many organisations reported losing their best people to the outsourcer, leaving the dross behind (in some cases, to run the contracts). Might we end up with a similar issue with third-party hosting, in that the easier systems migrate, leaving the complexity behind and creating an integration challenge that now runs across the firewall?
One thing’s for sure. If we are going to achieve any state of IT nirvana any time soon, some pretty fundamental shifts are going to be required in the role played by good design. Best practice clearly exists, if we choose to take it up and work through the short-term pain and additional costs required to get things onto a firmer footing.
Or perhaps the only realistic option is to keep going with the candle wax and string, patching things together as they go wrong and adding new layers of technology and complexity on a regular basis. Perhaps we secretly prefer it this way – just as a Morgan suffers the foibles of being custom built, its design nonetheless anticipates the tinkering it will subsequently need. To change this would require a major shift in mindsets and behaviours at all levels, without which it is difficult to see how any nirvana state of service delivery could ever be achieved. If you feel differently, do tell.
Content Contributors: Jon Collins