For as long as I have been in IT, the ‘right’ way to drive things has been towards openness and interoperability. Indeed, over the past couple of decades we have seen some great leaps forward in both vendor cooperation and the development of relevant industry standards.
All of the great work around Web services, for example, has done much to ease the burden of systems integration. And when some vendors have gone against the flow, they have been chastised by customers and sometimes penalised by regulators.
Recently, however, I have become a little concerned that this trend towards openness is reversing. We have seen this in the mobile space, for example, with closed end-to-end solutions locking together devices, operating systems, software provision and even content delivery under the tight and uncompromising control of a single vendor.
The argument is that such behaviour is in the interests of stability, security, performance and an overall optimised user experience. There can be little doubt, however, that market control on the part of the vendor also has quite a bit to do with it, and that users and developers end up locked in along the way.
Meanwhile, over in the core enterprise IT domain, we have seen the emergence of so-called ‘appliances’: essentially black boxes, usually designed to perform a single function such as content filtering or network load balancing.
The idea here is to assemble a set of components that may be proprietary and/or heavily modified, then hard-wire them together so they work optimally as a highly tuned and robust single unit. Such products are sold and purchased on the premise that the user never wants or needs to know what’s going on inside the box. The solution is thus delivered and supported as a single entity.
This is great when dealing with highly specialised appliances and can yield benefits in the form of performance and simplicity, but there are dangers when the idea is taken too far in some other contexts, such as the delivery of ‘integrated stacks’.
The practice we are talking about here is based on stitching together hitherto discretely delivered components to form a platform or solution in which everything is pre-integrated and pre-optimised, sometimes generically, sometimes finely tuned to deal with a particular type of workload.
Typical components include things like server, storage and networking hardware, operating systems, database management systems, middleware and management tools, and even applications in some instances, such as comms and collaboration or business intelligence software.
The upside of this practice is that components are delivered as a coherent solution that is fit for purpose ‘out of the box’. This can potentially save customers a lot of time and expense, as even with open interfaces, effort is usually still required to assemble and test the platform or solution, and this ‘acceleration to value’ spirit is clearly behind some integrated stack propositions.
It is important to be wary, however, of vendors stepping over the line. If the message coming across is that the only way of getting the most out of the individual components included in the package is to use them together, then this should raise a flag from an interoperability perspective.
There is a big difference, for example, between saying that a particular RDBMS comes pre-optimised to run on a specific type of hardware when delivered as an integrated solution, and saying that the RDBMS will never run as well on alternative hardware, regardless of the effort you put in.
At the time of writing, integrated stacks offered by the likes of Oracle, Cisco, HP and IBM, enabled through various acquisitions and partnerships, are as yet largely unproven from a demand perspective. Freeform Dynamics research, however, has previously revealed that the customer appetite for bundled and pre-integrated offerings is significant in other areas such as application platforms.
It is also the case that some of the workload-specific offerings optimised to deal with analytics, web applications and so on are quite compelling when you consider the expertise and effort needed to achieve the same level of tuning through manual configuration. Bearing these factors in mind, it is reasonable to assume that this latest clutch of developments in the integrated stack space will tempt a significant number of organisations over time.
Given the dangers, though, when evaluating options and looking for shortcuts to value, it is worth assessing whether the vendor is coupling components so tightly that you would sacrifice significant functionality or performance, or incur significant overhead, if you ever tried to deviate from the formula in the future. That is almost the definition of lock-in. An ideal solution will lighten the integration and optimisation load without constraining your freedom.
But there is no right or wrong here; you may perfectly legitimately be willing to compromise on openness and live with longer-term risks and constraints in order to meet short-term needs or objectives. The only real requirement is to think through what’s on offer and consider the implications of the decisions you are making, so you avoid unpleasant surprises down the line.
Dale is a co-founder of Freeform Dynamics, and today runs the company. As part of this, he oversees the organisation’s industry coverage and research agenda, which tracks technology trends and developments, along with IT-related buying behaviour among mainstream enterprises, SMBs and public sector organisations.