In business, the phrase ‘change is constant’ may be a tad overused, but it is nevertheless true in many regards. The ideal response from IT is to be able to change as the demands on it do. The need for change may filter through from the business as a response to an external event, but we also instigate change independently when seeking to improve or streamline the way we do things. Drivers from both domains have stimulated software and infrastructure evolution over the last few years: the economy and intense competition on one hand, and technologies such as virtualisation helping businesses improve their capabilities beyond ‘just’ financial benefits on the other.
For IT shops aiming to tune the efficiency and effectiveness of the services they provide to the business, a natural consequence is to keep a keen eye on the relevance and value of those services, so as to make the right decisions about how and when to retire, replace or modernise them.
Regardless of the motivation for change, however, a few important considerations are worth bearing in mind. To be blunt, it’s the ‘I’ in IT that really matters to the business: without information, all the process, workflow and software in the world won’t run our businesses. Given this, it’s relatively easy to see how the value of data, and our obligations to protect it and maximise its value, can (or should) be used to steer changes in areas like software rationalisation and consolidation.
In a software rationalisation exercise, ‘just’ ensuring that information assets are protected on their journey from an old application to a new one might be a wasted opportunity. Business advantage is to be gained if the data that emerges from a rationalisation process is slicker, cleaner and of higher value than the data that goes in. Ultimately, the ‘input’ to the rationalisation process is a set of data repositories, databases and so on, and all of these can be treated: data that is fragmented or duplicated across multiple repositories, inconsistent or contradictory, difficult to access, formatted in ways that need translation, or stored and managed sub-optimally.
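To make that tangible, here is a minimal sketch in Python of the kind of normalisation and de-duplication pass implied above. The repositories, field names and merge rule are all invented for illustration; it is not a prescription for any particular tool.

```python
# Illustrative sketch only: records pulled from two hypothetical legacy
# repositories are normalised and collapsed before reaching the new system.

def normalise(record):
    """Trim whitespace and standardise casing so comparisons are reliable."""
    return {
        "customer_id": record["customer_id"].strip(),
        "name": " ".join(record["name"].split()).title(),
        "email": record["email"].strip().lower(),
    }

def deduplicate(records):
    """Collapse records sharing an email address, keeping the first seen
    and flagging contradictory entries for manual review."""
    merged, conflicts = {}, []
    for rec in map(normalise, records):
        key = rec["email"]
        if key not in merged:
            merged[key] = rec
        elif merged[key]["name"] != rec["name"]:
            conflicts.append((merged[key], rec))  # contradictory data
    return list(merged.values()), conflicts

# The same customer, fragmented across two legacy exports.
crm_export = [{"customer_id": "C1", "name": "jane  doe", "email": "JDoe@example.com "}]
erp_export = [{"customer_id": "E7", "name": "Jane Doe", "email": "jdoe@example.com"}]

clean, needs_review = deduplicate(crm_export + erp_export)
print(len(clean), "clean records;", len(needs_review), "conflicts to review")
```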
Essentially, the rationalisation process offers an opportunity to deal with these issues, but it’s only worth doing if there is some positive business impact to be gained, such as streamlining any new processes that exploit the data, reducing the cost of compliance, or moving towards a ‘single version of the truth’ – a goal that any business will aspire to, but rarely gets the chance to pursue.
To do any of this properly we need a combination of the right approach and the right technology. The former requires us first to understand the data impact of the applications to be retired – for example, do we actually need to treat and migrate all the data from the old application to the new one, or can we ascertain that some of it is of much less value to the business and simply archive it?
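As a purely illustrative example, the Python snippet below sketches how such a migrate-versus-archive triage rule might look. The two-year threshold and record fields are assumptions; a real assessment would also weigh legal retention requirements and business value.

```python
# A minimal sketch of the triage decision described above: split the old
# application's data into "migrate" and "archive" sets by last access date.

from datetime import date, timedelta

ARCHIVE_AFTER = timedelta(days=730)  # assumed policy: untouched for ~2 years

def triage(records, today=None):
    """Route each record to migration or archive based on last access."""
    today = today or date.today()
    migrate, archive = [], []
    for rec in records:
        if today - rec["last_accessed"] > ARCHIVE_AFTER:
            archive.append(rec)   # low current value: archive, keep retrievable
        else:
            migrate.append(rec)   # active data: cleanse and carry forward
    return migrate, archive

orders = [
    {"id": 1, "last_accessed": date(2011, 3, 1)},
    {"id": 2, "last_accessed": date(2008, 6, 15)},
]
migrate, archive = triage(orders, today=date(2011, 9, 1))
print(f"{len(migrate)} to migrate, {len(archive)} to archive")
```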
Building a picture of the ‘data under rationalisation’ and the needs of the target application architecture is also important, as is assessing migration risks – for example, what would the implications be if personnel were unable to access the data for a period of time because something went wrong?
Plenty of tools are available to help manage and streamline data migration processes, including archiving and retrieval software to ensure that data is moved to an accessible location and assigned appropriate indices to aid discovery. Data movement and migration tools can ensure the integrity of data (e.g. preservation of references) so it can be relied upon by the new applications, and data cleansing and quality management tools can really drive the value to be gained from the process itself, because it’s an ideal time to deal with issues of ‘dirty data’.
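By way of illustration, the following Python sketch shows one such integrity check in its simplest form: confirming that every reference in the source data resolves to a real parent record before anything is moved. The table and field names are invented; production migration tools perform far richer validation than this.

```python
# Hedged sketch of a pre-migration referential integrity check: find child
# records whose foreign-key style reference points at no known parent.

def find_orphans(parents, children, ref_field):
    """Return child records whose reference resolves to no parent id."""
    parent_ids = {p["id"] for p in parents}
    return [c for c in children if c[ref_field] not in parent_ids]

customers = [{"id": "C1"}, {"id": "C2"}]
invoices = [
    {"id": "I1", "customer_id": "C1"},
    {"id": "I2", "customer_id": "C9"},  # dangling reference: block or repair
]

orphans = find_orphans(customers, invoices, "customer_id")
if orphans:
    print("Migration pre-check failed; orphaned records:", orphans)
```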
Needless to say, we’ll always need to update, and eventually retire, the software applications we use. But given the rate of change around data protection laws, our responsibility to ‘do the right thing’ with data will see the data itself easily outliving the systems that hold it. Taking a data-centric approach to software rationalisation may feel a bit like making work for yourself, but ultimately it’s part of taking responsibility for what really matters to the business.
Content Contributors: Martin Atherton