By Freeform Dynamics
Our analysts recently took part in an expert clinic looking at some of the most common questions and challenges that arise as organisations look to modernise and transform their data centre environment. This paper outlines the questions thrown at them and their responses.
What’s the best way to introduce modern architectures and best practices?
It is a standing joke in IT that, when asked how to move forward towards the promised nirvana of tomorrow, the answer is always, “well I wouldn’t start from here”. But simply dropping all so-called ‘legacy’ systems into the bin and starting again is rarely an option.
Indeed, last December we asked IT professionals how they saw their data centres evolving. This is a question we have asked regularly over the past few years, and the answers tend to be pretty consistent whatever the context.
Unsurprisingly, very few see their organisation re-architecting everything via a single transformational initiative to modernise across the board. Many more expect to build a modern environment for new projects and leave existing systems alone. At the other end of the spectrum, significant numbers expect that nothing will change for the foreseeable future, or that modern architectures and tools will only creep in ad hoc, piece by piece.
But the most common approach, favoured by over forty per cent of enterprise respondents, is to build a modern environment for new services, to which existing systems are then migrated in an incremental fashion.
It is important to recognise that this methodology has proved its worth in many research projects we have undertaken over the years. When we have looked at the adoption of new solutions, it is clear that establishing a beachhead and then expanding usage over time is the approach recommended by successful early adopters. A big-bang migration of everything is usually too expensive or risky for all but the most confident and financially sound organisations.
When it comes to building the beachhead, it is essential to ensure you are building something ‘complete’, i.e. a solution with all the elements in place, but on a small and manageable scale. This is different from, say, an initial environment that picks off only part of the problem – e.g. advanced server management while ignoring storage and networking.
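To make the idea concrete, here is a minimal sketch in Python of what a ‘completeness’ check might look like. The element names and the example specifications are illustrative assumptions only; the point is simply that every layer is present, however small the scale.

```python
# A minimal sketch of a 'completeness' check for a beachhead environment.
# The element names and example specs below are illustrative assumptions,
# not a reference to any particular product; the principle is simply that
# every layer is present, but at a small and manageable scale.

REQUIRED_ELEMENTS = {"compute", "storage", "networking", "management"}

def check_beachhead(spec: dict) -> list:
    """Return the infrastructure elements missing from a proposed spec."""
    return sorted(REQUIRED_ELEMENTS - spec.keys())

# Example: a small but complete environment covers all four layers...
beachhead = {
    "compute":    {"servers": 4},
    "storage":    {"arrays": 1},
    "networking": {"switches": 2},
    "management": {"tooling": "single pane of glass"},
}
assert check_beachhead(beachhead) == []

# ...whereas 'advanced server management only' fails the completeness test.
partial = {"compute": {"servers": 40}, "management": {"tooling": "server manager"}}
print(check_beachhead(partial))  # ['networking', 'storage']
```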
Some respondents suggest pulling the best storage, networking, server and application specialists into a virtual team to set things up. This has the added benefit that, from the outset, the ‘beachhead’ is seen as an integrated initiative, not a solution favouring the interests or priorities of one particular domain.
As everyone starts from a different place and has diverse objectives, there is no methodology guaranteed to be the best way to introduce modern architectures and best practices. Perhaps the best advice is to make sure you really have a complete picture of your starting point. It is still surprising how few organisations are confident they possess accurate information about the services they have deployed, who is using them, and for what business purposes.
Even harder to understand is that, in many organisations, even basic inventory data on the physical systems being run is anything but up to date. So before moving forwards, make sure you know where your journey will be starting.
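As a simple illustration of the principle, the sketch below compares an asset register export against what can actually be found on the network. The CSV layout and column names are assumptions made for illustration; a real exercise would also draw on discovery tooling, a CMDB and the service owners themselves.

```python
# A minimal sketch of an inventory sanity check: compare what the asset
# register claims exists against what can actually be resolved on the
# network. The CSV columns (hostname, owner, service) are assumptions.

import csv
import socket

def verify_inventory(csv_path: str) -> None:
    """Flag inventory records whose hostnames no longer resolve."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                socket.gethostbyname(row["hostname"])
            except socket.gaierror:
                print(f"Stale record? {row['hostname']} "
                      f"(service: {row['service']}, owner: {row['owner']})")

# Usage: verify_inventory("asset_register.csv")
```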
What’s the role of integrated stacks, i.e. solutions in which compute, networking, storage and management are vertically integrated? How do you avoid lock-in with these?
The delivery of IT applications or services requires a number of elements to come together. While some solutions may need specialised kit in order to work, we can typically distil the required elements down to a common core of compute, data storage and communications. Mostly, we tend to think of these as the physical boxes of servers, disks and networking.
In the past, these were often highly integrated, such as the IBM mainframe, which was a complete end-to-end system with processing, storage, networking, and dedicated client terminals and peripherals. This brought benefits such as reliability, scalability and manageability, but also posed challenges such as high purchase cost and a certain lack of flexibility to adapt to new requirements.
With the arrival of minicomputers and then the workgroup server, the IT infrastructure landscape opened up. Cheaper, PC-based servers became popular and boosted the trend of scaling out rather than up. Difficulties with managing large amounts of locally attached storage in these servers led to dedicated storage servers and the rapid evolution of the storage area network. Meanwhile, connectivity settled on Ethernet and TCP/IP, which moved from small, shared hub-based networks to large-scale switched networks with highly configurable management.
The upshot of all this has been a proliferation of devices and solutions at each level of the IT infrastructure stack, giving a vast amount of choice in terms of vendor, performance, features and price. This has been great for flexibility, but our research shows that a by-product is a tendency towards fragmentation and isolation between servers, storage and networking, which hinders the ability to adapt to change and makes consistent, coordinated management that much harder to achieve. The impact of this on normal operations, as well as on the ability to be agile, is often overlooked.
This is where the concept of the vertical stack is making a resurgence. As with the mainframe, compute, storage and networking are brought together and closely integrated, with comprehensive management and orchestration enabling a more seamless approach to management and configuration, boosting agility.
The common view is that of a single-vendor solution, such as the HP Cloud Matrix, IBM PureSystems or Dell vStart, but the concept also applies to packaged solutions encompassing multiple vendors, such as the VCE vBlock or the Cisco/NetApp FlexPod.
These vertically integrated stacks are engineered to work together, but this should not require exclusivity in order to function. The key principle is that the standard hardware can be used where the solution meets requirements, but where other needs exist it should be possible to plumb in third-party kit.
To avoid lock-in, customers should look for standardised, open interfaces at each layer of the stack, with the ability to swap out kit at any level and replace it with an alternative, or to plug in third-party equipment as needed. This is not to say that no work is required – some integration will almost certainly be needed. But by insisting on common interfaces and standard integration packs, the aim is to keep this to a reasonable minimum.
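The sketch below illustrates the principle in miniature, using hypothetical backend names: higher layers of the stack code against an agreed interface, so the kit behind it can be swapped without rework above.

```python
# A minimal sketch of the 'standard interface at each layer' idea, using
# invented backend names. The point is that the stack depends only on an
# agreed interface, so equipment at any layer can be replaced or added
# without reworking everything above it.

from abc import ABC, abstractmethod

class BlockStorage(ABC):
    """The agreed interface every storage backend must honour."""

    @abstractmethod
    def provision_volume(self, size_gb: int) -> str:
        """Create a volume and return its identifier."""

class VendorAArray(BlockStorage):      # the stack's standard hardware
    def provision_volume(self, size_gb: int) -> str:
        return f"vendor-a-vol-{size_gb}gb"

class ThirdPartyArray(BlockStorage):   # third-party kit plumbed in later
    def provision_volume(self, size_gb: int) -> str:
        return f"third-party-vol-{size_gb}gb"

def deploy_service(storage: BlockStorage) -> str:
    # Higher layers depend only on the interface, not the vendor behind it.
    return storage.provision_volume(size_gb=500)

print(deploy_service(VendorAArray()))  # swap in ThirdPartyArray() freely
```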
What, exactly, is the ‘cloud OS’ or ‘cloud operating environment’, and why do I need it?
The answer to this question is a lot more meaningful if we first consider the context. In a recent study, participants highlighted a range of issues, mostly to do with fragmentation and disjoints, that stood in the way of achieving optimum delivery of IT services. One option put to them was to deal with this via a wholesale move to a more consistent public cloud environment, but this was generally rejected as being neither practical nor desirable.
Instead, when respondents were asked about their ideal environment, they envisaged a new style of data centre that allowed a lot more freedom and flexibility in the way resources were managed and allocated to applications and other workloads. Internal compute and storage capacity would be organised into pools, creating private clouds, where new applications could be provisioned rapidly without the traditional hassles and delays associated with dedicated system stacks.
Thereafter, dynamic workload management and orchestration would enable reallocation of resources on the fly, even automatically, to deal with changes in demand. And while the wholesale move to public cloud was rejected, many told us that blending external cloud services into the mix, almost to act as a virtual extension of the physical data centre, would be an important part of the equation. The idea was that applications would be placed wherever was most cost-effective, based on business and technical requirements and constraints. In addition, applications would never hit a resource wall, but nor would they hog capacity that they weren’t currently using.
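As a rough illustration of the placement idea, the following sketch picks the cheapest resource pool that satisfies a workload’s constraints. The pool definitions, cost figures and constraint model are invented for illustration and do not represent any real orchestration API.

```python
# A minimal sketch of policy-driven placement across internal pools and an
# external cloud. Pool names, costs and the constraint model are
# illustrative assumptions only.

def place_workload(workload: dict, pools: list) -> str:
    """Pick the cheapest pool that satisfies the workload's constraints."""
    candidates = [
        p for p in pools
        if p["free_capacity"] >= workload["capacity"]
        and (not workload["data_must_stay_onsite"] or p["onsite"])
    ]
    if not candidates:
        raise RuntimeError("No pool satisfies the workload's constraints")
    return min(candidates, key=lambda p: p["cost_per_unit"])["name"]

pools = [
    {"name": "private-pool-1", "onsite": True,  "free_capacity": 20,  "cost_per_unit": 5.0},
    {"name": "public-cloud",   "onsite": False, "free_capacity": 999, "cost_per_unit": 2.0},
]

# A sensitive workload stays on site; a flexible one goes wherever is cheapest.
print(place_workload({"capacity": 10, "data_must_stay_onsite": True},  pools))
print(place_workload({"capacity": 10, "data_must_stay_onsite": False}, pools))
```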
All of this would in turn allow the emphasis to be switched from managing the operation of systems, to the management of business service delivery, which is something most IT organisations aspire to, but few, to date, have pulled off in an effective manner.
To make all this work requires a new kind of operating environment, one that helps to manage virtual resources – network and storage, as well as compute – in a coherent, flexible and scalable manner. This includes both internal infrastructure and external services, with security, information management, service level and other types of policy managed centrally and applied consistently across the on-premise/cloud continuum.
The ‘Cloud OS’ is a term that has emerged in some circles to refer to the software layer that enables this operating environment to be created.
While some IT vendors can provide all or most of what is required in terms of server operating systems, hypervisors, cluster controllers, orchestration middleware, management tooling, and so on, it’s important to appreciate that the Cloud OS is not something you buy as a single product, but something you create or assemble. The vendor (or vendors) will provide you with the technology you need, which now exists to deliver on pretty much all of the aspects of the vision we have been discussing, but it’s up to you to deploy it appropriately.
In this respect, it sometimes helps to think in terms of creating *your* cloud operating system, and to appreciate that you may need to make adjustments to the way you work and even the way you are organised to get the most from it, but that’s another discussion.