In a nutshell

Deciding which direction to take with your storage purchasing is a tough call, especially since the advent of hyper-converged storage, which adds compute and network capabilities for a one-stop shop. Some would have you believe that ‘traditional’ storage is dead, or at least on its last legs, and that hyper-converged is the future. Meanwhile, others are still evolving and developing those traditional storage arrays and subsystems, typically with a strong emphasis on flash storage over spinning disk. How black and white is the choice, what are the key factors to think about when making it, and is there a third way to be found?

The hyper-active storage pitch

Reading through articles in the press and the marketing pitches pumped out by the big technology vendors, you could be forgiven for thinking that traditional storage – for which, read SANs, storage arrays, NAS and more – is already obsolete, and that hyper-converged storage based on commodity appliance hardware is the ‘One True Way Forward’.

There are elements of truth in the story, of course, but overall it is as unlikely as the often-predicted death of tape: sure, tape is no longer most people’s preferred choice for system backup, but it is still selling very nicely, thank you, especially into large but niche applications where media cost, portability, longevity and capacity are paramount.

This is even more the case for traditional storage, which is still king for many enterprise applications and easily the most widely deployed means of holding most forms of enterprise data. This is for a variety of reasons which, while they might not be as fashionable as hyper-convergence, are well rooted in the immediate needs and capabilities of the business. For example:
- You already have the management skills for traditional storage
- Traditional storage systems are mature and ‘enterprise-grade’
- The inclusion of enterprise flash can answer almost all performance issues
- The cost and difficulty of migrating existing applications onto hyper-converged systems may outweigh the eventual benefits, or create additional risk
- New storage architectures may also need your IT group to be restructured
- You are not sure HCI is mature enough for mainstream enterprise use
- You don’t see a huge need for change anyway (if it isn’t broken, don’t fix it).
In addition, hyper-converged storage is not just storage: although its essential foundation is a layer of software-defined storage, the hyper element comes from it also having full compute and networking layers, which is why it is also called hyper-converged infrastructure (HCI). The whole thing is then brought together under a single management framework and packaged, sold and operated as a unified system – a data centre in a box, if you like. Adopting hyper-converged storage may, therefore, have ramifications far beyond your storage.

ROBO, VMs and private cloud

The first workloads where hyper-converged storage – at least, as it exists today, because this is a young and continually evolving sector – has so far enjoyed most traction are solutions for ROBO (remote office, branch office) locations, along with VM hosting and desktop virtualisation. In particular, the latter are often set up in a cloud-like manner, with the IT team effectively acting as a private service provider.

Conversely, if you don’t need a private cloud, or you are not making extensive use of virtual machines (VMs) or related technologies such as virtual desktops (VDI, or virtual desktop infrastructure), then you probably don’t want the cost and potential disruption of migrating to HCI – at least, not yet. Similarly, while HCI can be a good fit for small outposts of a larger organisation, the evidence so far is that few HCI solutions are yet built both small and simple enough for smaller businesses; however, it is fair to assume this will happen in time.

If you do run lots of VMs, there are clearly advantages to be had from hyper-converged storage if it is done right, particularly in terms of easier acquisition and setup combined with simple management. HCI solutions are also beginning to be deployed to support a broader range of enterprise workloads, especially where the emphasis is on the flexibility of virtualised infrastructure – whether public or private – to deploy new applications rapidly, in minutes rather than days, and then scale them up and down automatically in response to changing workload demands. The management simplicity of HCI solutions and the ability to automate functions is particularly attractive in scenarios where ‘change’ is the norm rather than the exception.

So this leads to the question: just how different is HCI compared to traditional storage?

HCI concepts

Let’s take a few minutes to consider some of the buzzwords and concepts that you are likely to come across when looking at HCI storage.

HCI architectures

In the context of HCI, convergence is really about the ability to virtualise and abstract your compute, network and storage resources, then pool them and centralise their management. Hyper-convergence can be delivered in one of two fundamental architectures:

- HCI appliances, which are built using modular building blocks for simple scalability and are fully packaged, integrated systems ready to be plugged in and go.
- HCI software, which is loaded onto ‘industry standard’ servers to build the HCI system. Such solutions often come with a hardware compatibility list that defines, explicitly or implicitly, which hardware components are supported.

The two approaches can deliver very similar capabilities, but have distinct advantages and disadvantages. Prebuilt HCI appliances come pre-assembled and fully tested and, importantly, usually have support supplied directly by the vendor. This makes acquisition of such solutions and their ongoing operational management relatively straightforward.
Better offerings are also usually tightly controlled by the supplier to ensure that all the components work well together without introducing any unexpected performance bottlenecks. HCI appliances may also have administration tools that can monitor and manage the hardware effectively, rather than stopping at the HCI software level – another factor that may be attractive in certain scenarios. The major drawback can be that the appliance supplier may have only a limited range of modules to choose from.

Software-only solutions may allow a broader range of physical hardware to be used in the resulting HCI system, but you may then have to deal with multiple suppliers should operational maintenance issues occur. Equally, given that such systems are less tightly integrated, the overall performance may not be as consistent as that delivered by more tightly integrated appliances.

Storage abstraction and management

A key characteristic of HCI solutions is that abstracting the resources and services from the physical hardware makes them both flexible and fungible. For example, server virtualisation tools allow a VM to be deployed in minutes, then moved from one physical machine to another or replicated whole for disaster recovery. Storage virtualisation delivers similar flexibility for storage – it speeds provisioning, boosts utilisation and means that data is no longer tied to a specific disk or disks.

Perhaps more importantly, because these day-to-day operational processes are essentially just software driven – which is to say they are software-defined – they can, with the right management tools included, be made simpler to operate or even automated (a minimal illustration of this follows the summary list below). The latter is of course why the mega-scale cloud providers such as Amazon, Google and Microsoft are huge designers and users of hyper-converged infrastructure.

All of that means that hyper-converged storage has a lot going for it, and may well be desirable for newer web-scale applications, say, or where you are planning a private cloud type of environment. It certainly complements a heavily virtualised server setup, which is why a virtualised storage layer is an essential part of any ‘data centre in a box’ HCI appliance.

HCI capabilities and characteristics

Hyper-converged infrastructure can be summarised thus:
- Everything is virtual and abstracted for simpler or even automated management
- The underlying compute, storage and network resources are pooled and shared
- The focus is on ‘service’, be it applications, VMs or virtual desktops, rather than on the hardware components of the platform
- You can build cloud-like and scalable services
- A well designed and supported HCI appliance can be a data centre in a box for small and mid-size organisations
- A virtualised and abstracted storage layer is a necessary underpinning.
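
To make the software-defined point concrete, here is a minimal sketch of that kind of automation using libvirt’s Python bindings – a widely used open-source virtualisation API, rather than any particular HCI product’s interface. It assumes python-libvirt is installed, a local QEMU/KVM host is reachable, and a storage pool named ‘default’ exists; the volume name and size are purely illustrative.

```python
# Minimal sketch: software-defined storage provisioning via libvirt's
# Python bindings. Assumes a reachable QEMU/KVM host and an existing
# storage pool called 'default'; names and sizes are illustrative only,
# not taken from any specific HCI product.
import libvirt

VOLUME_XML = """
<volume>
  <name>app01-disk0.qcow2</name>
  <capacity unit='G'>20</capacity>
  <target><format type='qcow2'/></target>
</volume>
"""

conn = libvirt.open('qemu:///system')               # connect to the hypervisor
try:
    pool = conn.storagePoolLookupByName('default')  # the abstracted storage pool
    vol = pool.createXML(VOLUME_XML, 0)             # carve out a 20 GB virtual disk
    print('Provisioned volume:', vol.name(), 'at', vol.path())
    print('Pool now holds:', pool.listVolumes())
finally:
    conn.close()
```

The specific API matters less than the shape of the operation: a provisioning task that might once have been a ticket to the storage team becomes a few lines of code, which is precisely what makes automation at scale practical.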
So where does this leave investments in traditional storage? In the short term, they are getting on with the job at hand. In the longer term, newer converged storage platforms also support traditional access schemes, so they can be used for both – examples might be Ceph and OpenStack (a minimal sketch of this dual role follows below). Better still, several hyper-converged storage platforms also allow you to bring in existing server storage (direct attached) and separate storage systems, virtualising them and adding their capacity to the pool.
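As an illustration of that dual-access idea, the sketch below uses Ceph’s Python bindings (rados and rbd, which ship with Ceph) to create a block device image – consumable by a hypervisor much like a traditional LUN – in the same cluster that also serves native object access. The pool name ‘vmimages’ and the sizes are assumptions for the example; the configuration path is the common default.

```python
# Minimal sketch: one Ceph cluster serving both 'traditional' block access
# and newer object access. Assumes python3-rados and python3-rbd are
# installed, /etc/ceph/ceph.conf points at a reachable cluster, and a pool
# named 'vmimages' already exists (pool and image names are illustrative).
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('vmimages')
    try:
        # Block-style access: create a 20 GiB RBD image that a hypervisor
        # can attach to a VM much like a LUN from a traditional array.
        rbd.RBD().create(ioctx, 'app01-disk0', 20 * 1024**3)

        # Object-style access: the same pool can also be addressed natively.
        ioctx.write_full('inventory-note', b'capacity pooled from legacy array')
        print(ioctx.read('inventory-note'))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```

The same software-defined layer is answering both kinds of request, which is what allows a platform like this to sit underneath existing workloads while newer, cloud-style applications are brought alongside.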
Hyper-challenges

At the moment you may have different teams doing different things, each team with its own skill-set and evolutionary path – and perhaps more importantly, its own power-base. In many larger organisations there will be server admins specialising in servers, VM admins in virtual machines, and storage admins who know all about capacity provisioning, array configuration, NAS management and so on.

Hyper-convergence could change that. As resource pools converge, ownership will change, skill-sets must adapt (with generalists taking over more of the routine ops work and specialists instead focusing on areas such as architecture and implementation, especially for those edge cases where low latency and/or high performance are essential), and those power-bases may come into conflict. It is possible that you might see turf wars between your storage specialists and your VMware specialists, for example, or that the virtualisation team will need extra resource in order to handle the extra workload. Fortunately, while the move to HCI may be inevitable for most organisations for a range of workloads, it is not urgent in the short term, so there is time to plan.

In addition, there are few actual standards for HCI. Almost every HCI supplier has its own software and its own underlying software-defined storage (SDS) framework, and often its own physical building blocks or list of supported hardware. Some of them use the same file systems, but that is not enough for interoperability or multivendor clusters. True commodity-based converged infrastructure is a step beyond the majority of the turnkey hyper-converged storage world, and at scale is likely to require something more along the lines of an OpenStack or CloudStack deployment.

Plus, HCI is changing and evolving. It started out in edge use-cases (such as heavy VM usage) and is growing to be suitable for more and more mainstream workloads. But there will always be (opposite) edge cases where it doesn’t fit, and where traditional storage architectures remain the preferred option. Extreme examples might be highly latency-sensitive applications where there is simply no room for anything software-defined, such as high-performance computing.

What to think about next

The first thing, as ever, is to do a thorough inventory. What storage have you got now, what applications is it supporting, and are they on the most appropriate infrastructure? Where are there opportunities to take advantage of HCI, and what will keep you on the traditional storage route for the foreseeable future?

The second is to keep aware of developments in HCI, software-defined storage and data centre convergence (in the wider sense of virtualising and pooling the data centre layers so they can be managed together and automated). The future is increasingly software-defined: as more and more business applications move to running on VMs, converged management and orchestration tools will allow more and more to be automated via software.

Third, assuming you do see potential in HCI, you need to get an idea of the likely impact that adopting such systems could have on day-to-day operations in your computer room or data centre. While such investigations tend to be dominated by technical capabilities and characteristics, and how well the technology meets your needs, it is also essential to consider how well new technologies will fit within your overall IT and operations landscape.

And fourth, as well as considering whether a system has enough resource to run your workloads, you need to look at whether its modules will work robustly in your environment and whether it has the scalability and flexibility you will require – does it scale uniformly, say, or can you expand its compute, network and storage elements at different rates? Good management capabilities are also essential: if you need to run multiple workloads, each with different characteristics, is the level of granularity sufficient to build VMs with the right resources for each task?

The bottom line

Software-defined storage – of which hyper-converged storage is, in many ways, a special instance – has a lot going for it, but it is unlikely to replace all traditional storage technology in the medium term. So invest in what your business needs now, but keep an eye on your storage strategy and plan for the future, depending on your specific considerations and use cases.

And if, as is very likely, you do go on buying traditional storage, do so with future usage in mind and, if possible, with plans for how new solutions may be incorporated. Is your storage vendor likely to cut you off and offer an expensive forklift upgrade to HCI, or do they have a path for convergence and coexistence? Check the software options too – as we have discussed, HCI is essentially a software technology, and layering the right software over your existing enterprise storage could well be another valid route to convergence.

The likelihood is that you will end up running traditional storage and HCI storage side by side for a considerable time. You must therefore ensure you have storage platforms that meet your needs today and your probable demands for tomorrow. More importantly, you need management tools and operational processes able to work efficiently and effectively in the evolving software-defined world. This also means ensuring your suppliers and partners are ready and able to step up to the standards you set.
Bryan Betts is sadly no longer with us. He worked as an analyst at Freeform Dynamics between July 2016 and February 2024, when he tragically passed away following an unexpected illness. We are proud to continue to host Bryan’s work as a tribute to his great contribution to the IT industry.