While your storage infrastructure may be meeting current needs, you will almost certainly be investing in it in one way or another as demands escalate and new requirements emerge. The question is, should you upgrade or extend your existing systems, or implement new solutions to meet new needs? There’s no simple answer, so the imperative is to understand your options and make use of your suppliers.
The journey to storage perfection
Storage technology has developed rapidly over the last five years, providing enterprises with an expanding range of solutions to meet ever-evolving storage requirements. But the diversity of options also poses its own challenges, notably: how do you build storage solutions to handle genuinely new workloads, and should this approach differ from how you evolve your existing infrastructure? When a new storage demand arises, can it be integrated into your existing platforms, or would it be better to build new architectures to which you then migrate existing workloads over time?
Alas, there’s no one single answer to building the perfect storage environment. Rather, there are likely to be several options depending on where you start from and how well you can predict your likely future requirements. Few organisations have a completely greenfield storage environment which can be built from scratch, but some may have greenfield projects within the existing IT infrastructure.
Working out how to build the future storage architecture takes planning and a long-term commitment if the creation of storage environments that are difficult to administer and secure is to be avoided.
Storage technology evolution
When we start to explore what’s behind the recent advances in storage, it’s clear that two things are going on. Sometimes it’s a case of evolving business requirements driving the need for different approaches, while on other occasions emerging solutions are in themselves creating new opportunities and use cases. With this in mind, let’s take a look at how things are developing.
Pressures on storage
The pressures on storage are well known and widely experienced by organisations of all sizes, and it’s not just incessant data growth. Business expectations of ever faster performance, higher availability and improved disaster recovery capabilities are now the norm, along with the clear requirement to secure data effectively.
Furthermore, there is the challenge that business managers increasingly expect new applications and services to be delivered rapidly. This means the underlying storage systems must be able to flex quickly and efficiently as demands require.
But most organisations have storage infrastructures composed of both old and new equipment, often with multiple silos, each supporting particular applications and services. Modifying such systems is complex, as very few organisations have the resources, both human and financial, to update everything in a ‘big bang’ refresh.
More importantly, few have the courage and risk appetite to do so, as the issues around migrating data between old systems and new are well understood, and the scars such projects have left in the past burn deep.
Evolution of storage solutions in recent times
Until very recently, the majority of organisations built storage systems comprising Direct Attached Storage Devices (DASD) connected to servers, Network Attached Storage (NAS) and/or Storage Area Networks (SANs). In many, if not most, environments tape solutions are also deployed, either dedicated to individual storage systems or as a central shared facility.
These traditional systems have worked effectively in the past, and often continue to serve immediate needs, but they are increasingly falling short as demands for better performance impact more and more applications and services. These shortcomings, together with the difficulty of meeting the flexibility requirements mentioned above, have encouraged both commercial vendors and the open source community to come up with ways of enhancing existing storage solutions, as well as developing new ones.
Let’s take a look at some of the specifics.
Recent storage developments
It is beyond the scope of this paper to document every technical development that has taken place recently in the storage arena, but focusing on some of the prominent ones, recent research highlights how more modern and advanced solutions are finding their way into storage landscapes to sit alongside traditional technologies (Figure 1).
Figure 1
The options you see listed at the bottom of this chart are worthy of more discussion.
Flash storage / SSD
One of the most visible and significant developments in storage is the emergence and maturing of flash and solid state disks (SSDs). The technology here is designed to deliver significantly faster performance and lower latency than that associated with hard disk drives. As illustrated on the above chart, flash is widely regarded as the most rapidly accelerating phenomenon in the storage world.
Flash storage solutions are available in a number of formats. The two most widely deployed take the form of ‘all flash arrays’ populated entirely with flash disks, or ‘hybrid arrays’, in which flash is combined with spinning hard disks. In the case of the latter, software controls the automatic placement of data onto the most appropriate resource. A third option, currently less widely deployed, is flash storage hosted directly inside the server via a high-speed PCI Express (PCIe) interface.
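To illustrate the hybrid principle, here is a minimal sketch in Python of the kind of placement logic a hybrid array’s software applies. It is purely illustrative; the class and its simple hottest-first policy are our own invention, not any vendor’s actual algorithm:

```python
# Illustrative sketch of hybrid-array data placement: hot blocks are
# promoted to flash, cooler blocks stay on (or return to) spinning disk.
# Real arrays use far more sophisticated heuristics; this shows only
# the principle.
from collections import Counter

class HybridTier:
    def __init__(self, flash_capacity_blocks):
        self.flash_capacity = flash_capacity_blocks
        self.access_counts = Counter()   # block id -> recent access count
        self.flash_resident = set()      # blocks currently held on flash

    def record_access(self, block_id):
        self.access_counts[block_id] += 1

    def rebalance(self):
        # Keep the most frequently accessed blocks on flash;
        # everything else lives on the hard disk tier.
        hottest = {block for block, _ in
                   self.access_counts.most_common(self.flash_capacity)}
        promotions = hottest - self.flash_resident
        demotions = self.flash_resident - hottest
        self.flash_resident = hottest
        return promotions, demotions

tiering = HybridTier(flash_capacity_blocks=2)
for block in ["a", "a", "a", "b", "b", "c"]:
    tiering.record_access(block)
print(tiering.rebalance())  # blocks 'a' and 'b' are promoted to flash
```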
Flash has traditionally been regarded as an ‘expensive high performance’ option, but this is changing as the technology continues to mature; solutions today are capable of delivering strong performance at an increasingly competitive price.
Storage virtualisation and SDS
Two other areas are also becoming important, albeit at a slower rate, namely storage virtualisation and software defined storage (SDS).
Storage virtualisation, similar in many respects to server virtualisation, involves the use of a software layer to essentially divorce the storage hardware from the services accessing it. The end result is often the creation of a single logical storage pool from multiple networked storage devices. Storage virtualisation solutions usually provide central management of all networked storage devices, with some also providing additional functions such as data replication, point in time copies, etc.
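The core idea is easier to see with a small example. The sketch below (illustrative Python; the StoragePool class is hypothetical) shows a virtualisation layer satisfying a volume request from a pool of devices, so the consumer of the volume never needs to know which hardware it landed on:

```python
# Illustrative sketch of storage virtualisation: a software layer maps
# logical volumes onto capacity drawn from a pool of physical devices,
# hiding the hardware from the services that consume the storage.

class StoragePool:
    def __init__(self):
        self.devices = {}   # device name -> free capacity in GB
        self.volumes = {}   # volume name -> [(device, GB), ...]

    def add_device(self, name, capacity_gb):
        self.devices[name] = capacity_gb

    def create_volume(self, name, size_gb):
        # Satisfy the request from whichever devices have free space;
        # a single logical volume may span several physical devices.
        allocation, remaining = [], size_gb
        for device, free in self.devices.items():
            if remaining == 0:
                break
            take = min(free, remaining)
            if take:
                allocation.append((device, take))
                self.devices[device] -= take
                remaining -= take
        if remaining:
            raise RuntimeError("pool exhausted")
        self.volumes[name] = allocation

pool = StoragePool()
pool.add_device("nas01", 500)
pool.add_device("san01", 1000)
pool.create_volume("app_data", 1200)   # spans both devices
print(pool.volumes["app_data"])        # consumer just sees one volume
```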
SDS offerings deliver similar capabilities, and certain vendors use the two terms interchangeably, even though SDS and storage virtualisation solutions are different. Conceptually, SDS solutions seek to remove core functionality from the hardware layer and build it into generic software that can run on top of any supported platform. Examples of the functions we are talking about here include deduplication, replication, snapshotting, compression, thin provisioning, etc.
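Deduplication is a good example to make concrete. The following sketch (illustrative Python, assuming a simplistic fixed-size chunking scheme; real SDS products use far more sophisticated approaches) shows the essence of content-addressed deduplication, where identical chunks are stored only once:

```python
# Illustrative sketch of content-addressed deduplication: each chunk
# is identified by the hash of its content, so duplicate chunks are
# stored once and merely referenced thereafter.
import hashlib

class DedupStore:
    def __init__(self):
        self.chunks = {}   # sha256 digest -> chunk bytes (stored once)
        self.files = {}    # file name -> ordered list of digests

    def write(self, name, data, chunk_size=4):
        digests = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # dedupe happens here
            digests.append(digest)
        self.files[name] = digests

    def read(self, name):
        return b"".join(self.chunks[d] for d in self.files[name])

store = DedupStore()
store.write("a.txt", b"AAAABBBBAAAA")   # 'AAAA' appears twice...
store.write("b.txt", b"AAAACCCC")       # ...and again in a second file
print(len(store.chunks))                # only 3 unique chunks stored
print(store.read("a.txt"))              # data reassembles correctly
```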
A significant challenge associated with SDS, and to a lesser degree storage virtualisation, is the question of where storage functionality is better exploited. Should it reside in the SDS software itself or in the underlying hardware (if it has such capabilities natively resident)? Often there is no simple answer as much depends on the nature of the workloads and the platforms being utilised.
SDS solutions are relatively new to commercial markets and their uptake is only now picking up, but such solutions are often deployed in conjunction with two other young storage offerings, scale-out storage and object storage.
Scale-out storage
Scale-out storage is based on the concept of building solutions that can easily grow in terms of capacity and throughput as requirements change. They do this by tackling two common challenges – firstly, the difficulty of adding capacity to legacy storage platforms without risking service interruption or data compromise, and secondly, the issue of keeping up with escalating performance requirements and expectations.
In the past, it was quite common that the only way to improve the performance of storage systems was to add more disks in order to spread the throughput and alleviate I/O bottlenecks (the most common cause of performance problems). The problem was that you could only do this so far before you reached other limits of the platform, e.g. in relation to the processing power of its controllers.
Scale-out solutions take a different approach. They combine sophisticated management software with x86 based hardware to provide self-contained ‘building blocks’, containing a matched set of storage, I/O and processing capabilities. These are introduced as needed to provide additional capacity or improve performance, in the knowledge that as the system expands, no single resource is going to ‘max out’.
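One technique commonly used to spread data across such building blocks is consistent hashing, which keeps data movement to a minimum when a node is added. The sketch below is illustrative Python rather than any particular product’s implementation:

```python
# Illustrative sketch of consistent hashing: each building block owns
# many small slices ('virtual nodes') of the hash space, so adding a
# block takes over only a fraction of the data rather than forcing a
# disruptive full rebalance.
import bisect, hashlib

def _hash(key):
    # Stable hash so placement is identical on every node in the cluster
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, nodes, vnodes=100):
        self.ring = sorted((_hash(f"{n}:{v}"), n)
                           for n in nodes for v in range(vnodes))

    def node_for(self, key):
        # Walk clockwise to the first virtual node at or after the key.
        i = bisect.bisect(self.ring, (_hash(key),)) % len(self.ring)
        return self.ring[i][1]

keys = [f"object-{i}" for i in range(1000)]
before = ConsistentHashRing(["block1", "block2", "block3"])
after = ConsistentHashRing(["block1", "block2", "block3", "block4"])
moved = sum(before.node_for(k) != after.node_for(k) for k in keys)
print(f"{moved} of {len(keys)} objects move when a fourth block is added")
```

Running this shows that only around a quarter of the objects move when the fourth block joins, which is why capacity can be grown incrementally without service interruption.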
These solutions are already widely deployed by service providers, especially those operating at very large scale, but they are now also starting to be adopted in mainstream environments. A key development here is commercial vendors packaging innovative software from the open source community to produce enterprise-grade scale-out solutions.
Object storage
The last ‘new kid on the block’ (if you’ll forgive the pun) that we will cover here is ‘object storage’. Like scale-out systems, it is already widely deployed in service provider environments. To place it on the storage map, you can think of it as a relatively new alternative to its better known cousins, ‘file’ and ‘block’, as a way to address, access and manipulate data.
The object storage approach allows systems to be built that can achieve great scale using inexpensive hardware and which have significant self-healing capabilities. As a consequence these systems have been used by service providers to hold very large volumes of unstructured data such as those associated with large scale web services. It is probably also worth noting that object storage works hand-in-hand with some of the other concepts we have discussed, e.g. an increasingly common configuration involves file and block access layers sitting on top of an object store, with the whole system underpinned by a scale-out architecture.
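The essence of the object interface can be shown in a few lines. In the illustrative Python sketch below (the ObjectStore class is hypothetical), data is addressed by a flat identifier and carries its own metadata, rather than living at a path in a hierarchy (file) or at a device offset (block):

```python
# Illustrative sketch of the object interface: data is written once,
# given a flat identifier, and stored alongside arbitrary metadata.
import uuid

class ObjectStore:
    def __init__(self):
        self._objects = {}   # object id -> (data, metadata)

    def put(self, data, metadata=None):
        object_id = str(uuid.uuid4())   # flat namespace, no directories
        self._objects[object_id] = (data, metadata or {})
        return object_id

    def get(self, object_id):
        return self._objects[object_id]

store = ObjectStore()
oid = store.put(b"<video bytes>", {"content-type": "video/mp4",
                                   "owner": "web-frontend"})
data, meta = store.get(oid)
print(oid, meta["content-type"])
```

The flat namespace and self-describing metadata are what make it straightforward to distribute objects across inexpensive hardware and replicate them for self-healing.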
While use in enterprises is still relatively low, object storage is already finding its place as a foundation for archive systems that facilitate the efficient, long term, high volume storage of data without sacrificing dynamic access and online search and retrieval capabilities. As time goes on, we expect object storage technology use to increase in relation to areas such as digital customer engagement (think multi-media content) and the building of enterprise private clouds.
Things to think about when evaluating options
The choice of solutions available to store data has never been more diverse, but this diversity raises a number of factors that must be considered when deciding what to adopt and how to integrate it into the existing storage environment.
An important starting point is understanding the potential use cases that exist in your environment for each technology. What workloads do you currently have in place, what service characteristics do they need to have, and how do you expect these to vary over the months and years ahead? The next step is to look at your existing platform base plus the skills you have in place (or are available via suppliers and partners), along with the operational processes you employ.
Finally, it all comes down to how much confidence you have in the ability of each solution you are considering to meet future needs, and this applies both to your existing portfolio and to new solutions. Sometimes maintaining the ‘status quo’ is actually the most risky and costly option. Having said this, if you are looking at new technologies, it is essential that you do your due diligence and make sure you are comfortable with the solutions themselves and the ability of suppliers to support them.
The bottom line
Few organisations have a greenfield site on which a storage architecture can be built from the ground up. Existing NAS, SAN and DASD storage systems are not going to disappear; it is rather the case that new platforms will be added into the complex mix you probably already have in place. The trick is to identify which new solutions provide the best answer, not just to the business and IT challenges you face now, but also to those you are likely to come up against within the lifetime of any systems in which you are investing. This principle applies whether the investment is in the form of an existing system extension or upgrade, or adoption of a whole new solution.
It is important to remember that, generally speaking, the technologies we have discussed in this paper are not mutually exclusive; indeed, they are often more (cost) effective when combined. Such combinations are easy to spot, with flash storage often utilised in scale-out systems to provide a performance tier, and scale-out solutions often employed in SDS landscapes. The challenge, as always, is working out how to implement new storage technologies alongside existing platforms so that both can be better utilised. Again, take advantage of your suppliers, who are usually more than willing to educate you on the options available and help you evaluate which solution might be most appropriate for a particular need.
Tony is an IT operations guru. As an ex-IT manager with an insatiable thirst for knowledge, his extensive vendor briefing agenda makes him one of the most well informed analysts in the industry, particularly on the diversity of solutions and approaches available to tackle key operational requirements. If you are a vendor talking about a new offering, be very careful about describing it to Tony as ‘unique’, because if it isn’t, he’ll probably know.