In a nutshell
All-flash arrays are arguably coming of age, but in an early market, with lots of vendors jostling for position and making all kinds of promises, you need to be careful when evaluating options. While most of the historical challenges have been neutralised or at least significantly reduced, there are still some uncertainties that need to be addressed, so it’s important to seek the right kind of guarantees.
So why the buzz around all-flash arrays?
The use of flash storage in the datacentre has been evolving continuously over the last decade. Manufacturers began by incorporating a flash-based cache into traditional arrays to speed up data access. As prices crept down from the ‘eye-wateringly extortionate’ to simply the ‘extremely expensive’, vendors started to become more adventurous. This led to the emergence of fully hybrid, multi-tiered systems which elevated the role of flash to persistent storage, taking the form of a high-performance tier of solid-state disk sitting in the same box as a pool of high-capacity HDDs.
As prices continued to edge down, and flash began to prove itself in an enterprise context, some vendors and their customers took things a step further: they began populating hybrid arrays entirely with flash media. Meanwhile, a wave of upstart new vendors entered the market with storage arrays designed and optimised from the ground up with all-flash in mind. These were blisteringly fast, but often lacked some of the advanced features found in traditional disk and hybrid arrays. As they still represented the expensive option, they were deployed for the handful of applications that really needed them, but were considered less suitable for general use.
Bringing us right up to date, we now commonly hear claims that today’s all-flash arrays offer performance, reliability and a proper enterprise-level feature set, all while coming in at a price point that allows them to be used more broadly. It’s a nice high-level proposition, and it’s the source of much of the current industry buzz, but seasoned practitioners know the devil is always in the detail. With this in mind, we recently ran an online survey in which 187 IT professionals provided their views.
Sanity check on the need
When it comes to drivers, the appeal of all-flash solutions still largely revolves around the needs of high-performance applications, with VDI (virtual desktop infrastructure) acknowledged as a specific workload driving demand for some (Figure 1).
From a datacentre efficiency perspective, the above chart also tells us that many acknowledge the benefits of all-flash arrays in terms of reducing the pressure on facilities – floor space, power, cooling and so on. This stems from the fact that solid-state technology can be packed more densely into racks, has no moving parts, and you don’t have to play tricks to make it perform (like breaking your storage out across lots of small capacity spindles). Related to this is the reduction of management overhead. When all parts of the system are inherently low latency and high throughput, you don’t need to allocate a lot of time and expertise to tuning. A simpler architecture then means less administration effort in general.
The last point of note in terms of drivers is the notion of ‘a strategic drive to an all-flash datacentre’ coming out at the bottom of the list. This suggests that IT teams might be generally positive, but not quite to the extent of betting the farm just yet.
Potential brakes on progress
Some of the caution we are picking up is undoubtedly down to many not totally accepting that all of the historical concerns have been dealt with. The lingering perception of a high cost per unit of capacity stands out prominently at the top of the list. Having said this, there is a clear acknowledgement by a sizeable proportion of our sample base that this depends on the vendor and solution, which by implication suggests that at least some suppliers have been moving in the right direction on this matter (Figure 2).
Most of the other concerns we see listed relate to uncertainties about the readiness of flash to deal with classic enterprise needs for robustness, durability, and predictability in areas such as performance and capacity. Against this backdrop, the results shown suggest that some suppliers are clearly guilty of – let’s just say – extreme optimism when it comes to making claims and promises. Again, however, the evidence is that supplier behaviour varies, so the lesson is to beware of snake oil sales reps.
When is a terabyte not a terabyte?
One thing to beware of when discussing all-flash array specifications and pricing with suppliers is the genuine difficulty of figuring out what a quoted unit of capacity actually means. It’s rare for the absolute physical capacity of a flash module to be used as a metric, so direct comparisons with spinning disk are generally difficult because you’re not comparing like with like.
This is down to a number of factors. For example, the system will set aside a certain percentage of physical space to deal with degradation of the flash media over time. You then have to add back an even bigger chunk, because very low latency allows inline data reduction (deduplication and compression) to run without affecting performance. Put this together with the fact that you can push flash media to much higher levels of utilisation (because you don’t need to allow headroom to avoid fragmentation slowdown), and the ‘effective’ or ‘usable’ capacity is considerably higher than the raw number you started with.
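To make the arithmetic concrete, the minimal sketch below shows how an ‘effective’ capacity figure might be derived from raw capacity. The over-provisioning percentage, data reduction ratio and utilisation level it uses are purely illustrative assumptions, not figures from our survey or from any particular vendor.

```python
# Minimal sketch: estimating 'effective' capacity from raw flash capacity.
# All figures here are illustrative assumptions, not vendor-quoted values.

def effective_capacity_tb(raw_tb: float,
                          overprovision: float = 0.20,  # assumed share reserved for wear management
                          data_reduction: float = 4.0,  # assumed dedupe + compression ratio
                          utilisation: float = 0.90) -> float:  # assumed safe utilisation level
    """Estimate usable ('effective') capacity in TB from raw capacity."""
    usable_physical = raw_tb * (1 - overprovision)  # space left after over-provisioning
    return usable_physical * data_reduction * utilisation

if __name__ == "__main__":
    raw = 10.0  # 10 TB of raw flash
    print(f"{raw:.0f} TB raw -> ~{effective_capacity_tb(raw):.1f} TB effective")
    # With these assumptions: 10 x 0.8 x 4.0 x 0.9 = 28.8 TB of 'effective' capacity
```

The point is not the exact numbers, but how sensitive the end result is to each assumption: change the data reduction ratio from 4:1 to 2:1, for instance, and the ‘effective’ figure halves.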
The problem is that there are so many variables involved in all this – not least the exact type of flash media, the mix of data (which will affect compression ratios), and the fundamental design of the system as a whole (which determines the level of runtime optimisation). Trying to price storage sensibly in line with business value in a dynamic digital environment is hard enough already; throw the ‘flash factor’ into the mix and it’s no surprise that many see a lot of room for confusion and misalignment (Figure 3).
So how can you be sure that you will ultimately get the level of capacity you think you are paying for? And what about promises made in other areas, such as performance, multiple 9s availability, the durability of flash media, or even the lifetime of the entire system and the upgradability and maintenance fees you can expect as it gets older? Vendors frequently make claims on all such things during the buying cycle.
Put your money where your mouth is
Given all of this uncertainty and doubt, many of those participating in our study would clearly like suppliers to back up their promises with firm guarantees (Figure 4).
What’s interesting about this chart, apart from the relative emphasis on different types of guarantee, is the fact that a relatively small percentage of respondents feel that any of the items on the list are unreasonable to expect.
From the customer perspective, this is totally understandable. Given that the all-flash array market is still relatively young, the majority of buyers will probably not have that much experience. Add to this the rate at which the technology is changing at the moment (evolution of flash media, techniques for optimising its use, the design of overall systems, and so on) and performing due diligence becomes extremely hard. Suppliers both know the truth about their technology and have experience across many customers, so it doesn’t seem unreasonable at all that they should be expected to take some responsibility.
For any vendors reading this who are at this point starting to cringe, however, perhaps this next chart will focus your mind (Figure 5).
Put simply, those charged with specifying requirements and making investment cases in relation to storage already have a hard enough job convincing those with sign-off authority to allocate and approve the necessary funding. If you want IT teams to take that first big step of going down the all-flash route rather than sticking with what they know, you have to help them by eliminating the perceived risk. This is even more critical if the customer was an early adopter of all-flash solutions and is now looking at a previous investment that didn’t deliver on expectations and is essentially a dead end.
Bottom line
All-flash arrays have come on in leaps and bounds over the last two or three years, and have a lot to offer in terms of both service-level enhancement and reduced overheads on IT. As demands continue to grow, and economics continue to narrow the differential between all-flash solutions and traditional systems, technology in this area will become an increasingly sensible option. Right now, however, while the opportunity is there, the relatively early market we’re currently in makes a degree of change and uncertainty inevitable. Against this background, when exploring investments in all-flash solutions, it’s as important to focus on the supplier and the deal they put on the table as it is on the technology itself.
Dale is a co-founder of Freeform Dynamics, and today runs the company. As part of this, he oversees the organisation’s industry coverage and research agenda, which tracks technology trends and developments, along with IT-related buying behaviour among mainstream enterprises, SMBs and public sector organisations.