If a datacentre were a person, who would it be? My thinking right now is Gordon Brown. Regardless of what you think of the man or his party, if asked to step back and consider things for a moment, most people might agree that he finds himself stuck all too often between a rock and a hard place. He tries to do the right thing and has it thrown back in his face. He simply can’t do right by everyone.
Such are the challenges of rushing with goodwill into each and every situation without first considering the impact of our changes on “the system” as a whole.
Today’s datacentre, if it were sentient, may indeed feel like a beleaguered politician trying to please everyone at once. If you don’t like the political analogy, how about the idea of a sickly patient being poked and prodded by a never-ending procession of consultants, all with their own diagnoses and motivations?
Enough analogies. There are indeed many things in the datacentre we would change if given time, money and power, both political and electrical. But before we charge in, now is the right time to step back and take a cool, calm look at what we are actually trying to achieve.
On the one hand, we have a multitude of things we could reassess: power consumption, cooling, building management. Then we have the guts – the stuff that does the work – servers, mainframes, networks. And finally we have the services we provide: applications, desktops, hosted services and so on.
On the other hand, we have “requirements”: the things the datacentre has to do to satisfy the needs of “customers”, in whatever form they come. Satisfying those needs is, after all, the sole purpose of the datacentre.
A simple view is that the datacentre provides computing services as dictated by a set of agreements covering, not least, function, performance, availability, scalability, security and resilience. Anyone running a datacentre would readily acknowledge that satisfying the demands of customers as efficiently and as cheaply as possible is a good idea. This is simple common sense.
Currently, we have a somewhat perfect storm of choice and pressure to make everything much faster, cheaper and kinder to the planet. Individually, all the options open to us look absolutely fantastic on paper.
In practice, however, unless we take a hard, balanced look at all the choices before us and the business requirements we are trying to satisfy, there is a risk that a fragmented approach to datacentre improvement will simply create more silos, more management headaches and proportionally less business value.
Indeed, every business in the world wants to achieve more than it did last year. Everyone wants to exploit the promise of dynamic IT, and everyone would like to make less of an impact on the planet. But only in that order.
So where might we start sketching out a plan to exploit our IT resources – in this case our datacentres – more effectively while ticking as many boxes as possible?
The place to start is with the real constraints: floor space, say, or power availability. These are fundamental metrics, and without clarity on them there is little point in planning anything else.
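To make that concrete, here is a minimal back-of-envelope check, sketched in Python with purely illustrative figures, of the kind of question we mean: does the projected IT load actually fit within the facility's power and floor-space budgets?

```python
# Illustrative capacity sanity check; every figure here is invented.
FACILITY_POWER_KW = 400.0    # total power available to the datacentre
FLOOR_SPACE_RACKS = 120      # rack positions available

planned_racks = 100          # racks we intend to deploy
avg_power_per_rack_kw = 4.5  # average draw per rack, including overhead

projected_load_kw = planned_racks * avg_power_per_rack_kw

print(f"Projected load: {projected_load_kw:.0f} kW of {FACILITY_POWER_KW:.0f} kW available")
print(f"Rack positions: {planned_racks} of {FLOOR_SPACE_RACKS} available")

if projected_load_kw > FACILITY_POWER_KW or planned_racks > FLOOR_SPACE_RACKS:
    print("Plan exceeds a hard constraint; rethink before going further.")
else:
    print(f"Power headroom: {FACILITY_POWER_KW - projected_load_kw:.0f} kW")
```

With these made-up numbers the plan already breaches the power budget, which is precisely the sort of thing worth knowing before anything else is sketched out.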
Then it is worth considering the different types of workloads that need to be executed. Do we have a mainframe? Does it run at peak capacity? (Probably not.) Could it be used for some types of workload that are currently not even considered? (Very likely.) Can we increase the utilisation and manageability of our server estate? If we do, what would be the impact on overall power, cooling and storage requirements?
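To illustrate that last question, the rough sketch below, again with invented numbers, estimates how consolidating a lightly used server estate onto fewer, busier machines might ripple through to total facility power, using a simple PUE (power usage effectiveness) multiplier to stand in for cooling and other overhead.

```python
# Illustrative consolidation estimate; every figure here is invented.
servers_before = 200
utilisation_before = 0.15   # typical of an under-utilised estate
utilisation_after = 0.60    # target utilisation for consolidated hosts
power_per_server_kw = 0.4   # average draw per physical server
pue = 1.8                   # facility power / IT power (cooling, losses, etc.)

# Hold the useful work constant: fewer servers running hotter.
# Real capacity planning would round up and keep headroom.
servers_after = round(servers_before * utilisation_before / utilisation_after)

def facility_power_kw(servers: int) -> float:
    """IT load scaled up by PUE to include cooling and other overhead."""
    return servers * power_per_server_kw * pue

before = facility_power_kw(servers_before)
after = facility_power_kw(servers_after)
print(f"{servers_before} servers -> {servers_after} servers")
print(f"Facility power: {before:.0f} kW -> {after:.0f} kW "
      f"({100 * (before - after) / before:.0f}% reduction)")
```

Of course, real servers do not draw flat power regardless of utilisation, and denser racks concentrate heat, so the cooling and storage picture shifts too. The sketch simply shows why the arithmetic has to be done across the whole system rather than one box at a time.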
The big risk, however, is that of rushing in and fixing things in isolation without first understanding the impact on other areas, such as storage, cooling and power consumption. The danger is that we make no net positive change, merely shifting the burden elsewhere or, worse, diminishing performance.
We should remember that the datacentre is already a green-ish entity by virtue of its existence: we don’t ship tonnes of information in paper form, or send people to install software, or have multiple physical buildings and people carrying out the same tasks for each and every branch or office. Nor do we execute as many processes via physical and mechanical means. Datacentres already remove huge volumes of physical work – and the associated carbon emissions.
Even so, we know we can do it better. Our research shows us time and again that IT isn’t broken, but it can be improved. The challenge is to do it in an orderly, sustainable and consistent fashion.
Looking forwards, we are likely to find we need more, bigger datacentres. A sensible trade-off is a small increase in the power consumption and carbon footprint of datacentres, offset by a greater reduction from other activities. This is the very point of IT: automation and the manipulation of data instead of physical entities.
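The arithmetic behind that trade-off is simple enough to sketch, once more in Python with invented figures:

```python
# Illustrative net-carbon comparison; figures invented for the example.
extra_dc_emissions_t = 50   # added tonnes CO2e/year from a bigger datacentre
avoided_elsewhere_t = 120   # tonnes CO2e/year avoided (travel, paper, duplicated sites)
net_t = extra_dc_emissions_t - avoided_elsewhere_t
print(f"Net change: {net_t:+} tonnes CO2e/year")  # negative means an overall reduction
```

If the second number does not comfortably exceed the first, the expansion is not the green trade it appears to be.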
To progress, to find the right balance for our businesses and the planet, it is important not to tackle in isolation the many issues we face and the choices on offer to us.