by Jon Collins, Tony Lock and Dale Vile
Most medium and large organisations need to run ‘compute-intensive’ applications of
some form. While High Performance Computing (HPC) is not new, it has traditionally been
seen as a specialist area – is it now geared up to meet more mainstream requirements?
EXECUTIVE SUMMARY
Compute-intensive application workloads are not industry-specific
Today’s computer systems are more powerful than ever. At the same time, the need among medium and large businesses to run highly demanding workloads that make maximum use of available computing power is also growing. Understandably, larger organisations have more such requirements than smaller ones, and these workloads are more prevalent in certain verticals such as financial services, telecoms and research. However, the need is evident across the board.
Not all compute-intensive needs are currently being met
More often than not, such demanding workloads are being run in batch mode rather than interactively, which is far from ideal: smaller organisations (those with fewer than 5,000 employees) in particular tell us that their compute-intensive needs are not being met. The hurdles to solving this problem are not just a matter of finding sufficient time and resources; they also involve existing applications and current infrastructure, suggesting that legacy issues are holding organisations back.
The gap is closing between specialist HPC and more mainstream, compute-intensive IT
While traditional HPC may have been about building custom compute platforms for specialised
applications, today’s HPC is not as isolated as many might think. Specialists no longer see HPC as
a separate domain; in addition, HPC is increasingly reliant on commodity equipment and software.
While the gap with mainstream computing may be closing, the journey is not over yet, as HPC
systems still require considerable customisation compared to general-purpose machines.
The HPC community has much to give in terms of skills and experience
Lessons learned in HPC environments are equally applicable to delivering infrastructure that supports more general compute-intensive workloads – for example, architecture and design skills around networking and communications, power, cooling and so on. Indeed, the HPC community is better placed than most to identify candidate workloads that could benefit from the HPC treatment – candidates that might not be evident to those who are not HPC-savvy.
Meanwhile, however, the evolution of HPC itself needs to accelerate
While demand for compute-intensive platforms may be high, the traditional supply chain for HPC is not changing that fast. Developments in other areas of IT, such as the adoption of virtualisation and cloud-based hosting models, may increase momentum here. In particular, it is generally agreed that automation and configuration tools are lacking – though this will inevitably change as such models become more widely used.
This report is based on the findings of a research study completed in November 2009, in which feedback was gathered from 254 respondents, predominantly IT professionals, with direct or indirect experience of high-end server computing environments. The report was sponsored by Microsoft, though the study was designed, executed, analysed and interpreted on a completely independent basis by Freeform Dynamics.
Content Contributors: Jon Collins, Tony Lock & Dale Vile
Dale is a co-founder of Freeform Dynamics, and today runs the company. As part of this, he oversees the organisation’s industry coverage and research agenda, which tracks technology trends and developments, along with IT-related buying behaviour among mainstream enterprises, SMBs and public sector organisations.