PCs and diversity go hand in hand, whether it’s how we choose for them to look, feel and perform, or what we expect them to do.
As well as business machines, they have become gaming machines, communicators and media centres. They have also become smaller and more mobile. Part of the evolution enabling all this choice is the ongoing development of graphics acceleration technologies, which have already helped us leave behind the slow, 2D world in favour of a richer 3D, multimedia-driven experience.
It used to be that discrete cards with highly specialised graphics processing units (GPUs) were needed to enable such a rich experience. Ever-faster cards appeared, first for 2D and then for 3D acceleration, and it seemed that faster would always be better.
However, different demands led to the introduction and widespread adoption of integrated graphics, offering what could (optimistically at times) be called “good enough” performance. This suited users after basic, price-competitive computing, as well as business IT, where driver stability and operational costs mattered more than raw performance, and mobile users chasing longer battery life.
So the graphics world split into two camps, with integrated graphics on one side and discrete graphics on the other. But just as the market looked reasonably settled, further changes are arriving that will alter the landscape.
Enter hybrid solutions. These combine the best of integrated and discrete graphics, allowing a choice between the two to be made depending on the task being executed. If the task can be carried out in a low-performance, power-optimised mode, the integrated graphics are used; if it requires more horsepower, the discrete graphics element kicks in.
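To make the idea concrete, here is a minimal, purely illustrative sketch in Python of the kind of policy a hybrid system might apply. The task names, threshold and field names are hypothetical and do not represent any vendor's actual switching logic.

```python
from dataclasses import dataclass

# Purely illustrative: the threshold and field names below are hypothetical,
# not any vendor's real switching policy.

@dataclass
class Task:
    name: str
    gpu_load_estimate: float  # 0.0 (idle) to 1.0 (fully loaded)

def choose_gpu(task: Task, on_battery: bool) -> str:
    """Pick the integrated GPU for light work (or on battery),
    and the discrete GPU when the task needs more horsepower."""
    if on_battery or task.gpu_load_estimate < 0.3:
        return "integrated"   # low power, "good enough" performance
    return "discrete"         # higher performance at a higher power cost

print(choose_gpu(Task("email", 0.05), on_battery=True))    # -> integrated
print(choose_gpu(Task("3D game", 0.9), on_battery=False))  # -> discrete
```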
But why choose a hybrid system at all? The answer is a little convoluted, but it boils down to Moore’s Law and the Pareto Principle (otherwise known as the 80/20 rule). The majority of the market will be happy with the performance on offer, even as functions such as the GPU increasingly become part of the “platform”. We saw this when GPU functions moved into the chipset during the first wave of graphics integration. Meanwhile Moore’s Law, under which the number of transistors on a chip doubles roughly every 18 months, drives ever tighter integration, with the GPU moving onboard with the CPU just as the memory controller did before it. Inevitably, the PC platform will ship almost universally with an integrated GPU. The question is how long this will take; history suggests 18-24 months for mainstream markets.
Even if every PC ships with integrated graphics, it is far from game over for discrete GPUs. Integrated GPUs will improve over time, but their performance will inevitably lag behind mainstream and performance discrete GPUs because they work within a limited power and transistor budget. Features and performance will be adequate for the majority of use cases, but there will still be plenty of users and applications for which a discrete GPU is worthwhile. The main change is that, in this future, a high-performance GPU will sit alongside an integrated GPU to form a hybrid solution, whereas previously most systems used either integrated or discrete graphics, but not both.
What a universal integrated GPU brings to the table is a degree of freedom for discrete GPU designers: future discrete parts can work synergistically with the integrated GPU rather than replicating its functionality. The technology has already moved on from logging out and back in again to activate a change, to dynamically switching between GPUs; in the future, both will work side by side, with work divided between them simultaneously.
If it is a near certainty that integrated graphics will become ubiquitous and hybrid solutions common, what advantages can be gained? For starters, moving universally to a common integrated graphics solution establishes a baseline of capabilities across the PC estate. It also enables across-the-board benefits such as power optimisation and long-term stability of platform and drivers.
Throw in new developments that rely on tight integration between CPU and GPU, such as the remote desktop keyboard and video facilities on new Intel CPUs with integrated GPUs, and it begins to make sense to have integrated graphics in place whatever the graphics requirements. Where users need this augmented with discrete graphics for extra performance, the new hybrid systems make that straightforward.
But probably one of the most important changes that hybrid graphics architectures enable will be to boost the uptake of GPU data processing, which is just coming into vogue. GPU computing has the potential to significantly alter the nature of high-performance applications built from standard components. Both AMD (with ATi) and nVidia offer their own custom application environments for this (Stream and CUDA respectively), Apple champions the cross-vendor OpenCL standard, and Microsoft has DirectCompute for running applications on the GPU.
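For a flavour of what this looks like in practice, below is a minimal sketch of offloading a simple calculation to the GPU using OpenCL via the pyopencl Python package. It assumes an OpenCL driver and runtime are installed; the kernel, buffer and variable names are our own illustration rather than anything taken from the vendors’ toolkits.

```python
# Minimal GPU-compute sketch using OpenCL through pyopencl.
import numpy as np
import pyopencl as cl

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

ctx = cl.create_some_context()   # picks an available OpenCL device
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel runs once per element, in parallel, on the device.
program = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

program.add(queue, a.shape, None, a_buf, b_buf, out_buf)

result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)
```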
Processing on the GPU allows certain algorithms to be greatly accelerated. That may make creative types and engineers happy, but if it stops them doing other things at the same time, adoption will be less than stellar. Hybrid GPU architectures may well ease adoption: the discrete GPU can be dedicated to computation while the integrated GPU provides an adequate level of graphics performance in the meantime. The main issue will be the level of support that application vendors give to GPU-based computing, but where an application does support it, a hybrid GPU solution may be a better investment than extra CPU cores or memory.
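Building on the previous sketch, the snippet below shows how an application might enumerate the available OpenCL GPUs and reserve a discrete part for computation, leaving the integrated GPU free to drive the display. The name-matching heuristic is an assumption made purely for illustration, not a robust way to identify integrated graphics.

```python
# Sketch: pick a discrete GPU for compute, leave the integrated GPU for display.
import pyopencl as cl

gpus = [dev
        for platform in cl.get_platforms()
        for dev in platform.get_devices(device_type=cl.device_type.GPU)]

def looks_integrated(dev):
    # Crude, assumed heuristic: integrated parts often report the CPU vendor's name.
    return any(word in dev.name.lower() for word in ("intel", "integrated"))

discrete = [d for d in gpus if not looks_integrated(d)]
compute_device = (discrete or gpus)[0]   # fall back to whatever is available

print("Running GPU compute on:", compute_device.name)
ctx = cl.Context(devices=[compute_device])
queue = cl.CommandQueue(ctx)
# ...build programs and enqueue kernels against `queue` as in the previous sketch.
```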
So that’s our current thinking on this new wave of graphics developments. Is this something you have been asked about or are playing with? And how will it affect your thoughts on provisioning users with new kit?
Content Contributors: Andrew Buss