Desktop virtualisation is all about centralising the main aspects of client computing: processing, data transfer and storage. Taking a previously highly distributed architecture and condensing it in this way can have a massive impact on both the network and storage infrastructure.
Before any discussion of the impact of desktop virtualisation on networking and storage, it is vital to recognise that there is no “one true virtual desktop” solution. Instead, there are several distinct types, such as shared server sessions, dedicated blades, and streamed desktops or applications. Each of these caters for a different need and has a distinct effect on networking and storage. Trying to optimise for all of them can be as big a challenge as getting to grips with managing the “normal” desktop estate.
First up is the impact on networks, which tend to be in place for many years and have reached a level of performance and reliability that means they are largely invisible (even if the kneejerk response to many IT issues is to blame the network first). Networks have also developed into distinct architectures, with datacentre and campus environments each catering for particular host types, applications and data flows.
Moving to a virtual desktop solution can have a major impact on the network. A classic case with VDI is that data flows that used to run between a client PC on the campus network and a server in the datacentre are now concentrated within the datacentre itself. This can require a refresh of datacentre networking and may render prior investment in the campus network obsolete.
Another issue is that desktop environments have become richer with the uptake of content such as high-definition sound and video. Far from cutting network traffic, moving to desktop virtualisation could drive it up again. And to throw more spanners in the works, with media content factors such as latency become important. Catering for this by implementing mechanisms such as Quality of Service will increase the cost and complexity of the network.
At the start of a project it is often difficult to tell how things will evolve, and best practice in this area is still developing. A few predictive tools can help with modelling to give a general steer, but often it comes down to a phased rollout with monitoring and management to ensure things are going to plan. In many cases, scoping for a “worst case” scenario is advisable given the difficulty and expense of upgrading the network should it become necessary. This makes highlighting the importance of the network, and securing funding for it, vital for long-term success.
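As a rough illustration of the kind of modelling involved, the sketch below estimates aggregate datacentre-facing bandwidth for a mixed estate under a worst-case concurrency assumption. The per-session figures, estate mix and headroom factor are illustrative assumptions only; real numbers should come from pilot monitoring.

```python
# Rough worst-case bandwidth scoping for a VDI rollout.
# All per-session figures below are illustrative assumptions;
# substitute measurements from your own pilot monitoring.

SESSION_PROFILES_MBPS = {
    "office":     0.3,   # assumed: typical knowledge-worker session
    "rich_media": 5.0,   # assumed: HD video / multimedia session
}

def worst_case_bandwidth(sessions: dict, concurrency: float = 1.0,
                         headroom: float = 1.3) -> float:
    """Aggregate Mbps if `concurrency` of users are active at once,
    with `headroom` margin for protocol overhead and bursts."""
    raw = sum(SESSION_PROFILES_MBPS[profile] * count
              for profile, count in sessions.items())
    return raw * concurrency * headroom

if __name__ == "__main__":
    estate = {"office": 1800, "rich_media": 200}  # hypothetical estate mix
    mbps = worst_case_bandwidth(estate, concurrency=0.9)
    print(f"Plan for roughly {mbps:,.0f} Mbps ({mbps / 1000:.1f} Gbps) "
          "of datacentre-facing capacity")
```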
Next up is storage, which is often a major hidden cost behind desktop virtualisation. This is critical, because many desktop managers are not all that familiar with backend storage platforms.
In many desktop virtualisation solutions there is a doubling up on storage: disks remain in the client machines, but storage is also required for the backend infrastructure. Server storage is usually far more expensive per gigabyte than client disks, so unless it is kept under control this can quickly cause the cost of a desktop virtualisation project to spiral.
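To see how quickly this doubling-up adds cost, the short calculation below compares an estate’s client disk spend with the equivalent backend allocation at server-grade prices. Every size and price here is a hypothetical placeholder.

```python
# Illustrative back-of-envelope cost of duplicated storage in a VDI project.
# Every figure here is a hypothetical placeholder; plug in real quotes.

NUM_DESKTOPS = 2000
IMAGE_GB_PER_DESKTOP = 40      # assumed per-user image/profile footprint

CLIENT_COST_PER_GB = 0.10      # assumed commodity client disk, per GB
SERVER_COST_PER_GB = 2.50      # assumed server-grade storage, per GB

client_spend = NUM_DESKTOPS * IMAGE_GB_PER_DESKTOP * CLIENT_COST_PER_GB
server_spend = NUM_DESKTOPS * IMAGE_GB_PER_DESKTOP * SERVER_COST_PER_GB

print(f"Client-side disk equivalent:  ${client_spend:,.0f}")
print(f"Backend storage, full copies: ${server_spend:,.0f}")
print(f"Cost multiplier: {server_spend / client_spend:.0f}x")
```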
One trap to avoid is assuming that an existing storage platform will be able to handle the desktop virtualisation load, thereby sidestepping the investment and risk of acquiring a new platform purely for desktop virtualisation.
Much of the feedback we’ve had from people who’ve tried this is that it ends up being a major problem. It may be possible to use shared storage, but it will require significant testing to confirm. In most cases, there will be a need to budget for a dedicated virtual desktop storage platform.
The hit caused by having to invest in server-grade storage can be steep, so it pays to have a plan of action to keep it to a minimum. It doesn’t make sense to centralise client computing and then keep separate data stores for each PC, so good control of the build and image process is needed.
Rather than storing thousands of individual images that eat up space, a few master images can be used, with differences applied dynamically to make up each specific client image. De-duplication and compression can then reduce the space consumed still further.
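The difference this approach makes is easy to quantify. The sketch below compares full per-user copies with a handful of master images plus per-user difference disks, then applies an assumed de-duplication and compression ratio; all sizes and ratios are illustrative assumptions.

```python
# Illustrative storage comparison: full per-user images vs masters + deltas.
# Sizes and the dedup/compression ratio are assumptions for the example.

NUM_DESKTOPS = 2000
FULL_IMAGE_GB = 40        # assumed size of a complete desktop image
NUM_MASTERS = 3           # assumed number of master ("golden") images
DELTA_GB_PER_USER = 3     # assumed per-user difference disk
DEDUP_RATIO = 0.6         # assumed 40% saving from dedup + compression

full_copies_gb = NUM_DESKTOPS * FULL_IMAGE_GB
master_delta_gb = NUM_MASTERS * FULL_IMAGE_GB + NUM_DESKTOPS * DELTA_GB_PER_USER
optimised_gb = master_delta_gb * DEDUP_RATIO

print(f"Full per-user images:  {full_copies_gb:>9,} GB")
print(f"Masters + deltas:      {master_delta_gb:>9,} GB")
print(f"After dedup/compress:  {optimised_gb:>9,.0f} GB")
```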
Above all, though, investment in management tools and processes is critical. With desktop virtualisation, the workspace that users need to be productive becomes a service. Few users will accept a desktop virtualisation solution that is slow and unreliable, which means it needs to be proactively monitored to maintain service assurance.
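As a flavour of what proactive monitoring can look like in practice, the snippet below measures TCP connect latency to a connection broker and flags breaches of a service threshold. The host, port and threshold are hypothetical placeholders, and a production setup would feed such probes into a proper monitoring system.

```python
# Minimal service-assurance probe: measure TCP connect latency to a
# desktop broker and warn when it exceeds a threshold. The host, port
# and threshold below are hypothetical placeholders.
import socket
import time

BROKER_HOST = "vdi-broker.example.com"   # hypothetical broker address
BROKER_PORT = 443
THRESHOLD_MS = 50.0                      # assumed acceptable connect latency

def probe_latency_ms(host: str, port: int, timeout: float = 2.0) -> float:
    """Return the time taken to open a TCP connection, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

if __name__ == "__main__":
    try:
        latency = probe_latency_ms(BROKER_HOST, BROKER_PORT)
        status = "OK" if latency <= THRESHOLD_MS else "WARN: above threshold"
        print(f"{BROKER_HOST}:{BROKER_PORT} connect {latency:.1f} ms [{status}]")
    except OSError as exc:
        print(f"ALERT: broker unreachable ({exc})")
```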
Content Contributors: Andrew Buss