To date, much of the outside attention on virtualisation has fallen on the infrastructure side of the equation, that is, the impact of adding such a software layer to physical machines and so on. But as thinking moves beyond the pilot phase for virtualisation, the spotlight falls on the more important part: the software to be run within the virtual machines.
From the perspective of the development environment, three facets of virtualisation spring to mind: the environment as a whole; validation and testing; and the role of virtualisation in deployment. Kicking off with the development environment, client-facing technologies such as Virtual Desktop Infrastructure and Application Virtualisation deserve more than a cursory glance.
I have to confess I was a little sceptical when I was first introduced to application virtualisation. This “provide-the-app-with-all-its-dependent-libraries-in-an-appropriate-package” approach sounded a little too much like static linking – was there really more to it? The answer lies in how multiple versions of the same application can be distributed, accessed and otherwise managed.
From an app dev perspective, a major challenge has always been keeping multiple versions of software tools running side by side – for example, linking the right version of the GUI builder to the files it creates – exactly the challenge that application virtualisation was created to solve. VDI builds on this theme, in that it enables multiple desktop configurations to be defined and deployed in the most appropriate manner.
In some cases, the preference might be for locking things down – for example, to provide a defined set of tools for a pool of contractors. In other cases, a subset of developers may need as much flexibility as possible, and this can be offered without opening things wide for everybody.
Between VDI and application virtualisation, then, scope exists to give developers the tools they need for the jobs they are doing. While that is arguably true of any computer user, developer requirements tend to change more quickly than those of the average desk jockey. In areas such as evaluating new tools, the additional flexibility will no doubt be most welcome.
A resulting area of attention, however, is dependency management. It’s all very well giving people access to the tools they want and need, but when debugging something that was released a year ago, how straightforward will it be to reconstruct the environment in which it was created? “Theoretically possible” is not the same as “probable”, and it is only by understanding the dependencies between tools, release versions and external services that such a picture can be redrawn.
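To make that a little more concrete, here is a minimal sketch in Python of what capturing the picture at release time might look like: a script that records the versions of the tools used for a build into a manifest stored alongside the release artefacts. The tool names, commands and release number are hypothetical placeholders; the point is simply that recording the dependencies when you ship is far easier than reconstructing them a year later.

    # Minimal sketch: record the toolchain used for a release so the build
    # environment can be reconstructed later. Tool names and commands are
    # illustrative placeholders, not a recommendation of specific products.
    import json
    import subprocess
    from datetime import datetime, timezone

    # Map of tool name -> command that prints its version (hypothetical examples)
    TOOLS = {
        "compiler": ["gcc", "--version"],
        "build_tool": ["make", "--version"],
    }

    def tool_version(cmd):
        """Return the first line of a tool's version output, or a marker if absent."""
        try:
            out = subprocess.run(cmd, capture_output=True, text=True, check=True)
            return out.stdout.splitlines()[0] if out.stdout else "unknown"
        except (OSError, subprocess.CalledProcessError):
            return "not installed"

    manifest = {
        "release": "1.0.0",  # placeholder release identifier
        "captured": datetime.now(timezone.utc).isoformat(),
        "tools": {name: tool_version(cmd) for name, cmd in TOOLS.items()},
    }

    # Store the manifest alongside the release artefacts
    with open("release-manifest.json", "w") as f:
        json.dump(manifest, f, indent=2)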
This may be common sense to all you software configuration management gurus out there, but a cursory Web search reveals very little in the way of documented best practice, let alone actual tool support. We’ll just have to keep watching that space – and of course, let us know if you have any experiences to share.
To testing and workload deployment
Software testing was the primeval swamp from whose banks the current slew of virtualisation technologies first emerged, and it stands to reason that virtualisation has to be a given in this area. What’s not to like, from a developer perspective, about being able to run up a test environment without having to jump through hoops to access what is generally a highly constrained pool of kit?
Virtualisation not only makes this possible, it also offers new benefits, including portability testing (you can run up VMs that ‘replicate’ multiple target environments) and performance testing (you can play with memory characteristics, CPU cores and so on to find out how things fare on a lower-specced machine). This isn’t meant to sound like an ad, by the way – this is the heartland of virtualisation activity, and these are well-trodden paths for many organisations.
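To give a flavour of the performance testing point, here is a rough sketch that assumes a VirtualBox environment and drives its VBoxManage command line from Python to dial an existing, powered-off test VM down to a lower specification before a run. The VM name and the figures are placeholders, and the same idea applies to whichever hypervisor you actually use.

    # Sketch: resize a test VM to a lower specification before a performance run.
    # Assumes VirtualBox's VBoxManage CLI is installed and on the PATH; the VM
    # must be powered off, and the name and numbers below are placeholders.
    import subprocess

    VM_NAME = "app-under-test"              # hypothetical test VM
    LOW_SPEC = {"memory_mb": 1024, "cpus": 1}

    def reconfigure(vm, memory_mb, cpus):
        """Set the memory size and CPU count of an existing, powered-off VM."""
        subprocess.run(
            ["VBoxManage", "modifyvm", vm,
             "--memory", str(memory_mb),
             "--cpus", str(cpus)],
            check=True,
        )

    if __name__ == "__main__":
        reconfigure(VM_NAME, **LOW_SPEC)
        print(f"{VM_NAME} reconfigured to {LOW_SPEC} - ready for a test run")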
But what of the workloads themselves? One school of thought suggests it makes sense to build applications with virtualisation in mind. Independent software vendors have been doing this for some time – it’s one way, for example, to offer pre-configured demonstration versions of their software. It’s also quite a straightforward transition for those ISVs that have adopted an appliance model – think malware protection and WAN acceleration, for starters.
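For the appliance-style approach, the short sketch below again assumes VirtualBox: a pre-configured VM is exported as a single OVA package that a prospect or customer can simply import and run. The VM and file names are placeholders.

    # Sketch: package a pre-configured VM as a distributable appliance.
    # Assumes VirtualBox's VBoxManage CLI is on the PATH; names are placeholders.
    import subprocess

    VM_NAME = "demo-appliance"        # hypothetical, pre-configured demo VM
    OUTPUT = "demo-appliance.ova"     # appliance package to hand to customers

    # Export the VM (and its disks) into a single OVA file
    subprocess.run(
        ["VBoxManage", "export", VM_NAME, "--output", OUTPUT],
        check=True,
    )
    print(f"Appliance written to {OUTPUT}")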
The same might be true for packages being built for internal customers: that is, deploying apps as VMs might make for more straightforward deployment. But caveats abound, not least that an application relying heavily on I/O-intensive activity, or on lots of interaction with devices, will require an appropriately designed target environment that minimises physical bottlenecks. We know from your feedback, however, that this is not always the case.
In some circumstances it may be possible to rewrite application logic (or indeed design it from the outset) to take account of such constraints, most of which seem to lie in that no-man’s-land between the virtual and physical worlds. In many cases, however, this will not be possible, or indeed advisable.
As a final point, virtualisation can also help post-deployment. The fact that a snapshot can be taken of a VM, possibly at the point at which something goes wrong, could be of great help to developers – though presumably you’d have to be pretty snappy with some applications. As with other areas of virtualisation potential however, it remains to be seen just how achievable this is.
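As a rough sketch of how that might work in practice, the snippet below once more assumes VirtualBox: when a fault is detected, by whatever monitoring or alerting is in place, a snapshot of the running VM is taken and tagged so developers can come back to it later. The VM name and the trigger are placeholders.

    # Sketch: grab a snapshot of a VM the moment a fault is detected, so
    # developers can examine the state later. Assumes VirtualBox's VBoxManage
    # CLI is on the PATH; the VM name and fault reason are placeholders.
    import subprocess
    from datetime import datetime, timezone

    VM_NAME = "prod-workload"  # hypothetical VM running the deployed application

    def snapshot_on_fault(vm, reason):
        """Take a named snapshot of the VM, tagged with a timestamp and reason."""
        name = "fault-" + datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        subprocess.run(
            ["VBoxManage", "snapshot", vm, "take", name,
             "--description", reason],
            check=True,
        )
        return name

    if __name__ == "__main__":
        # In practice this would be triggered by monitoring, not run by hand
        print(snapshot_on_fault(VM_NAME, "unhandled exception reported by app"))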
Best practice will no doubt emerge, but in the meantime, it looks like developers will be left to suck it and see.