Over the last year or so, Freeform Dynamics has asked Reg readers whether the new flexibility in deploying workloads in virtualised server infrastructures makes it possible to manage application delivery effectively while minimising the cost of delivering services.
We have also considered the challenges of specifying the server hardware needed to deliver services as cost-effectively as possible.
It has been noted that while examples of good management process remain elusive, the administration of virtualised server infrastructures is becoming more amenable to rapid change as circumstances vary. This is fortunate, as organisations start to move beyond straightforward server consolidation projects targeted at saving capital expenditure, and perhaps at lowering the associated electricity bills, and begin to search for added value from their existing infrastructure investments.
The challenge is thus shifting to the factors organisations need to consider when looking to dynamically allocate server resources in response to fluctuating business requirements. What criteria should drive workload selection and service allocation in heterogeneous server estates?
A number of metrics spring to mind, and the main factors are reasonably clear: which physical resources are up, available and securely patched/protected? What new services need to be brought online? What capacity do these systems have to host additional services, and, as a corollary, which services are already running on which platform?
We then move to the more pointed question of whether any services could be moved from higher-performance platforms to lower-performance systems. Do the business importance of the service and the SLAs in place for it allow for change? And what are the service quality requirements of the new workloads?
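To make this concrete, the kind of filtering involved can be sketched in a few lines of Python. This is purely illustrative: the field names, tiers and thresholds below are invented for the example rather than taken from any particular management tool.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    is_up: bool            # physical resource up and available?
    patched: bool          # securely patched/protected?
    cpu_free: float        # spare CPU capacity, as a fraction 0.0-1.0
    mem_free_gb: float     # spare memory
    tier: str              # performance tier, e.g. "gold"/"silver"/"bronze"

@dataclass
class Workload:
    name: str
    cpu_need: float
    mem_need_gb: float
    min_tier: str          # the lowest tier its SLA permits

TIER_RANK = {"bronze": 0, "silver": 1, "gold": 2}

def candidate_hosts(workload: Workload, estate: list[Host]) -> list[Host]:
    """Hosts that are up, patched, have spare capacity and satisfy the SLA."""
    return [
        h for h in estate
        if h.is_up and h.patched
        and h.cpu_free >= workload.cpu_need
        and h.mem_free_gb >= workload.mem_need_gb
        and TIER_RANK[h.tier] >= TIER_RANK[workload.min_tier]
    ]
```

In practice the inventory data would come from the asset management and discovery tooling discussed below, but the decision logic itself is essentially this simple filter.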
Finally there is the usual hot potato: what political impacts and user challenges arise if running services are moved to lower-performance tiers or shut down altogether? To get a grip on some of these issues, we appear to be witnessing growing interest in some form of service/resource consumption chargeback modelling, feeding into the usual IT/business budget-setting and blame-game discussions.
Of course, while these should be the important criteria, there is nothing to stop users bringing a range of other factors to bear, especially when they shout loudly at the board meetings.
Now getting the base information for any of these is not too difficult, at least if the company has invested in the appropriate management and monitoring tools, especially asset management and inventory discovery. Granted, this is a very big “if”, as we know that asset management tools have yet to be widely deployed and used as part of routine IT systems management. But having the time to put the information together and make sense of user service requests is another matter entirely.
Clearly there is a need to develop some policies around service provisioning and service performance/platform migration. We also know from regular feedback from Reg readers that getting senior management buy-in to any form of IT service policy can be problematic.
Getting agreement to change service quality and performance metrics on the fly, without in-depth consultation with end users, is likely to be a serious political challenge. Getting policies agreed in advance, by contrast, enables IT to respond to requests from this same group of people as rapidly as they expect. Obviously, such agreements require the involvement of the most senior line management, both in setting the policies and in ensuring that their staff understand them.
An alternative, where IT resources are limited, is to move only applications and services that are known to have no active users, or that are accessed only on particular occasions, preferably ones known well in advance. Failing that, the final option is to look for an IT solution that can vary service levels without users perceiving the change, i.e. change things incrementally and hope this frees up the resources required for the new service.
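Again purely as a sketch, the “is anyone actually using it?” test might look something like this. The session count and access windows are assumed to come from whatever monitoring is in place; the function and parameter names are hypothetical.

```python
from datetime import datetime, time
from typing import Optional

def safe_to_move(active_sessions: int,
                 access_windows: list[tuple[time, time]],
                 now: Optional[datetime] = None) -> bool:
    """A service is a migration candidate only if nobody is logged on and
    we are outside all of its known access windows (same-day windows only,
    for simplicity)."""
    now = now or datetime.now()
    if active_sessions > 0:
        return False
    current = now.time()
    return not any(start <= current <= end for start, end in access_windows)

# e.g. a reporting app used only on weekday mornings:
# safe_to_move(active_sessions=0, access_windows=[(time(8, 0), time(12, 0))])
```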
In all of these approaches, some form of chargeback service pricing could prove useful. Having resource utilisation and quality levels reflected in the amount of money users pay, or in budget reconciliation processes and reporting, could turn out to be a not-so-secret weapon in the IT blame games and internal political wrangling.
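A minimal chargeback model needs little more than metered usage and a rate card. The rates and tier multipliers below are made up purely for illustration; real figures would come out of the budget-setting negotiations themselves.

```python
# Illustrative rate card -- real unit rates would be agreed with finance.
RATES = {"cpu_hours": 0.05, "gb_ram_hours": 0.01, "gb_storage_months": 0.10}

# Service quality is reflected in price: gold-tier hosting costs more.
TIER_MULTIPLIER = {"gold": 1.5, "silver": 1.0, "bronze": 0.7}

def monthly_charge(usage: dict[str, float], tier: str) -> float:
    """Charge = sum of (metered usage x unit rate), scaled by service tier."""
    base = sum(usage.get(metric, 0.0) * rate for metric, rate in RATES.items())
    return round(base * TIER_MULTIPLIER[tier], 2)

# e.g. a departmental app on silver-tier infrastructure:
# monthly_charge({"cpu_hours": 720, "gb_ram_hours": 2880,
#                 "gb_storage_months": 50}, "silver")
```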
It should be noted that there is now a wide range of systems tools available to help manage many of these challenges in virtualised environments. These include automation technology to move workloads around without the need for scarce, expensive IT labour to be brought to bear. But without effective “policy” being established, IT staff will inevitably end up having to change things on the fly and that, as I know to my cost, is fraught with risk.
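The point about policy can also be made in code: once the rules are written down as data, automation can enforce them rather than relying on ad hoc judgement calls. The thresholds and service names here are, again, invented for the example.

```python
# A migration policy expressed as data rather than tribal knowledge.
POLICY = {
    "min_cpu_headroom": 0.2,         # a target host needs at least 20% CPU free
    "allowed_hours": range(22, 24),  # only migrate between 22:00 and 23:59
    "protected_services": {"payroll", "order-entry"},  # never auto-move these
}

def migration_allowed(service: str, target_cpu_free: float, hour: int) -> bool:
    """Automation acts only when the agreed policy says it may."""
    return (
        service not in POLICY["protected_services"]
        and target_cpu_free >= POLICY["min_cpu_headroom"]
        and hour in POLICY["allowed_hours"]
    )
```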
Tony is an IT operations guru. As an ex-IT manager with an insatiable thirst for knowledge, his extensive vendor briefing agenda makes him one of the best-informed analysts in the industry, particularly on the diversity of solutions and approaches available to tackle key operational requirements. If you are a vendor talking about a new offering, be very careful about describing it to Tony as ‘unique’, because if it isn’t, he’ll probably know.