Jon Collins, originally published on CIO Online

Every now and then, a certain model seems to have direct relevance to a whole series of challenges. It’s funny how it happens – at one stage (for me anyway) it was Eli Goldratt’s Critical Chain theory of project management (and the whole world started to look like projects), at another it was Rich Dad Poor Dad by Robert Kiyosaki (and the whole world started to look like a series of investments), and elsewhere it has been Charles Handy’s Sigmoid Curve (which makes the whole world look like a series of false starts). Right now, as companies tussle with balancing capital expenditure (capex) against operational expenditure (opex), it’s the bathtub curve.

The bathtub curve? Yes, the bathtub curve. It does seem worth an explanation, since I’ve had to explain it every time I’ve mentioned it. “You know,” I say, “When you first implement something there are lots of faults – like snagging in a new house – and over time things settle down… but then after a while the number of faults starts to rise again?” And of course, everyone agrees, because it’s a very familiar picture for anyone working in IT.
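For the mathematically inclined, that shape – lots of faults early on, a quiet middle period, then faults rising again – is commonly modelled as the sum of a falling “infant mortality” hazard, a constant background failure rate, and a rising “wear-out” hazard. A minimal Python sketch, using Weibull hazard functions with purely illustrative parameters (not fitted to any real asset data):

```python
def weibull_hazard(t, shape, scale):
    """Weibull hazard rate: shape < 1 falls over time, shape > 1 rises."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def bathtub_hazard(t):
    """Failure rate at time t as the sum of three components.

    Parameters here are made up for illustration only.
    """
    infant_mortality = weibull_hazard(t, shape=0.5, scale=2.0)   # early faults, decaying
    random_failures = 0.02                                       # constant background rate
    wear_out = weibull_hazard(t, shape=5.0, scale=10.0)          # rising with age
    return infant_mortality + random_failures + wear_out

# High at the start, low in the middle, high again at the end:
# bathtub_hazard(0.1) and bathtub_hazard(12) both exceed bathtub_hazard(4)
```

Plotting `bathtub_hazard` over time produces the familiar bathtub profile; the interesting management question is how far along the flat bottom a given asset currently sits.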

I was first introduced to the bathtub curve when an old colleague of mine was working on the railways (or at least for the railway companies as a consultant) to try to help them save money. His findings weren’t pleasant reading. As UK railway companies (and before that British Rail) had tried to sweat the assets – rolling stock, track and the like – to the maximum, things had inevitably ended up right up the wrong end of the bathtub. Maintenance costs were huge, downtimes and delays frequent and so on, as indeed they still are. Trouble was, investment was only ever in one area – “We’ve funded another thousand miles of new track,” someone would say. But that would be without fixing the rolling stock, which would wear out the track more quickly, and so the cycle would continue.

There’s been plenty more written about the bathtub curve, so I won’t dwell – but I did want to relate the challenge with maintenance in general, to IT in particular: that is, when to make investments and upgrades? There will never be an absolute answer to this: while some recent data suggests that a three-year rolling cycle may be appropriate for desktop PCs for example, for a mainframe that may be the amount of time required between reboots. Ultimately, the bathtub curve gives us, for any ‘closed system’ (in IT terms, set of infrastructure assets that need to operate in harmony) a relationship between capex and opex. Capital expenditure will be required on a discrete basis, to replace older parts, upgrade software, and so on, whereas opex covers the costs of more general servicing, support calls, diagnostics and so on.

Why is this important right now? We understand from our conversations with all but a few IT decision makers that “attention is turning to opex,” which roughly translates as “we haven’t got any cash for capital investments, but we do need to keep the engine running.” Which is fine, of course – but only if an organisation’s IT is still running along the bottom of the bathtub, as measured by SLA criteria and the costs of service delivery. Perhaps the most important question to be able to answer is, “How long have we got before things start getting expensive?” A simplistic question perhaps, but inevitable if things are not treated before time runs out for them. As another adage goes, “A crisis is a problem with no time left to solve it.”


