One of the most worrying things to come out of recent research, both ours and other people’s, is just how wildly attitudes to AI – and especially generative AI – can differ between levels or grades in the same organisation. The danger is that, as a result, some execs may be betting their company’s future on technology they don’t properly understand.
For example, Salesforce and IBM both recently published the results of research they’d commissioned, and the two studies have one thing in common: owners and CEOs are far more optimistic about the likes of ChatGPT than the employees who will actually be using those tools.
In the Salesforce case, 77% of UK owner/C-suite and 67% of director-level respondents said they were confident that they knew how to use generative AI safely and effectively, whereas only 29% said the same at supervisor or junior manager level.
Similarly, in IBM’s survey 69% of CEO respondents foresaw broad benefits in generative AI, but only 30% of non-CEO senior execs said their organisation was ready to adopt the technology responsibly, and just 29% said they had the necessary in-house expertise.
CIOs’ growing frustration with the AI bandwagon
Meanwhile, our own research looked at how CIOs and other IT leaders view the AI-related claims now made by so many vendors, and it revealed growing suspicion and frustration with the AI bandwagon. Most reported that the majority of vendor pitches now claim some AI capability, but 88% said these claims frequently turn out to be exaggerated. Only 20% of the CIOs said they often come across genuinely AI-enabled solutions.
Given all that, what should we make of the many anecdotal reports of CEOs who believe AI will be an “industry disruptor” that upsets business models, improves productivity and, yes, enables them to make staff redundant? And what assumptions might we make, even tentatively, based on the various survey results?
An AI-washing epidemic
The first thing is that there’s too much AI-washing going on, with vendor claims that are, as one of my colleagues put it, unconvincing at best and deliberately misleading at worst. The problem is that while most IT leaders are well aware of this, their non-expert C-suite colleagues may not be.
So my second assumption is that some CEOs and non-execs are either too accepting of the hyperbolic claims made for generative AI, or don’t yet realise that it is still just a tool, one that needs trained, expert users to deliver value. Either way, there’s a big risk that hype and FOMO (fear of missing out, an all-too-common affliction in technology) could lead them into poor investment decisions that go up in flames.
They may also not recognise that AI is a very broad field. Alongside generative AI there’s process and workflow automation, which might indeed enable them to reduce headcount, and newer fields such as causal AI, which can help explain how and why a decision was reached. AI technologies are also vital in areas such as computer vision, speech recognition and self-driving cars, to name just a few.
Most AI still needs a human in the loop
And pretty much any business use of AI still needs human oversight from someone with domain expertise. How else are you going to avoid sending out a bin-fire of AI-generated hallucinations or confabulations*, or worse, confidential or copyright material? Even workflow automation AIs need a human in the loop to make sure they don’t over-react to alerts or activate by mistake.
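To make that concrete, here’s a minimal sketch in Python of what such a human-in-the-loop gate might look like in a content pipeline. It’s purely illustrative: generate_draft, human_review and publish are hypothetical stand-ins rather than any real vendor’s API, and the point is simply that nothing AI-generated goes out without explicit sign-off from a named reviewer.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    text: str
    approved: bool = False
    reviewer: Optional[str] = None

def generate_draft(prompt: str) -> Draft:
    # Hypothetical placeholder for whatever generative AI service is in use.
    return Draft(text=f"AI-generated answer to: {prompt}")

def human_review(draft: Draft, reviewer: str, ok: bool) -> Draft:
    # A domain expert checks for hallucinations, confidential data
    # and copyright material before approving.
    draft.approved = ok
    draft.reviewer = reviewer
    return draft

def publish(draft: Draft) -> None:
    # Hard gate: refuse to send out anything without human sign-off.
    if not draft.approved:
        raise RuntimeError("Refusing to publish unreviewed AI output")
    print(f"Published (approved by {draft.reviewer}): {draft.text}")

draft = generate_draft("Summarise our Q3 results")
draft = human_review(draft, reviewer="jane.doe", ok=True)
publish(draft)
```

The same shape applies to workflow automation: replace the publish step with “execute the action”, and the hard gate is what stops the system acting on a misread alert.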
So there are two big tasks facing technologists today when it comes to AI. One is constantly remembering that there’s much more to AI than just the chatbots and image generators.
The other is cutting through the AI-washing to understand the real value of generative AI to the business. That means recognising its considerable limitations as well as its huge opportunities and how to exploit them. Unfortunately, it also means finding ways to communicate all that to your FOMO-filled decision makers.
*Confabulation, a term from psychology referring to the unintentional creation of false or distorted memories and facts, is gaining popularity in AI circles. Unlike ‘hallucination’, it does not imply false sensory perceptions or the creation of something unrelated to the AI’s training.
Bryan Betts is sadly no longer with us. He worked as an analyst at Freeform Dynamics between July 2016 and February 2024, when he tragically passed away following an unexpected illness. We are proud to continue to host Bryan’s work as a tribute to his great contribution to the IT industry.