Analyst Opinion
AI might not be the first thing you think of when you start listing the impacts of the Coronavirus lockdown, but the Covid-19 crisis is having effects far beyond the obvious ones.
The subject came up recently when we were asked about AI-enabled databases – the idea that, by integrating an AI server into an in-memory database, you can cut out much of the back-end data traffic and speed things up considerably.
Why the back end matters for AI/ML
We tend to think of inferencing – the operational part of machine learning – as a relatively ‘closed’ application that’s much less resource-intensive than training the AI in the first place. However, there’s also a class of AI applications where in effect you’re constantly monitoring and retraining the AI engine as it goes along.
Even with fast processors at the sharp end, this involves a lot of data exchange with the back end, where the developer will normally have a computation server communicating with a database. So integrating the AI server into the database, which reduces the number of transactions needed, offers useful benefits for those AI applications that need constant or frequent retraining.
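To make that contrast concrete, here is a minimal Python sketch of the two arrangements. Everything in it – the `db` connection, the `events` table, the model object and the `score_and_update_model` routine – is a hypothetical stand-in rather than any particular product's API; the point is simply how much data has to cross the wire in each case.

```python
# Hypothetical sketch only: 'db' is any DB-API style connection, 'model' is
# any incremental learner, and score_and_update_model is an imagined
# in-database routine. None of these name a real product's interface.

def external_retrain_cycle(db, model):
    # Conventional split: pull every fresh record over the network to the
    # compute server, score and retrain there, then write the results back.
    rows = db.execute(
        "SELECT id, features, outcome FROM events WHERE scored = 0"
    ).fetchall()
    ids = [r[0] for r in rows]
    features = [r[1] for r in rows]
    outcomes = [r[2] for r in rows]

    scores = model.predict(features)        # inference runs off-database
    model.partial_fit(features, outcomes)   # incremental retraining, also off-database

    # Second round trip: push the scores back into the database.
    db.executemany(
        "UPDATE events SET score = ?, scored = 1 WHERE id = ?",
        list(zip(scores, ids)),
    )

def integrated_retrain_cycle(db):
    # With the AI server integrated into the database, the same cycle is a
    # single call and the raw feature data never leaves the back end.
    db.execute("CALL score_and_update_model('events')")
```

The first version needs two network transactions per batch, plus all the feature data in transit; the second keeps both inference and retraining next to the data, which is where the claimed speed-up comes from.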
So where does Covid-19 come into this? Well, most machine learning systems are trained using large volumes of historical data, but we’re now seeing dramatic changes in behaviour as businesses and individuals adapt to new risks and rules. Buying habits, travel patterns, logistics, energy consumption, even things as unexpected as the proportion of flour production selling into the domestic market versus the commercial market – all these things and more have shifted.
As a result, AI systems that were once relatively stable have become significantly more volatile. What’s worse, as the recovery and reopening unfolds in a piecemeal and even haphazard manner, with different countries and industries unlocking at different rates and in different ways – and of course with no certainty over what the virus will do next – this volatility and variability is likely to last for quite a while.
Are your models still valid?
We expect this to translate into a growing need for, and expectation of, AI monitoring and retraining. Quite simply, the model that worked last week may well no longer be applicable this week.
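In practice, that means watching live model performance against the baseline it was validated at. As a rough illustration only – the logged errors, the baseline figure and the 1.5x tolerance below are invented for the example, not a standard – such a check might look something like this in Python:

```python
def model_needs_retraining(recent_errors, baseline_error, tolerance=1.5):
    """Flag a model for retraining when its average error on recent live
    data has drifted well beyond the error measured at validation time."""
    if not recent_errors:
        return False  # nothing observed yet, so nothing to judge
    recent_avg = sum(recent_errors) / len(recent_errors)
    return recent_avg > baseline_error * tolerance

# Example: a model validated at 4% error now averaging around 9% on live data.
if model_needs_retraining(recent_errors=[0.08, 0.10, 0.09], baseline_error=0.04):
    print("Drift detected - schedule retraining on more recent data")
```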
Does this mean we will see more database developers integrating AI capabilities? That’s a little less certain. Developers are used to choosing the database and AI engine that best suit the task at hand, and that habit is likely to persist for some time. However, we do expect to see architectural changes, with the AI and database elements brought closer together as the need for retraining becomes more apparent.
Of course, this doesn’t apply to every application of AI – as well as the set of machine learning applications that need constant monitoring and retraining, there’s also a bunch of applications that need it less.
However, if you are running machine learning systems, now might be a good time to check that you’re monitoring the latter variety just enough to ensure they’ve not slipped into the former category while you weren’t looking.
Originally published on Freeform Dynamics’ Computer Weekly Blog – Write Side Up
Bryan Betts is sadly no longer with us. He worked as an analyst at Freeform Dynamics between July 2016 and February 2024, when he tragically passed away following an unexpected illness. We are proud to continue to host Bryan’s work as a tribute to his great contribution to the IT industry.