The first thing to know about hyperautomation is that it is everywhere. Whether it’s speech recognition to route a call to a bank, or Facebook suggesting who you might like to tag in a photo, this AI-based automation is all around us.
So how is it different from traditional computer programming, you might ask? After all, software has always been used to automate tasks. And, as more and more work moved into the digital world, more and more of that work could be automated: whilst it’s hard to automate paying with a physical banknote, it’s relatively easy to automate paying using an online bank transfer.
But there were always some tasks that seemed simple for a human (such as recognising a face) but were hard to program into software. This is where AI really comes into its own: when the code is hard to write, or the required logic might need to adapt over time. And in the last decade there have been huge leaps forward, particularly in machine learning (ML) where algorithms are “trained” by huge sets of data. This advance has been spurred on by the huge volumes of data now collected by IT systems and the ready availability of increased computing power.
In many cases, AI is a useful complement to the human mind, rather than a replacement. One example I have seen is in fraud detection: combining AI algorithms with traditional rules created by people gave the best results (boosting fraud detection levels by 20-30%).
Humans are good at “getting into the mind of the fraudster” – imagining how fraudsters might act and creating rules to catch them. On the other hand, where AI shines is by seeing patterns in historic data that would be hard or impossible for a human to spot, but which can be used to detect and prevent fraud in the future.
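The hybrid approach described above can be sketched in a few lines of code. This is a deliberately minimal illustration, not a production fraud engine: the rule set, field names, and thresholds are all hypothetical, and a simple z-score on spending history stands in for a real trained ML model.

```python
from statistics import mean, stdev

# Hand-written rules: a human's model of how fraudsters might behave.
def rule_flags(txn):
    flags = []
    if txn["amount"] > 5000:
        flags.append("large_amount")
    if txn["country"] != txn["home_country"]:
        flags.append("foreign_country")
    if txn["hour"] < 5:
        flags.append("unusual_hour")
    return flags

# Stand-in for an ML model: score how far a transaction's amount
# deviates from the customer's historical spending (a z-score).
def anomaly_score(amount, history):
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0
    return abs(amount - mu) / sigma

# Hybrid decision: suspicious if any human rule fires OR the
# data-driven score says the transaction is a statistical outlier.
def is_suspicious(txn, history, z_threshold=3.0):
    return bool(rule_flags(txn)) or anomaly_score(txn["amount"], history) >= z_threshold
```

For example, against a history of small purchases (`[20, 35, 28, 40, 25]`), a £30 transaction in the customer's home country at 2 pm triggers neither the rules nor the anomaly score, while a £6,000 transaction is caught by the `large_amount` rule. The point of the design is that the two detectors are complementary: rules encode imagined fraud patterns, while the statistical score catches deviations nobody thought to write a rule for.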
So why do I say that hyperautomation is neither new nor old?
Consider the time when people used to order clothes from a glossy catalogue via mail-order. They would diligently complete a paper order form, add up the total value, write a cheque, put it all in the post and then, maybe 10-20 days later, their goods would arrive. Compared with the convenience and speed of today’s e-commerce capabilities, it seems almost ‘Jurassic’.
But in ten years’ time, the way we order goods online today may seem just as antiquated: having to remember usernames and passwords, find the best place to buy from, choose a delivery option, pick a delivery slot, and enter payment details by typing in a long number embossed on a physical plastic card.
Instead, an AI-based personal assistant will be able to step in. For mundane purchases we probably won’t need to be involved at all, whereas for larger and more considered purchases our AI-bot might present us with several options.
This scenario may sound far-fetched, but the actual barriers to achieving this type of automation are less about technology, and more about the issues of responsibility and trust. An AI-bot can’t be fined or sent to prison. So ultimately, who is legally responsible if things go wrong? If an AI-based financial assistant decides to invest a large amount of money in stock that then crashes, who is to blame? Is it you for trusting the technology or the company providing the bot? Or the regulators for not policing the transaction?
These types of legal and ethical issues will need to be addressed and, indeed, regulations are already emerging to govern how AI can be used ethically. For example, the EU’s proposed Artificial Intelligence Act suggests classifying AI-based systems depending on the level of risk they pose, with an “unacceptable risk” category which will be banned (except for ethically conducted research).
Consumer attitudes will also be key to the adoption of hyperautomation. I was recently involved in some research that examined people’s willingness to use checkout-free stores (such as Amazon Go) where customers simply walk into a store, pick up what they need, and leave, with their account automatically debited with the correct amount.
We found that people were much more willing to move to a fully cashless society than to use checkout-free stores. However, we also found that social norms and behaviours don’t necessarily evolve as fast as technology. Digital payments have had 40 years to enter the public psyche but, until very recently, walking out of a store without paying was literally a crime.
What’s needed are “trust stepping stones” that help people become comfortable with these new technologies. For example, something as simple as a green light illuminating when you leave the store, to signal that customers have paid, might help consumers become comfortable with these innovative new ways to shop.
In addition, we found that once something is seen as “convenient enough”, making it even more convenient is not sufficient to persuade people to adopt a new technology or product. This is, of course, particularly true if an investment of time or money is required. Providing other benefits, such as rewards, loyalty points, faster service or cost savings, will often be needed if providers wish to stimulate adoption of new solutions, even ones that offer a convenience advantage.
There is no doubt that in the next decade we will see hyperautomation applied to many more aspects of business and our daily lives. What we have seen so far is just the beginning and what comes next is going to be hugely exciting. The rate of change will not be limited by technological advances, but by the need for effective legislation and for people to become comfortable with its adoption. However, what is certain is that the world of commerce will be unimaginably different to what we know today.
By David Daly, Worldline Discovery Hub Editor-in-Chief.