Augmenting users or replacing them?
A rule of thumb to know before deciding to speak about AI
My last two conversations on AI went like this:
1. "I'm scared that AI could eventually come and beat me up. Have you watched that film, Elysium?" (The reference was to the scene in which Matt Damon is stopped and searched by robots.)
2. "I rule out the possibility that any AI will ever be able to do my job."
I cupped my face in my hands and thought: "Blimey!" Both comments say a lot about how strongly everybody feels about AI right now. But they also show how large the ultracrepidarian front (people who are keen to give their opinion on a subject they know little to nothing about) has grown in this particular field.
Indeed, AI is an area fraught with complexity, and I believe the "I-know-exactly-how-this-will-pan-out" mindset just does not work well here.
Instead, there are some guiding principles that it is useful to be aware of. They can help us frame a conversation on AI in more realistic and scientific terms.
The rule of thumb I'll focus on here explains what lies behind the fundamental decision to replace an existing manual process (automation) versus combining AI and human agency to improve part, or the whole, of a process (augmentation) (I. Bojinov, hbr.org).
Generally speaking, what drives the choice is the cost of making errors in a particular scenario. If the risk associated with an inaccurate prediction by the AI is high (a medical diagnosis, for instance), then human supervision and decision-making remain essential and can't be replaced.
Conversely, if the cost is relatively low (say, recommending a product that is not spot-on for an online shopper), automation can be both safe and cost-effective for a company.
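The rule of thumb above can be sketched as a toy decision function. Everything here is illustrative: the 0-to-1 error-cost score, the threshold value, and the function name are my own assumptions, not part of any real framework.

```python
# Toy sketch of the error-cost rule of thumb described above.
# The scoring scale and threshold are illustrative assumptions only.

def recommend_approach(error_cost: float, threshold: float = 0.7) -> str:
    """Suggest 'automation' when a wrong AI prediction is cheap,
    'augmentation' (human-in-the-loop) when it is costly.

    error_cost: a 0..1 score for how damaging an inaccurate prediction is.
    """
    if error_cost >= threshold:
        return "augmentation"  # keep human supervision and decision-making
    return "automation"        # safe and cost-effective to automate fully

# The two examples from the text (scores made up for illustration):
print(recommend_approach(0.95))  # medical diagnosis
print(recommend_approach(0.10))  # product recommendation
```

The point of the sketch is only that the branch condition is about error cost, not about whether the task is technically automatable.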
At the current state of the technology (this could change in the future), the idea that AI can totally replace human agency is inaccurate. But so is the notion that AI can't, at least in theory, be applied to any sort of use case: the fundamental question should be whether it is safe to do so and what the implications are.
It bears keeping in mind that the replacement route has its raison d'être in the need to automate tasks that are unsustainable for the enterprise to pay for and for human employees to perform: who wouldn't like a little help from technology when it comes to sifting through hundreds of documents?
With that said, the fear that unscrupulous employers could take human replacement a step too far is not one to be dismissed. But that would be a matter for a parallel discussion on AI ethics and employment law, something perhaps less intrinsic to the technology than to human behaviour and goodwill.
To me, the soundest approach sets out to use AI to enhance human intelligence rather than to operate independently of it (see also How to balance human and machine intelligence).
What people need to be mindful of is that whenever processes get modernised or transformed (with or without AI), the nature of their jobs and the related skill sets might have to change.