
Can we replace workers with machines?
This story was first published in the April 2025 edition of People Matters Perspectives.
Big Tech companies are busy spinning tales of a brave new world where AI agents can take over entire job functions. But before you replace your human workers with soulless algorithms, it’s worth understanding the stakes involved in overhauling your teams with machines.
The truth is, no matter how glowing the marketing campaigns of AI firms are – or how many zeroes are on the AI budget, for that matter – these systems are still far from matching the nuanced and irreplaceable capabilities of human workers.
Are companies devaluing human intelligence?
The CEO of Klarna, the Swedish fintech firm, brags about the company’s OpenAI-powered chatbot purportedly doing the work of 700 customer service agents. But while the company is cutting jobs and freezing hiring in favour of AI, it misses an important point: AI agents are still devoid of the critical human touch and the sharpness of judgment required for nuanced customer service.
Another example is Air Canada’s AI-powered chatbot, which gave a customer incorrect information about the airline’s refund policy. A ruling confirmed Air Canada was liable for the misinformation provided by its AI. It should have been painfully obvious to the airline’s leadership: AI is not foolproof, and when it fails, it can cost you more than just reputational damage.
Then there’s the case of Duolingo, which recently slashed 10% of its contractor workforce after shifting to AI for content translation. All this despite the fact that human translators bring something irreplaceable to the table – contextual understanding, cultural sensitivity, and the ability to decode the subtle meanings behind phrases.
In high-stakes environments, some AI models have a dreadful track record when it comes to decision-making.
IBM Watson for Oncology is a case in point. This medical AI, deployed at the University of Texas M.D. Anderson Cancer Center, failed spectacularly by suggesting incorrect cancer treatments that could have harmed patients. Despite being trained on vast amounts of data, Watson was unable to deliver the kind of informed, contextual recommendations needed for life-and-death medical decisions. The project, which cost US$62 million, was eventually scrapped, but not before it demonstrated the dangers of trusting AI with matters of life and death.
Similarly, Tesla’s autonomous driving system, while touted as the future of transportation, has demonstrated dangerous vulnerabilities. Researchers proved that small stickers on the road could trick the system into making a mistake as basic as driving into the wrong lane. These kinds of adversarial attacks expose a fundamental flaw in AI: it can be manipulated in simple ways that would not fool an experienced human driver.
Today’s AI agents might not be as smart as we think
A recent Carnegie Mellon University study tested AI agents like Anthropic’s Claude 3.5 Sonnet and OpenAI’s GPT-based systems on workplace tasks. The results were startling: the top-performing agent completed fewer than 25% of the tasks it was set, while others managed barely 10%. Tasks like following multi-step instructions, understanding directions from virtual colleagues, and, most importantly, handling social nuances all proved to be beyond these “cutting-edge” systems.
Indeed, AI excels at tasks that involve speed, accuracy, scalability, and knowledge retention, but when it comes to emotional intelligence, creativity, complex judgement, and context, it still falls short.
A piece of software might churn through thousands of data points, but it cannot connect with a customer, read the room, or make a tough call. In some cases, it can even be fooled.
Big Tech is all too eager to hype up the idea of hybrid human-AI teams, where humans are relegated to being “agent bosses” overseeing AI systems.
Microsoft’s Work Trend Index Report for 2025 painted an image of a future where “on-demand intelligence” and hybrid human-AI teams scale with agility and generate value faster.
The reality is that most businesses simply aren’t ready for a full-scale AI takeover. Today’s AI agents often struggle with out-of-scope queries, fail to communicate well with other systems, and make serious mistakes when faced with ambiguous situations. When a customer has an issue that the AI wasn’t trained to handle, the result is either frustration or, even worse, incorrect information. In these scenarios, the human touch is irreplaceable.
This is where Microsoft’s notion of a “capacity gap” comes into play. The skills needed to manage these AI systems are not universal. Companies will need to invest heavily in training their human workforce to work alongside AI, and even then, there’s no guarantee that AI agents will ever truly match the intuition, creativity, and empathy that human employees bring to the table.
The world’s workforce isn’t asking for AI to replace them, but to augment and enhance their abilities. Human oversight remains critical when deploying AI systems in any context, particularly when safety, ethics, or life-and-death decisions are involved.
Before you rush to replace your team with AI agents, take a long, hard look at what makes your business successful. It’s not the machines that drive innovation – it’s the people behind them.