Agentic AI - the next competitor for your job?
Agentic AI seems to be the latest up-and-coming buzzword. As the name implies, this is an application that has agency: it can not only respond to queries but also take action autonomously, and it comes equipped with the tools to do so as well as the information to direct its behaviour.
If that sounds alarming, it's not an unusual reaction. Gurdeep Pall, president of AI strategy at Qualtrics, says that plenty of businesses are not ready to put AI agents on the front line - they prefer to keep these applications in the back office, and often restrict them to the meta-process of creating knowledge.
Pall, together with Forrester senior analyst Rowan Curran, was speaking at a Wednesday webinar in a joint attempt to demystify agentic AI and the role it can potentially play in the workplace. What surfaced during the conversation, though, was a series of barriers in understanding and preparedness suggesting that agentic AI is still some way off from competing with humans.
Trust is the biggest issue with AI
"How are they going to trust this thing to be the face of the brand?" Pall asked rhetorically. He was referring specifically to the reactions of Qualtrics customers, whom he characterised as being "extremely enlightened" in terms of how they apply technology for user and business experience - but who still get cold feet when it comes to agentic AI.
The surface issue, he said, is that the AI agent is not transparent to many users: much like generative AI, such applications are perceived as something of a black box.
But going deeper, agentic AI - and in fact the entire direction in which AI is developing today - represents a major shift in how humans interact with software: from a deterministic, programmed world to a probabilistic, learning one.
"This is a pretty big mind shift because you get used to controlling every aspect of the software...we've spent, what, the last 23 years? Trying to build software that is robustly controlled. And now here we are in a probabilistic world where you are using incomplete information and on top of that, there's no transparency to how the models work."
In short, he said, it's not just about improving models and systems; it's about whether people's mindsets will change.
The reality of AI capabilities is not so exciting...yet
Open the black box, and it's not quite as close to Skynet or the Matrix as one might imagine. In fact, according to Curran, agentic AI is still very much at the beginning stage - what people are looking at today is only its as-yet-unrealised potential.
"Agents in enterprise today are being built around knowledge retrieval and information synthesis, but they're not yet reaching the point of complex autonomous decision making...The reality today is that we cannot build an enterprise level secure agentic AI system that has very broad-based knowledge activities like customer support and services," he said.
In other words, if an AI answers the phone, it is almost certainly still working more like an LLM: repeating content it has been trained on, without the capability to make decisions or take actions unless there is human input at every step.
There is also a gap between data strategy and data execution: an organisation with good data practices and effective processes for retrieving data has a leg up in making AI work for it. But most enterprises don't have that, Curran pointed out.
It comes back to the mindset change, not only at the individual level but at the organisational level, he said:
"At the end of the day, an agent is another software application, and we have to learn to [build and use it] the same way as with other applications, and that's the biggest barrier."