
AI is smart enough – so are you ready?

By Taras Semeniuk

Amid the constant flow of AI hype, where ‘every month can feel like 6 months’, it’s easy to miss a big development: for most practical applications, current AI models are already smart enough.

To put that into perspective: the current generation of LLMs is already smart enough to power AI solutions that competently meet users’ needs.

Consider a Starbucks barista: you’ll probably notice the difference in quality between a new starter and an experienced barista, but would your coffee taste better if the barista had a PhD?

Instead of needing ever-smarter models for most business processes, the real impact now comes from how we use them.

That’s according to Braden Ream, CEO of Voiceflow, who shared his thoughts with Kane Simms in a recent VUX World podcast episode.

Without tools, AI agents can’t work

According to Braden, from now on, the gains we make will be a result of how we instruct these models, the tools we give them access to, and how we structure their operations.

AI agents that can perceive their environment, make decisions, and take actions to achieve specific goals are reliant on tool use. Without tools, the agent is just a text interpreter and generator. With tools, it becomes like a worker that can interact with other software, APIs, or databases to complete a task, such as processing a credit card freeze or booking an appointment. This ability to act, rather than just respond, is where the transformative potential lies.
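To make that concrete, here’s a minimal sketch of the pattern in plain Python, not any particular vendor SDK: the tool names, the dispatch layer, and the `model_decision` structure are all hypothetical.

```python
# A minimal sketch of tool use for an AI agent. The tools, the dispatch
# logic, and the `model_decision` dict are illustrative assumptions,
# not a specific product's API.

from typing import Callable

def freeze_card(customer_id: str) -> str:
    """Hypothetical tool: freeze a customer's credit card."""
    # In a real system this would call a banking API.
    return f"Card for customer {customer_id} has been frozen."

def book_appointment(customer_id: str, slot: str) -> str:
    """Hypothetical tool: book an appointment slot."""
    return f"Appointment booked for {customer_id} at {slot}."

# The agent can only act through tools it has been explicitly given.
TOOLS: dict[str, Callable[..., str]] = {
    "freeze_card": freeze_card,
    "book_appointment": book_appointment,
}

def execute(model_decision: dict) -> str:
    """Run a tool the model has requested, or fall back to plain text."""
    name = model_decision.get("tool")
    if name in TOOLS:
        return TOOLS[name](**model_decision.get("arguments", {}))
    # Without a matching tool, the agent can only respond with text.
    return model_decision.get("text", "Sorry, I can't action that.")

# Example: the model (not shown here) decides to freeze a card.
print(execute({"tool": "freeze_card", "arguments": {"customer_id": "C-1042"}}))
```

The point of the pattern is the allow-list: the agent’s ability to act is bounded by the tools it has been handed, nothing more.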

Keeping LLMs under control

A big hurdle for broader AI adoption, especially in risk-averse environments, is the “black box” nature of LLMs. Even their creators don’t fully understand why they produce specific outputs. This makes observability challenging because we can’t know exactly why a system made a particular decision.

The focus is no longer on understanding the internal workings. Instead, we must strive to create strict barriers to control what the agent can and can’t do. This involves:

  • Robust guardrails: Implementing strong instructional prompts and “hybrid” systems that blend deterministic logic (rule-based processes) with LLM flexibility. For instance, an LLM might be used for classifying a user’s intent or selecting from a human-approved list of responses, rather than generating entirely novel, unchecked answers (sketched in code after this list). Multi-step evaluation pipelines, where an LLM’s output is checked against source data for accuracy, are also crucial.
  • Real-time monitoring: Shifting from micro-level observability to macro-level monitoring of the agent’s performance through analytics, transcript reviews, and system evaluations. This allows for rapid detection and correction if an agent goes “off the rails”.
  • Rigorous testing: Employing techniques like “bot-to-bot testing,” where AI agents are created specifically to test the production agent by simulating a multitude of conversational scenarios, including edge cases (also sketched below).
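Here’s a minimal sketch of the hybrid guardrail pattern, assuming the LLM is only asked to classify intent and the reply is then drawn from a human-approved list. The `classify_intent` stub stands in for the model call; all names are hypothetical.

```python
# A minimal sketch of a "hybrid" guardrail: the LLM classifies intent,
# and the agent only ever replies with human-approved text.

APPROVED_RESPONSES = {
    "card_freeze": "I've started the card freeze process. You'll get a confirmation shortly.",
    "opening_hours": "We're open 9am-5pm, Monday to Friday.",
    "fallback": "I'm not sure about that. Let me connect you with a human agent.",
}

def classify_intent(user_message: str) -> str:
    """Stand-in for an LLM call that returns one label from a fixed set."""
    if "freeze" in user_message.lower():
        return "card_freeze"
    if "open" in user_message.lower():
        return "opening_hours"
    return "fallback"

def respond(user_message: str) -> str:
    intent = classify_intent(user_message)
    # Deterministic step: the agent can only say things a human has approved.
    return APPROVED_RESPONSES.get(intent, APPROVED_RESPONSES["fallback"])

print(respond("Please freeze my card, it was stolen."))
```

And a minimal sketch of bot-to-bot testing, assuming a list of scripted tester scenarios driving the agent under test. The `respond` stub stands in for the production agent; none of this reflects Voiceflow’s actual testing tooling.

```python
# A minimal sketch of "bot-to-bot testing": simulated user turns (which a
# tester LLM would normally generate) are played against the production
# agent and the exchanges are recorded for review.

def respond(user_message: str) -> str:
    """Stand-in for the production agent under test."""
    if "freeze" in user_message.lower():
        return "I've started the card freeze process."
    return "I'm not sure about that. Let me connect you with a human agent."

SCENARIOS = [
    ["Please freeze my card", "Actually, never mind"],   # change of mind
    ["whats ur opening hours??", "and on weekends?"],     # informal phrasing
    ["asdfgh"],                                           # nonsense edge case
]

def run_scenario(turns: list[str]) -> list[tuple[str, str]]:
    """Send each simulated user turn to the agent and record the exchange."""
    transcript = []
    for user_message in turns:
        transcript.append((user_message, respond(user_message)))
    return transcript

for scenario in SCENARIOS:
    for user_msg, agent_reply in run_scenario(scenario):
        print(f"USER:  {user_msg}\nAGENT: {agent_reply}\n")
```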

The metamorphosis of conversation design

The rise of LLMs has led some to declare “conversation design is dead”. However, the reality is more nuanced. While the turn-by-turn scripting of traditional chatbots may be fading, the need for designing and curating effective conversational experiences is greater than ever.

The role evolves from writing dialogue and flows to becoming an owner of the user experience or a custodian of the AI’s interaction quality.

To get there, conversation designers should:

  • Become an expert in prompt engineering: The core logic of the AI agent, its persona, and its operational boundaries are now often defined within the prompt. Effective prompting is becoming a critical skill (see the sketch after this list).
  • Gain business process understanding: To create effective prompts, a deep understanding of the business process the AI is meant to handle is essential. Many businesses may first need to define and even redesign their existing workflows before they can effectively automate them.
  • Focus on user experience: With LLMs handling much of the “messiness of language”, designers can focus on higher-level aspects: Is the agent helpful? Is it aligned with brand voice? Is it achieving the user’s goal efficiently and ethically?
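As a rough illustration of what that looks like in practice, here’s a minimal sketch of a system prompt carrying persona and operational boundaries. The brand, the `get_menu` tool, and the rules are hypothetical placeholders, not anything discussed in the episode.

```python
# A minimal sketch of persona and operational boundaries defined in the
# prompt itself, then assembled into the common system/user message format.
# All specifics below are hypothetical placeholders.

SYSTEM_PROMPT = """
You are the support assistant for Acme Coffee (hypothetical brand).

Persona:
- Friendly, concise, and on-brand; never use jargon.

Operational boundaries:
- You may only discuss orders, store hours, and loyalty points.
- Never quote prices from memory; always call the `get_menu` tool.
- If the customer asks for a refund over $50, hand off to a human.
- If you are unsure, say so and offer to connect the customer to a person.
""".strip()

def build_messages(user_message: str) -> list[dict]:
    """Assemble a chat-style message list for an LLM call."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]

print(build_messages("Can I get a refund for my latte?"))
```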

SMBs are leading the innovation

Interestingly, the adoption of generative AI isn’t being led by the giants of the enterprise world, but rather by more agile small to medium-sized businesses (SMBs) and mid-market companies.

For them, the potential rewards outweigh the risks. For an SMB, a highly effective AI agent could be a significant differentiator, directly winning business or materially improving customer experience. They have less to lose and more to gain from experimentation.

Large enterprises are experimenting but are more cautious. Their priority is often brand trust and stability. They don’t win business because their chatbot is revolutionary, but because they are perceived as safe and reliable.

Start here

For individuals and businesses looking to build with AI, the advice is clear: dive in and experiment. As Braden said, the models are now smart enough to build what you need.

It’s up to you to play to their strengths. The potential to create value – whether in customer support, sales, internal automation, or entirely new applications – is huge.

Those who learn to work with the current generation of models (rather than waiting for it all to be solved in the next release), who focus on robust guardrails and skilled human oversight, will be best positioned to thrive.

Thanks to Braden for his appearance on VUX World – be sure to watch the full interview.
