
7 reasons why you need a conversation designer

By Ben McCulloch

Do you know what a conversation designer does? If you do, then chances are you work in the Conversational AI industry.

The rest of the world doesn’t seem to get it yet. Conversation designers have to remind people of their value constantly.

That’s a shame because those who don’t know will often treat every automated conversation as a tech problem. In truth, it’s both a tech problem and a human problem because the system must communicate with humans in a way that feels natural to them, otherwise the experience will be terrible.

Fundamentally, the value of the conversational interface is that it should be easy to use. That’s the entire point. We know how to talk, so we just talk. It can’t be a system that expects humans to adapt to it. We already have those – they’re called programming languages, mice and keyboards. If you want an automated conversation to work for most people, it must communicate in a way that feels human, and yet not convince the user that it is human.

Although we can see the value, it can be a huge challenge for designers and product leads to communicate the value of conversation design to their stakeholders. Teams that should have a designer sometimes go without, often because they couldn’t convince the budget-holder to pay for a designer.

With this article we’re aiming to highlight the rationale for hiring conversation designers. A huge shoutout goes to Elaine Anzaldo: in her article ‘Becoming a senior conversation designer: how to tell when you’ve made it’, she wrote about the need for conversation designers to have a one-pager that communicates their value. That idea inspired this article.

So buckle up, conversation design teams! Here comes some extra ammo for your arguments. Use it however you want. This is by no means every reason why conversation design matters – just the reasons that seemed most apparent to us.

Here we go!

7 Reasons why you need a conversation designer

  1. Teams often treat Conversational AI as a tech problem
    Unfortunately the robustness of the system does NOT equate with the quality of the experience. Conversation designers focus on improving experiences, and they do it by modelling automated conversations on human conversations. For example, Jonathan Bloom spoke about how ‘there’s no such thing as errors in conversations’ because we continually reestablish shared understanding. It’s called grounding.
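To make grounding concrete, here’s a minimal sketch of a dialogue turn that repairs low confidence with a confirmation rather than a dead-end error message. The intent names, thresholds, and flow are illustrative assumptions, not taken from any specific framework:

```python
# Illustrative sketch: a grounding-style repair instead of a hard "error".
# Thresholds and phrasing are assumptions for demonstration only.
def respond(nlu_result):
    """Pick a reply: act when confident, ground (confirm) when unsure."""
    intent, confidence = nlu_result
    if confidence >= 0.8:
        return f"Sure, let's {intent}."
    if confidence >= 0.4:
        # Grounding move: re-establish shared understanding instead of failing.
        return f"Just to check - did you want to {intent}?"
    # Last resort: an open repair request, still keeping the conversation alive.
    return "Sorry, I didn't catch that. Could you rephrase?"

print(respond(("book a table", 0.9)))  # Sure, let's book a table.
print(respond(("book a table", 0.5)))  # Just to check - did you want to book a table?
print(respond(("book a table", 0.2)))  # Sorry, I didn't catch that. Could you rephrase?
```

The point is the middle branch: a human conversational partner checks understanding rather than declaring an error, and the design should mirror that.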
  2. Conversations are complex, but there are strategies at play
    In order to understand what’s going on, we shouldn’t only look at software development techniques. It’s estimated that humans have been talking for 300,000 years, and writing for 5000 years. Communication has evolved to be highly nuanced, but just because you’ve been communicating all your life doesn’t make you an expert. The good news is there are experts we can learn from, such as conversation analysts like Elizabeth Stokoe. She argues that conversations follow some similar patterns. We’re not talking about the hard certainty of patterns of 1s and 0s, though. We’re talking about frameworks for understanding conversations and picking them apart because humans don’t communicate the same way machines do. We need tech experts on teams to understand and improve what happens on the machine’s side, and we need user experience experts to understand and improve what happens on the human side. Those experts are conversation designers.
  3. Your conversation designers are insurance against wasted time and money
On one side, you have a machine, and on the other side, you have a human. While human communication is rooted in human language, the machine is faking it: a conversational interface puts a human mask on machine code. In simple terms, we need someone who knows how to design the best type of mask for the situation. We always want the machine’s communication to be as human as possible so that it feels natural for the user, but we don’t want to abuse the user’s trust. What a conversation designer does is model the interactions based on what they know about humans, and then seek to improve the conversations once the system has been made public and they’ve obtained data. Poor assumptions are expensive because the wrong things get built. Having someone who roots the design in reality protects you against the risk of wasting time and money on the wrong assumptions.
  4. You might be focused on the wrong use case, or too many of them
While it would take a lengthy book to discuss why Amazon Alexa and Google Assistant didn’t reach market expectations, two things are apparent to users of those systems: at first they appear as if they can do everything (yet they can’t), and Alexa in particular will try to push you towards Amazon services at the most unsuitable times. When designing conversations, the use case is your number one priority! It must solve a genuine user need (and not try to solve all of them, while forcing the business need of profitability on users). This is why conversation designers do up-front research on user needs and keep the whole team focused on the user’s goals throughout.
  5. You might be reading data wrong
There are some metrics the industry has taken to heart. An example is the ‘containment rate’: the proportion of conversations handled by the AI without human intervention. The clue to why this is risky is in the name – ‘containment rate’ could also read as ‘trapping rate’ or, worse, ‘imprisonment rate’. What you actually want to know is: how many people were helped by the AI? If they only spoke to the AI, and it gave them what they needed, great job! On the other hand, if those users were in fact locked in an unhelpful conversation, that’s obviously a problem. The containment rate doesn’t describe the success of the interaction. Conversation designers seek to understand what users are experiencing in automated conversations, and they don’t rely on metrics alone.
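A tiny worked example shows how containment alone can mislead. The log format and field names below are illustrative assumptions, not a real analytics API:

```python
# Illustrative sketch: containment rate vs. a success-oriented metric.
# "contained" = no human handoff; "goal_met" = the user actually got help.
conversations = [
    {"contained": True,  "goal_met": True},   # AI handled it, user got help
    {"contained": True,  "goal_met": False},  # user was "contained" but stuck
    {"contained": False, "goal_met": True},   # escalated to a human, resolved
    {"contained": True,  "goal_met": False},  # another unhelpful loop
]

total = len(conversations)
containment_rate = sum(c["contained"] for c in conversations) / total
helped_by_ai_rate = sum(c["contained"] and c["goal_met"] for c in conversations) / total

print(f"Containment rate: {containment_rate:.0%}")    # 75%
print(f"Helped-by-AI rate: {helped_by_ai_rate:.0%}")  # 25%
```

Here containment looks healthy at 75%, yet only a quarter of users were actually helped by the AI – exactly the gap a conversation designer goes looking for.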
  6. Quantitative data is the speedometer, qualitative data is the landscape
    You absolutely should be collecting data from conversations. Quantitative data gives you an understanding of the successes and failures at specific moments in conversations. Everything is generalised, though: you get an idea of what’s happening, but you don’t really know why. Plus, both the data and the observer can be biased. What you really need is someone who understands conversations monitoring them, so the numbers are complemented by qualitative insight. They don’t need to read every transcript – a regular snapshot is enough. That way, you’ll see the user’s struggles and understand them. Cobus Greyling recommends the whole team read transcripts together on a regular basis. Conversation designers aim to understand what the user is going through when things go wrong. From there, they can empathetically design solutions that make the experience better for everyone.
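The “regular snapshot” idea can be as simple as pulling a small random sample of transcripts for the team to read together. This sketch assumes transcripts are available as a plain list; the store, size, and field names are illustrative:

```python
# Illustrative sketch: pull a small random snapshot of transcripts for review.
import random

def sample_transcripts(transcripts, k=10, seed=None):
    """Return up to k transcripts chosen uniformly at random for qualitative review."""
    rng = random.Random(seed)  # seed only to make the snapshot reproducible
    return rng.sample(transcripts, min(k, len(transcripts)))

# Hypothetical week of transcripts, stood in for by placeholder strings.
week_of_transcripts = [f"transcript-{i}" for i in range(500)]
snapshot = sample_transcripts(week_of_transcripts, k=10, seed=42)
print(len(snapshot))  # 10
```

Random sampling matters here: cherry-picking transcripts (for example, only the ones flagged as failures) reintroduces the observer bias the snapshot is supposed to counter.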
  7. Is ChatGPT having the right conversations with your users?
    Let’s unpack this one. There are so many reasons why you need to be careful when using LLMs to generate responses. We’re not saying they’re bad. They’re utterly incredible. You just have to know what you’re doing, and conversation designers have a head start when it comes to human-computer conversations.

Here are a few reasons why you need to be careful:

  • Most importantly, how do you know LLMs are succeeding in conversations? How do you improve them? Conversation designers can unpick conversations and try to understand where they’re going wrong, so they can be improved.
  • LLMs are usually trained on documents rather than conversations, and we don’t write how we talk – a formal text reads very differently from an SMS. When using LLMs to generate conversational utterances, you need someone who can give them the right tone, and stop them from sounding like an overly confident but totally insecure social chameleon.
  • Do you know why LLMs shouldn’t be trained on private conversations? Because that would be a horrible mishandling of data privacy. PII should not be sent to an LLM – it’s just too risky. Users must trust their conversational partner, otherwise they might not want to talk. Conversation designers consider things from the user’s perspective, such as how they might feel about having their trust abused.
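On the PII point, one common safeguard is redacting obvious identifiers from a user turn before it is sent anywhere near an LLM. The regexes below are illustrative assumptions and far from exhaustive – real redaction needs a dedicated tool and a proper privacy review:

```python
# Illustrative sketch: redact obvious PII before a user turn reaches an LLM.
# These two patterns are toy examples, NOT a complete PII detector.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched PII spans with a bracketed label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Call me on +44 7700 900123 or mail jo@example.com"))
# Call me on [PHONE] or mail [EMAIL]
```

Even with redaction in place, the design question remains: would the user feel comfortable if they knew where their words were going? That is the perspective a conversation designer brings.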

Are we on the money here?

Well, are we on the money? That’s essentially the question this article set out to answer, and why we felt compelled to write it. While we know conversation designers can bring a great deal of benefit to a team, their first job is often to prove their value. Then they have to keep proving it every day, while doing their job.

What do you think? Would you add anything? Please share your thoughts however you want – blogs, social media discussions. The more we can help each other the better.
