Last night I went to a Downtown (Orlando) UX Meetup about Conversational Design.

Matt Lavoie did a wonderful job hosting a panel of experts who work with conversational interfaces daily. We got to hear from Arun George, Voice UX Designer @ VoxGen and organizer of Orlando Voice UX; Sam Artioli, Chief Technology Officer at Abe.ai; and Rob Guilfoyle, CEO at Abe.ai.

One of the biggest takeaways for me was that conversational design (not just simple voice commands and phone trees, but real conversational user interface design) is still very new, and we are still exploring how to actually design and document non-linear conversation flows. That's something I had been discussing and trying to figure out with a client of mine recently. There was a whole lot of other interesting information shared as well, so here are my notes. I hope you enjoy! :)

Benefits of Conversational User Interfaces

Conversational user interfaces release us from buttons and form fields, which is great. But when users can say anything, we must be prepared for everything.

Voice interfaces can convey empathy through tone.

Challenges of Conversational User Interfaces

There is no menu. When talking to Alexa, Google Home, or Siri, you don't have a visual reference for everything the interface can do.

This is often best handled with product marketing: send the user regular emails highlighting new and interesting features. Or have the bot tell users exactly what they can say in a given context (partially directed dialogue), as in the sketch below.
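Here's a minimal sketch of what a partially directed prompt might look like in code. The contexts and phrasings are made up by me to illustrate the idea, not something shown at the meetup:

    # A partially directed prompt: instead of an open-ended "How can I help?",
    # the bot names the things the user can actually say in this context.
    # All context names and phrasings here are hypothetical.
    CONTEXT_PROMPTS = {
        "checking_account": "You can say things like 'What's my balance?', "
                            "'Show recent transactions', or 'Transfer money'.",
        "onboarding": "Try saying 'Link my bank account', or ask 'What can you do?'",
    }

    def prompt_for(context):
        """Tell the user exactly what they can say in the current context."""
        return CONTEXT_PROMPTS.get(context, "How can I help?")

    print(prompt_for("checking_account"))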

Understanding new forms of communication like new slang terms or emojis is an ongoing struggle. 😜

People don't want to take directions from a female voice. They prefer a "copilot" (yeah, I know... 🙄).

How we speak is not how we write

You can't use the same wording in both text and voice interfaces.

People use contractions more often when they talk than when they write.

In conversations we expect context and acknowledgement. We know when we've been asked the same thing more than once, and it gets annoying quickly.

The reason most people hate automated voice systems is that they are usually designed poorly and are not conversational: they have linear flows, while people speak in non-linear ways.

"Context is king"

Sometimes people get annoyed when bots talk too much. Maybe bots don't need to be chit-chatty.

Handling unexpected/invalid responses

"Rapid reprompt" is a way to gather small bits of missing information from the original prompt. For example, a voice interface might ask for your phone number with area code. If the speaker gives their phone number without area code, a poor way to handle it would be to reject the user input entirely and respond with "Sorry, I didn't get that." Instead, a "rapid repromt" for the missing area code could be used, as in "Thank you. And your area code?"

Another way to handle unexpected input is to make sure you hand the user off to a platform that can answer their question when the conversational interface can't, like a web page or a customer service rep. People will quit if they get stuck in an error loop for too long, and it doesn't take long.

Bot responses can be given a confidence rating (how confident is the bot that this answer will help the user?). If the confidence threshold isn't met, you can either pass the user off to a "Mechanical Turk", a person acting in place of the bot, or just hand them off to customer service.
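A minimal sketch of that confidence-threshold routing; the threshold value and the availability check are assumptions on my part:

    CONFIDENCE_THRESHOLD = 0.7  # assumed cutoff; a real system would tune this

    def human_agent_available():
        # Stub: a real system would check whether an agent is actually free.
        return True

    def route_response(answer, confidence):
        """Serve the bot's answer only when it's confident enough; otherwise escalate."""
        if confidence >= CONFIDENCE_THRESHOLD:
            return ("bot", answer)
        # Below threshold: escalate to a human quietly answering in place of
        # the bot (the "Mechanical Turk"), or to plain customer service.
        if human_agent_available():
            return ("mechanical_turk", answer)  # a person reviews before sending
        return ("customer_service", "Let me connect you with someone who can help.")

    print(route_response("Your balance is $42.", 0.55))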

What it's like to design a conversational UI/UX

Discovery

  • Understanding the brand voice
  • Recording and listening to real conversations with customers
  • Identifying all of the possible ways a person can say something

Sample Dialogue

  • A very simple Word doc that prototypes an ideal conversation around a given interaction
  • An exercise used to confirm language and tone with the client

Usability Testing

  • "Wizard of Oz" testing

Some vocabulary that was mentioned

  • States
  • Intents
  • Entities
  • Scenarios
  • Modes (for example "gather" mode)
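To make a couple of those terms concrete, here's a hypothetical parse of a single utterance. The structure is loosely inspired by tools like api.ai, but the field names are my own:

    # Hypothetical parse of "Send twenty dollars to Sam on Friday":
    parsed = {
        "intent": "transfer_money",        # what the user is trying to do
        "entities": {                      # the specifics pulled from the utterance
            "amount": {"value": 20, "currency": "USD"},
            "recipient": "Sam",
            "date": "friday",
        },
        "state": "awaiting_confirmation",  # where we are in the conversation
    }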

Some tools that were mentioned

  • api.ai
  • wit.ai (I think?)
  • K-means clustering (see the sketch after this list)
  • SSML (the "HTML of voice")
  • Lisnr
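K-means clustering is presumably useful here for grouping similar user utterances, for example to discover the distinct intents behind them. Here's a rough sketch of what that might look like with scikit-learn; the utterances and cluster count are made up:

    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    # Made-up utterances; in practice these come from real conversation logs.
    utterances = [
        "what's my balance",
        "how much money do I have",
        "show my balance",
        "send twenty dollars to sam",
        "transfer money to my savings",
        "pay sam 20 bucks",
    ]

    # Vectorize the text, then cluster similar phrasings together.
    vectors = TfidfVectorizer().fit_transform(utterances)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

    for utterance, label in zip(utterances, labels):
        print(label, utterance)  # same cluster -> likely the same intent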