Conversation practice with AI

How I designed a new feature to help missionaries build real-life language confidence.

Imagine learning a new language in just six weeks, then moving to a foreign country to live, teach, and communicate effectively. Sounds crazy, huh?

This is the challenge faced by Mormon missionaries at the Missionary Training Center (MTC). To help them in their language journey, they use Embark, a language-learning app.

As the lead UX designer on Embark, I worked with a team to create a new chatbot feature that simulates real-life conversations. We started by developing an MVP to prove its value, which led to stakeholder approval for a Beta launch. I led the design, testing, and iteration of the feature, which has received positive feedback from users and stakeholders.

Product

Embark App

Team

2 designers
1 project manager
2 developers

Role

Design lead

Background

Missionaries at the MTC have just six weeks to learn a language before moving abroad. The Embark team aims to provide the most effective tools for their preparation. Embark includes things like flashcards and quizzes, but these can only go so far.

Our project manager suggested using AI to simulate real-time conversations with a native speaker. This would let missionaries practice thinking quickly and responding in “real” situations.
It would make the experience more immersive by providing instant feedback, helping users improve their language skills faster without the need for in-person conversation partners.

This was unlike anything we had done before. We started by building a simple MVP version to test and gather feedback, while validating its value to stakeholders.

Project Goal

Launch MVP with stakeholder approval

Develop and improve an MVP AI chatbot to mimic real-life conversations, gather feedback to ensure it helps with language practice and confidence, and get approval for wider use.

Brainstorming

We began by brainstorming possible directions for the feature. Our initial ideas converged on two:

Direction 1

Character conversations

A series of interactions with an AI character guiding missionaries through tasks like setting up meetings and giving lessons. Each step builds on the previous one, creating a simple storyline.

Direction 2

Scenario practice

A series of missionary-specific scenarios that users could practice over and over.

After presenting these ideas to the team, it was clear we should focus on Direction 2 for the MVP. This option would be simpler to implement, meaning we could start testing sooner. It would also let missionaries practice as many times as they needed until they felt confident.

Research

I worked with language teachers at the MTC and my project manager to create a list of relevant scenarios. I then sent a survey to 50 missionaries to gather feedback on which scenarios would be the most helpful.

Takeaways

The missionaries expressed a preference for two types of scenarios:

Scenario type 1

Teaching

This includes things like sharing scriptures and setting up teaching appointments.

Scenario type 2

Daily life

This includes things like ordering food or asking for directions.

Design

Once we chose the scenario direction, I created wireframes to settle on the best layout and flow. To make the scenarios replayable, we used AI to generate new details and characters for each session, tailored to the missionary’s assigned mission area, which made the experience feel more realistic.

Initial wireframes used for brainstorming
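To give a sense of how this could work under the hood, here's a minimal sketch of how a session prompt might be assembled. The interface and field names are illustrative assumptions, not Embark's actual implementation:

```ts
// Hypothetical sketch: names and fields are invented for illustration.
interface ScenarioRequest {
  scenarioId: string;   // e.g. "ordering-food" or "sharing-a-scripture"
  language: string;     // target language, e.g. "Portuguese"
  missionArea: string;  // assigned mission area, e.g. "São Paulo, Brazil"
}

// Ask the model to invent a fresh character and fresh details each session,
// so the same scenario stays replayable.
function buildScenarioPrompt(req: ScenarioRequest): string {
  return [
    `You are role-playing a conversation partner in ${req.language}.`,
    `Scenario: ${req.scenarioId}.`,
    `Set the conversation in ${req.missionArea}.`,
    `Invent a new character and situational details not used in previous sessions.`,
    `Stay in character and reply only in ${req.language}.`,
  ].join('\n');
}
```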

We initially considered using AI to generate images of each conversation partner, but that ended up being too complex for the MVP. So, we dropped that feature. After gathering feedback from the team, I finalized the design and created a prototype. I tested the prototype to validate the basic flow.

Prototype

I then handed off the design to our devs who created our MVP.

Testing

I tested our MVP with 12 missionaries. While they were excited to use it, we identified a key challenge:

Results

It was overwhelming

The chatbot’s responses were too long and complex for new missionaries, especially those unfamiliar with the language. Out of 12 testers, 3 couldn’t finish the conversation because they felt overwhelmed and quit early. It was clear we needed to fix this.

Improvements

Based on the testing, I worked on a few key improvements:

Simplifying responses

I collaborated with our machine-learning engineer to simplify AI responses. We added a proficiency feature to adjust the chatbot’s difficulty to the user’s language level.

Updated beginner response (left) vs. old default response (right)
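Conceptually, the proficiency feature adds a difficulty instruction to the scenario prompt. Here's a minimal sketch, with invented level descriptions standing in for the actual tuning we did with our machine-learning engineer:

```ts
type Proficiency = 'beginner' | 'intermediate' | 'advanced';

// Invented style instructions per level, for illustration only.
const styleByLevel: Record<Proficiency, string> = {
  beginner: 'Use short, simple sentences and common vocabulary. One idea per reply.',
  intermediate: 'Use everyday vocabulary and medium-length sentences; introduce a few new words.',
  advanced: 'Speak naturally, as a native speaker would, with idioms and longer replies.',
};

// Append a difficulty instruction to the base scenario prompt.
function withProficiency(basePrompt: string, level: Proficiency): string {
  return `${basePrompt}\n${styleByLevel[level]}`;
}
```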

Help features

To help less-experienced missionaries, I created the following help features. Our product manager was concerned that they could make conversations too easy and slow language progress. To address this, we limited how many times each feature could be used based on the user’s proficiency level (a sketch of this limit logic follows the feature descriptions below).

Tap for meaning

Missionaries could tap any word the chatbot used to see its meaning and save it to their study list for additional practice.

Tap a word to pop up a sheet with the word's meaning.
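Here's a rough sketch of this flow; lookupMeaning and the study list are hypothetical stand-ins for whatever dictionary service and storage the app actually uses:

```ts
interface WordEntry {
  word: string;
  meaning: string;
}

// Hypothetical dictionary lookup; the real app would call a translation
// or dictionary service here.
async function lookupMeaning(word: string, language: string): Promise<WordEntry> {
  return { word, meaning: `definition of "${word}" in ${language}` };
}

// Words the learner has saved for later study.
const studyList: WordEntry[] = [];

// Called when a learner taps a word in the chatbot's reply.
async function onWordTap(word: string, language: string): Promise<WordEntry> {
  const entry = await lookupMeaning(word, language);
  if (!studyList.some((e) => e.word === entry.word)) {
    studyList.push(entry); // save to the study list for additional practice
  }
  return entry; // the UI shows this in a pop-up sheet
}
```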

Conversation Assistant

To help missionaries who don't know what to say or how to say it, we added an AI assistant.

Assistant modal (left), Conversation hints (center), and How to say it (right)
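And here's a minimal sketch of the usage cap mentioned earlier. The quota numbers are invented for illustration; the principle is simply less help as proficiency grows:

```ts
type Proficiency = 'beginner' | 'intermediate' | 'advanced';
type HelpFeature = 'tapForMeaning' | 'conversationAssistant';

// Invented quotas: beginners get the most help, advanced learners the least.
const quotas: Record<Proficiency, Record<HelpFeature, number>> = {
  beginner: { tapForMeaning: 15, conversationAssistant: 5 },
  intermediate: { tapForMeaning: 8, conversationAssistant: 3 },
  advanced: { tapForMeaning: 3, conversationAssistant: 1 },
};

// Tracks per-conversation help use for one learner.
class HelpLimiter {
  private used: Record<HelpFeature, number> = {
    tapForMeaning: 0,
    conversationAssistant: 0,
  };

  constructor(private level: Proficiency) {}

  // Record one use if quota remains; otherwise refuse.
  tryUse(feature: HelpFeature): boolean {
    if (this.used[feature] >= quotas[this.level][feature]) return false;
    this.used[feature] += 1;
    return true;
  }
}
```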

Testing round 2

We tested the new help features and proficiency levels with 12 more missionaries.

Results

All missionaries completed the conversation

The help features and proficiency adjustments made the tool easier to use and more helpful for missionaries. All 12 participants completed the conversations, up from 9 of 12 in the first round (a 33% increase), and the feedback was very positive.

User feedback:

Missionary learning Portuguese

“I really liked being able to practice a conversation. It understood everything I said, and I understood most of what it said. There were a few words I didn’t know, but I could tap on them to see the meaning.”

Missionary learning Spanish

“This feature is great. I like that it doesn’t just say something is wrong, but explains why, so I know how to fix it.”

Launch and next steps

After testing, we handed the project off to a research team of language teachers to validate its educational value. Their study showed positive results, and we received approval to roll out the feature in beta, starting with a few languages. The stakeholders and missionaries who have used the beta are excited about the feature and have provided positive feedback on its impact.