Designing an AI Chatbot & Boosting Completion Rates by 33%

Imagine learning a new language in just six weeks. Then, moving to a foreign country to live, teach, and communicate effectively. Sounds crazy, huh?

This is the challenge faced by Mormon missionaries at the Missionary Training Center (MTC). To help them in their language journey, they use Embark, a language-learning app.

As the lead UX designer, I led the design of a chatbot feature that simulates real-life conversations. Starting with an MVP, we secured stakeholder approval for a beta launch.

Product

Embark App

Team

2 designers
1 project manager
2 developers

Role

Design lead

Background

Embark’s existing tools (flashcards, quizzes) help with vocabulary but lack real-time speaking practice.

Idea

Use AI to simulate realistic conversations

This would allow missionaries to:

  1. Practice thinking quickly in a new language.

  2. Get instant feedback on their responses.

  3. Gain confidence before real-life interactions.

 This was new territory for our team, so we started with an MVP to validate its value.

Goal

Launch an MVP with stakeholder approval

Develop an AI chatbot that:

  1. Mimics real-life conversations for immersive language practice.

  2. Is simple enough for new learners to use effectively.

  3. Secures approval for a beta launch.

Brainstorming

We explored two potential directions:

Direction 1

Character conversations

  • AI guides missionaries through structured interactions (e.g., setting up meetings, giving lessons).

  • Conversations follow a storyline that builds over time.

Direction 2

Selected idea

Scenario practice

  • Users practice key scenarios (ordering food, setting appointments, sharing scriptures).

  • Each session is replayable with slight variations based on mission location.

After reviewing these ideas with the team, we decided to focus on Direction 2 (Scenario practice).

Why Scenario practice?

  • More flexible: Missionaries could repeat scenarios until confident.

  • Easier to iterate and launch for the MVP.

  • Directly tied to real-world needs based on missionary feedback.

Research

I led a research effort, surveying 50 missionaries to identify the most useful practice scenarios.

Takeaways

Missionaries expressed a preference for two types of scenarios:

Scenario type 1

Teaching

(e.g., setting up teaching appointments, sharing scriptures).

Scenario type 2

Daily life

(e.g., ordering food, asking for directions).

Design

Once we finalized our approach, I got to work on the design.

Design feature

Replayable scenarios

  1. AI generates slightly different responses each time.

  2. Personalized to match the user’s mission location.

Two versions of the same scenario.
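To make the replayability concrete, here is a minimal sketch of how a scenario prompt could be varied per session and personalized to a mission location. All names here (`build_scenario_prompt`, `VARIATIONS`) are hypothetical illustrations, not Embark's actual implementation.

```python
import random

# Hypothetical pool of twists per scenario; each replay draws a different one.
VARIATIONS = {
    "ordering food": [
        "The waiter recommends the daily special.",
        "The restaurant is out of the first dish you order.",
    ],
}

def build_scenario_prompt(scenario: str, mission_location: str,
                          rng: random.Random) -> str:
    """Compose a chatbot system prompt with a location-specific twist."""
    twist = rng.choice(VARIATIONS[scenario])
    return (
        f"Role-play the scenario '{scenario}' with a missionary serving in "
        f"{mission_location}. Use local place names and customs. "
        f"This time: {twist}"
    )

# Each session seeds the generator differently, so no two replays read the same.
prompt = build_scenario_prompt("ordering food", "Lisbon, Portugal",
                               random.Random())
```

In practice the twist and location would feed a large-language-model call, but the core idea is the same: small, controlled variations keep repeated practice from feeling scripted.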

Design feature

Instant feedback

  1. AI corrects mistakes and explains why.

  2. Users receive specific guidance to improve.

Feedback (Left) Learn more sheet (Right)

Design feature

Tap for meaning

Missionaries can tap on unfamiliar words to see translations & save them for later.

Tap for meaning sheet.

Testing

I tested our MVP with 12 missionaries. While they were excited to use it, I identified a problem:

Results

It was overwhelming for new missionaries

  1. The AI responses were too long and complex for beginners.

  2. 3 out of 12 users felt overwhelmed and quit mid-conversation.

The chatbot’s responses were too long and complex for new missionaries. During testing, 3 out of 12 users couldn’t finish because they felt overwhelmed. Those who did complete it gave similar feedback, showing the need to simplify responses.

Improvements

Based on the testing, I focused on one key improvement:

Improvement

Simplifying responses

I collaborated with our machine-learning engineer to simplify AI responses. We added a proficiency feature to adjust the chatbot’s difficulty to the user’s language level.

Old default response (left) vs. updated beginner response (right)
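One way a proficiency feature like this can work is by injecting level-specific constraints into the chatbot's system prompt. The sketch below is purely illustrative (the function and rule names are assumptions, not the production prompt logic):

```python
# Constraints appended to the system prompt per proficiency level.
PROFICIENCY_RULES = {
    "beginner": "Use short sentences (under 8 words) and common vocabulary only.",
    "intermediate": "Use everyday vocabulary and keep one idea per sentence.",
    "advanced": "Speak naturally, including idioms and longer sentences.",
}

def apply_proficiency(base_prompt: str, level: str) -> str:
    """Append level-specific constraints so replies match the learner."""
    rule = PROFICIENCY_RULES.get(level, PROFICIENCY_RULES["intermediate"])
    return f"{base_prompt}\n{rule}"

beginner_prompt = apply_proficiency(
    "Role-play ordering food in Portuguese.", "beginner"
)
```

The design benefit of prompt-level constraints is that difficulty can be tuned per user without retraining or maintaining separate models.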

Testing round 2

We tested the simplified responses and new proficiency levels with 12 more missionaries.

Results

All missionaries completed the conversation

  1. 100% completion rate (33% improvement).

  2. Users found the chatbot easier to use and more effective.

After the changes, all 12 missionaries finished the conversation, up from 9 of 12 in the first round, a 33% improvement. The simplified, level-appropriate responses removed the biggest barrier for beginners.

User feedback

“This feature is great. I like that it doesn’t just say something is wrong, but explains why, so I know how to fix it.”

Missionary learning Portuguese

Results: Beta launch

After successful testing, we secured stakeholder approval to launch the chatbot in beta for a handful of languages, and we are working to bring the feature to more.