Designing an AI Chatbot & Boosting Completion Rates by 33%
Imagine learning a new language in just six weeks. Then imagine moving to a foreign country to live, teach, and communicate effectively. Sounds crazy, huh?
This is the challenge faced by Mormon missionaries at the Missionary Training Center (MTC). To help them in their language journey, they use Embark, a language-learning app.
As the lead UX designer, I led the design of a chatbot feature that simulates real-life conversations. Starting with an MVP, we secured stakeholder approval for a beta launch.
Product
Embark App
Team
2 Designers
1 Project manager
2 Developers
Role
Design lead
Background
Embark’s existing tools (flashcards, quizzes) help with vocabulary but lack real-time speaking practice.
Idea
Use AI to simulate realistic conversations
This would allow missionaries to:
Practice thinking quickly in a new language.
Get instant feedback on their responses.
Gain confidence before real-life interactions.
Goal
Launch MVP with stakeholder approval
Develop an AI chatbot that:
Mimics real-life conversations for immersive language practice.
Is simple enough for new learners to use effectively.
Secures approval for a beta launch.
Brainstorming
We explored two potential directions:
Direction 1
Character conversations
AI guides missionaries through structured interactions (e.g., setting up meetings, giving lessons).
Conversations follow a storyline that builds over time.
Direction 2
Selected idea
Scenario practice
Users practice key scenarios (ordering food, setting appointments, sharing scriptures).
Each session is replayable with slight variations based on mission location.
After reviewing these ideas with the team, we decided to focus on Direction 2 (Scenario practice).
Why Scenario practice?
More flexible: Missionaries could repeat scenarios until confident.
Easier to iterate and launch for the MVP.
Directly tied to real-world needs based on missionary feedback.
Research
I led a research effort, surveying 50 missionaries to identify the most useful practice scenarios.
Takeaways
Missionaries expressed a preference for two types of scenarios:
Scenario type 1
Teaching
(e.g., setting up teaching appointments, sharing scriptures).
Scenario type 2
Daily life
(e.g., ordering food, asking for directions).
Design
Once we finalized our approach, I got to work on the design.
Design feature
Replayable scenarios
AI generates slightly different responses each time.
Personalized to match the user’s mission location.
Two versions of the same scenario.
Design feature
Instant feedback
AI corrects mistakes and explains why.
Users receive specific guidance to improve.
Feedback (Left) Learn more sheet (Right)
Design feature
Tap for meaning
Missionaries can tap on unfamiliar words to see translations & save them for later.
Tap for meaning sheet.
Testing
I tested our MVP with 12 missionaries. While they were excited to use it, I identified a problem:
Results
It was overwhelming for new missionaries
Improvements
Based on the testing, I worked on a key improvement.
Improvement
Simplifying responses
I collaborated with our machine-learning engineer to simplify AI responses. We added a proficiency feature to adjust the chatbot’s difficulty to the user’s language level.
Old default response (left) vs. updated beginner response (right)
Testing round 2
We tested the new help features and proficiency levels with 12 more missionaries.
Results
All missionaries completed the conversation
User feedback
“This feature is great. I like that it doesn’t just say something is wrong, but explains why, so I know how to fix it.”
Missionary learning Portuguese
Results: Beta launch
After successful testing, we secured stakeholder approval to launch the chatbot in beta for a few languages. We are working to make this feature available in more languages.