Incorporating AR to immerse language learners struggling with reading Japanese
Kawaru is a Japanese language app that helps learners read kanji using AR technology.
I was tasked with researching and identifying key features to build an MVP for Japanese language learners.
Japanese uses three writing systems in a given sentence: hiragana (represents sounds), katakana (also represents sounds, but used for spelling foreign words), and kanji (each character represents a concept or idea).
Learning kanji is considered the most difficult aspect of Japanese: each kanji has multiple readings depending on context, and learners must also memorize stroke order to write it.
These factors prevent learners from fully immersing themselves in Japanese writing and reading even if they have mastered the other two writing systems.
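To make the reading problem concrete, the one-to-many mapping from a kanji to its readings can be sketched as a simple lookup (an illustrative toy example, not Kawaru's actual data model; the entry for 日 is real, but the structure is assumed for illustration):

```python
# One kanji maps to several readings; which one is correct
# depends on the surrounding word (context).
KANJI_READINGS = {
    "日": {
        "onyomi": ["ニチ", "ジツ"],  # Sino-Japanese readings, in katakana
        "kunyomi": ["ひ", "か"],     # native Japanese readings, in hiragana
    },
}

def readings(kanji: str) -> list[str]:
    """Return all known readings for a single kanji, or [] if unknown."""
    entry = KANJI_READINGS.get(kanji, {})
    return entry.get("onyomi", []) + entry.get("kunyomi", [])

print(readings("日"))  # → ['ニチ', 'ジツ', 'ひ', 'か']
```

This is why a bare English translation is not enough for a learner: the value lies in surfacing the context-appropriate reading in hiragana or katakana.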
Since kanji are ideograms, incorporating AR technology seemed a fitting solution to help learners identify and read unfamiliar kanji they encounter in a textbook or outside the setting of a classroom.
I needed to find out what difficulties learners face when studying new kanji, whether they were open to using AR to help with kanji, and how AR is currently used in language learning.
According to a 2017 Harvard Business Review article, AR has already redefined instruction, training, and coaching to improve workforce productivity. AR in language learning, however, is still in its infancy, with the only major contender being Google Translate via Google Lens.
However, with strict travel restrictions still imposed by countries like Japan, there is value in adapting AR to help language learners immerse themselves no matter their location.
To discover the strengths and weaknesses of current apps Japanese language learners use to study kanji, I conducted a competitor analysis and found:
The most common ways to look up kanji were typing, drawing, searching by radicals, or voice input
Google Translate was the only app with an option to translate by camera using AR, though its translations were often inaccurate
Jisho and Yomichan provided the most accurate translations and customizations specific to Japanese language learning
To better understand what Japanese language learners value in a resource, their main goals, and their current pain points with learning Japanese, I conducted user interviews with six participants.
The responses validated my assumptions that kanji is the biggest obstacle in learning Japanese and that learners are open to using AR to help them with kanji.
said their main goal is to become fluent to converse with native speakers and consume native material
found kanji to be the most difficult aspect of learning Japanese due to memorizing all the readings and stroke orders
use multiple resources to learn Japanese but prefer a more centralized approach
never thought about using AR to help them with Japanese but were willing to try it out to learn kanji
Upon finding commonalities from the user interviews, I focused on outlining how users learn new kanji to explore possible solutions to ease their kanji learning.
Diving deeper into the first task of learning new kanji, searching, learners said they most commonly look up kanji by pasting it into the search bar of their favorite dictionary, searching by radicals, or drawing it by hand.
Searching by radicals can be tedious due to sifting through an array of options, whereas drawing by hand can be time-consuming for kanji with stroke counts in the double digits. Copying and pasting an unknown kanji is the easiest and fastest solution, but what happens if it's a kanji on a restaurant menu or in the headline of a Japanese news segment?
Having a search-by-camera option felt fundamental to addressing this issue and expediting the kanji learning process. This is similar to Google Translate, but simply having the English translation is not useful to a Japanese language learner. Using a phone camera to identify the kanji through AR, with the correct reading appearing above it, would improve the learner's reading skills across all three writing systems.
How users will navigate the app and what they will accomplish in the MVP was the next challenge. It needed to be simple and straightforward for users to easily get the readings and meaning of the kanji using the app and their phone camera.
I kept branding simple to focus attention on designing the AR aspect. I chose the name Kawaru which seemed fitting with the inclusion of AR as it means 'to transform' in Japanese.
Sketching out many versions of the home screen led me to placing the four different ways to look up unknown kanji as icons front and center.
To aid participants in visualizing AR during the tasks, I used a mix of Figma's Smart Animate and transition screens for my high-fidelity prototype.
In-person testing was conducted with the high-fidelity prototype pre-loaded on a phone, while remote testing was conducted using Zoom and phone screen sharing. There were five participants in total.
For participants unfamiliar with AR, I had them try Google Translate to experience it beforehand as a reference point.
All three tasks were completed with relative ease by all five participants, and every participant said they would use the app again to help them with kanji due to its convenience.
The main issue during user testing occurred when participants were asked to change the language input and output. (Fig 1. and Fig 2.) I learned that participants weren’t used to Japanese being broken down by writing systems on translation apps.
To fix this, I contained both the input and output options in one element, so that when tapped, a pop-up menu shows all of the language options at a glance. (Fig 3. and Fig 4.)
Along with these changes, I polished the home screen for a more cohesive design throughout the app.
1. Search by camera option using AR to identify kanji
2. Translate kanji to kun'yomi and on'yomi readings in hiragana or katakana
3. Save photos and bookmark kanji readings to study in the future
This project was challenging but also allowed me to explore the language-learning space, where I often spend my free time. It required the most background research of any project I've done, as I was unfamiliar with AR, its current market trends, and its capabilities. Additionally, I struggled with how to explain the different Japanese writing systems and simplify them into a digestible brief.
During prototyping, it was difficult to mimic AR features convincingly enough to give users a realistic experience. I originally opted to have users rely completely on Google Translate for the AR portion, but with guidance from my mentor, I explored different resources and solutions to create a prototype that conveyed AR cues and transitions.
Moving forward, I will continue iterating, testing, and possibly exploring more features that can utilize AR.