Towards the end of 2014, I was preparing for a long trip to South Korea. Although I had spent a comparable amount of time there earlier in the year, I was worried my underdeveloped ability to communicate in Korean would become an issue in a country where so few people speak English. Local classes had developed my reading and writing, and spending time with locals had helped with pronunciation, but devising a good way to improve my speaking and listening was proving a challenge. Now that I was returning, I wanted to ensure I was prepared.
On the App Store, I found a few services offering language exchange-style functionality, but many of them were bloated with useless or poorly designed features that detracted from the genuinely good ideas. Some services were too confining in their approach, and others were too cluttered, typically trying to do anything and everything. As a software developer, I had noticed a few interesting iOS 8 technologies emerging around that time, and I was itching to work on something that would give me the opportunity to explore them. Given the issues I was facing improving my Korean, I decided this could be a good learning experience, and an opportunity to carve out my own solution as well.
Roughly a year later, I launched ALEC on the App Store. The service allows Korean and English speakers to find and speak with one another, and to track improvement per conversation through a simple feedback mechanism. While far from a perfect solution, I see it as a foundation to build on, one which will evolve and mature over time. The insights and data I collect post-launch will determine what to focus on next, and highlight issues other people have with learning or improving a second language.
With all that said, the lessons I learned just to get to this point were invaluable, and I think worth sharing. So here are a few of the highlights.
The Assistant Led Interface
Postulate 1: For some, speaking a second language with a native speaker can be a challenging or downright frustrating experience, which can lead to lower motivation, less practice, and eventually stagnated growth.
In exploring root causes and solutions for the problem, a concept which became very compelling was the idea of providing an “assistant” which could offer contextual advice at key steps of the journey. This idea would eventually manifest itself as the Assistive Language Exchange Companion, or ALEC for short.
In designing ALEC as a character, the goal was to have something which could be authoritative, yet friendly and welcoming. A prominent piece of inspiration was Sejong the Great, a Korean monarch responsible for the creation of the official script used today in North and South Korea. As an exceptionally influential person, Sejong appears on coins and bank notes, and continues to be memorialized in huge statues all over Korea.
By tracing and simplifying defining features, ALEC began as a crude caricature of Sejong. Personality was explored incrementally through refinements to jaw shape, facial hair, and brow.
In the beginning, the product evolved alongside, and sometimes almost as a direct consequence of, this iterative process. While exploring how to capture the essential elements of Sejong, suggestions for the product’s visual language emerged.
For example, the first few iterations (above) helped reveal needless complexity early on. Thin, tightly spaced strokes in the character’s face made the overall composition more difficult to digest—especially at smaller sizes. Eventually, pronouncing dominant features and removing unnecessary ones helped better define the character, which in turn helped to simplify and polish the app’s visual language.
The final result was a reduction of previous iterations, relying more on simple geometric shapes over realistic proportions, while still retaining defining features.
Bringing these simplified affordances to UI helped remove unnecessary clutter and increase contrast between content and actions.
This in turn strengthened the overall utility of elements in the app’s design language, and their effectiveness when exploring ALEC as an assistant element.
Designing the Assistant
Postulate 2: On the surface, language exchange clubs are a great opportunity to learn or improve a second language. However, overwhelming aspects prevent them from being of much use for either party involved.
A few of the issues I identified: they require an implicit time commitment, geographical availability, and structured guidance to ensure productivity. In examining solutions from the perspective of building a mobile app, several conclusions were drawn:
- In addressing time commitment (in-person involvement in particular), the ability to participate in a conversation asynchronously would allow for a smaller up-front obligation, making the activity more accessible
- The confines of geographical availability could be eliminated simply by virtue of the internet, so being physically within reach of a conversation partner or language exchange club wouldn’t have to be a limitation
- Changing how information is accessed and represented could address any lack of structure, and an intermediary could be responsible for ensuring productivity between both parties
In exploring that last point, the idea of an assistant lined up perfectly. Through research, the following conclusions were drawn:
- The assistant should be useful without being confusing or hostile
- The assistant should be friendly without being annoying or inept
Bringing these conclusions to content meant providing non-intrusive context in busy workflows, and suggestions in areas requiring decision making. Areas deemed self-documenting wouldn’t require ALEC at all, in order to avoid excessive hand-holding.
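To make the distinction concrete, the rule above can be sketched as a small decision function. This is a hypothetical illustration only; the screen properties, threshold, and function names are assumptions, not taken from the actual app.

```python
# Hypothetical sketch of a rule for deciding when ALEC should surface a
# contextual tip. All names and thresholds here are illustrative.
from dataclasses import dataclass


@dataclass
class Screen:
    name: str
    requires_decision: bool    # does the user face a real choice here?
    is_self_documenting: bool  # is the UI clear enough on its own?


def should_show_assistant(screen: Screen, times_seen: int) -> bool:
    """Show ALEC only where guidance adds value, and stop repeating
    a tip once the user has seen it a few times."""
    if screen.is_self_documenting:
        return False  # avoid excessive hand-holding
    if times_seen >= 3:
        return False  # the user knows this screen by now
    return screen.requires_decision


# Example: a match-setup screen with a real decision gets a tip;
# a plain settings list does not.
matching = Screen("match_setup", requires_decision=True, is_self_documenting=False)
settings = Screen("settings", requires_decision=False, is_self_documenting=True)
```

The point of the sketch is simply that assistance is gated by context, rather than shown unconditionally the way a coachmark often is.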
Although the functionality ALEC provides in the above examples is comparable to more common interface elements such as coachmarks and empty states, a large distinction should be made: Guidance through assistance can provide reassurance and comfort in the context of cold, static information.
Furthermore, as an interface element, assistance has implications for the interfaces of the future. It’s foreseeable that once speech can be genuinely understood by a computer, assistance will enable entirely more empathetic approaches to how people interact with software.
Building a Product
Feature-wise, external feedback played a critical role in defining what the final product offered. Early decisions determining what would make it into the first release were based on interviews and feedback from friends, colleagues, and users met through existing language learning services.
By June 2015, just enough functionality had been built for there to be value. Users could be matched based on language experience and ability, then communicate with each other via voice-only messages. As well, ALEC was at a mature enough point that its usefulness was becoming apparent.
At this point, I began proactively seeking out feedback. Improvements to the experience and functionality were prioritized based on the response from testers who began to use the app with one another.
Leveraging the reach of several large online communities, users interested in learning English or Korean were surveyed and recruited. Involvement in discussions on forums such as Reddit’s Korean subreddit provided invaluable insights into pain points among the app’s target demographic.
Through surveying individual users, goals and experience were evaluated to select good candidates, who were then granted access to beta releases. Short release sprints ensured only incremental changes, which were then measured for effectiveness, allowing functionality and experience issues to be caught relatively quickly.
By scheduling new builds roughly a month apart, new ideas and features stayed focused and testers were kept interested and engaged. As well, dogfooding new features helped provide a concrete idea of the primary goals for the product.
By October 2015, the experience and functionality felt cohesive, and shipping felt almost a bit overdue. A lesson learned from this experience was how easy it is to get distracted from launching while testing internally. Without a concrete date in place, the product’s growth began to stagnate, and changes non-essential to the minimum viable product began creeping their way in.
In an effort to “remove any feature, process, or effort that does not contribute directly to the learning you seek” (Eric Ries, The Lean Startup), I launched with the feature set defined by the third beta. After a few months of App Store rejections (mostly due to missing legal requirements and bad luck), version 1.0 finally went live, just in time for 2016.
During production, several larger ideas were explored, but scrapped for the sake of minimizing scope for version 1.0.
Idea 1: Tutors
One of these ideas was giving tutors or teachers a different treatment so they could use the service as a means of recruiting new students. While speaking with other users is one thing, I expect some learners may require a more academic approach, which could mean consulting with a professional.
As a low-effort/high-impact tweak, this could mean not much more than providing greater visual prominence and higher rankings to a “tutor” tier of users.
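As a rough illustration of how small that tweak could be, a tutor tier might amount to a flag and a ranking boost in partner search results. The multiplier and field names below are assumptions, not a description of how ALEC actually ranks users.

```python
# Hypothetical sketch of a "tutor tier": a flag that boosts a profile's
# ranking in partner suggestions. The weighting is illustrative only.
from dataclasses import dataclass


@dataclass
class Profile:
    name: str
    relevance: float     # base score from the normal matching logic
    is_tutor: bool = False


TUTOR_BOOST = 1.5  # assumed multiplier giving tutors extra prominence


def rank(profiles: list[Profile]) -> list[Profile]:
    """Sort partner suggestions so boosted tutor profiles surface first."""
    def score(p: Profile) -> float:
        return p.relevance * (TUTOR_BOOST if p.is_tutor else 1.0)
    return sorted(profiles, key=score, reverse=True)
```

The appeal of this shape is that it leaves the existing matching logic untouched; the tier only reweights results that were already being computed.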
Idea 2: Bots
An even longer-term idea was to provide pronunciation and grammar correction through AI, and eventually allow users to have conversations directly with bots instead of other users. The implications for how this would affect learning (and knowledge acquisition in general) are enormous.
Unfortunately, a quick survey of existing tech revealed how infeasible this would be, however hilarious the end result might be. Too much would be lost as messages were passed from one error-prone service to another, only to come out the other end completely unintelligible. Certainly an idea for the future, but impractical given the state of speech technology to date.