The AI Effect: From Delivering Content to Creating Impact
By changing our focus from content delivery to impact creation, we can unlock a new dimension of sustainable improvement for MSMEs around the world
The technological landscape is constantly evolving, presenting new avenues to enhance our methods of empowering entrepreneurs around the world. In light of this, we recently ran a small but enlightening experiment comparing traditional static content delivery with a forward-thinking AI-based approach, using a lesson on Business Value Propositions for our entrepreneurs in Thailand.
Rationale Behind the Experiment
Our motivation is to shift our focus from content delivery to business impact. Our early goal is to evaluate how AI and Large Language Models (LLMs) could contribute to this vision despite the technical and linguistic constraints of our target users.
Our traditional methodologies function in a stable, programmatic manner, very similar to the defined flow shown below:
An AI-based methodology, by contrast, is more akin to the probabilistic model shown below: each user’s journey dynamically turns, progresses, or retreads content based on a real-time assessment of the user’s circumstances and needs. While this looks complicated to manage, that is precisely the power of handing the flow to an AI. Content can incorporate conversational context and scale up or down to a level that suits the user’s proficiency. It would be impossible to pre-define all of these paths for every potential use case, but with AI we can offload that burden and focus on business impact instead.
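To make the contrast concrete, the AI-managed flow can be sketched as a simple loop in which each user reply is assessed before the next piece of content is chosen. This is a minimal illustration, not our production system: the function and topic names are hypothetical, and `decide_next_step` stands in for a real-time LLM assessment.

```python
from dataclasses import dataclass

# Illustrative lesson topics for a value-proposition lesson (hypothetical)
LESSON_TOPICS = ["customer segment", "pain point", "value proposition"]

@dataclass
class Session:
    topic_index: int = 0
    difficulty: int = 1  # 1 = simplest phrasing, 3 = most advanced

def decide_next_step(user_message: str) -> str:
    """Stand-in for a real-time LLM assessment of the user's reply.
    A production system would send the conversation to an LLM and parse
    its verdict; here we use a trivial keyword heuristic instead."""
    text = user_message.lower()
    if "confused" in text or "?" in text:
        return "retread"      # re-explain the current topic
    if "too easy" in text:
        return "scale_up"     # raise the difficulty level
    return "advance"          # move on to the next topic

def step(session: Session, user_message: str) -> str:
    """Advance, retread, or rescale the lesson; return the topic to teach."""
    verdict = decide_next_step(user_message)
    if verdict == "advance" and session.topic_index < len(LESSON_TOPICS) - 1:
        session.topic_index += 1
    elif verdict == "scale_up":
        session.difficulty = min(session.difficulty + 1, 3)
    # "retread" leaves topic_index unchanged, so content is re-delivered
    return LESSON_TOPICS[session.topic_index]
```

The point of the sketch is that no path is pre-authored: the branching lives in one assessment function, which an LLM can perform over arbitrary user circumstances.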
Experiment Background
Interpretation
- Languages: The traditional content was manually created and translated into Thai, Shan, and Burmese. The AI method, by contrast, was delivered to a test audience solely in Burmese. As of this writing, LLMs cannot produce adequate output in many of our target languages, most notably Burmese, where direct LLM output is often nonsensical. We therefore employed a mixed-model methodology: the LLM handles input and output in English, and a machine-based translation service renders the output in the user’s native language. Our initial studies show that Burmese is one of the most difficult languages for this method, and the translations suffer from excessive formality. However, this benchmark provides an important foundation for the future scalability of AI solutions: if the approach succeeds in Burmese, we can safely assume that over 100 world languages are well within reach today.
- Observation Count: 33 users engaged with the traditional content, compared to 46 for the AI-driven method. While these counts are far too low to draw significant conclusions, they represent a meaningful starting point and provide sufficient data to direct subsequent work.
- Average Messages: Traditional methods averaged 10.5 messages per user interaction, with AI streamlining this to 4.6 messages.
- Session Length: Traditional methods clocked in at an average of 11 minutes, whereas AI expedited these interactions, averaging 7.3 minutes.
- In-Depth Engagement: Engagement metrics revealed that 18 users fully completed the traditional content. The AI method, meanwhile, had 20 engaged users, defined as users who sent five or more messages to the AI bot. With a dynamic, personalized AI approach this definition is necessarily more subjective; our goal here was to measure how many users reached a stage of value-add.
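The mixed-model language pipeline described above (LLM reasoning in English, machine translation into the user’s language) can be sketched as three steps. This is a hedged illustration under assumed names: `translate` and `call_llm` are placeholder stubs, not a specific vendor API.

```python
def translate(text: str, source: str, target: str) -> str:
    """Stub for a machine-translation service (e.g. a cloud MT API).
    Here it just tags the text so the flow is visible and testable."""
    if source == target:
        return text
    return f"[{target}] {text}"

def call_llm(prompt_en: str) -> str:
    """Stub for an English-only LLM call."""
    return f"(English reply to: {prompt_en})"

def reply_to_user(user_message: str, user_lang: str = "my") -> str:
    """'my' is the ISO 639-1 code for Burmese."""
    # 1. Bring the user's message into English for the LLM
    message_en = translate(user_message, source=user_lang, target="en")
    # 2. Generate the pedagogical response in English
    reply_en = call_llm(message_en)
    # 3. Return the reply in the user's native language
    return translate(reply_en, source="en", target=user_lang)
```

Because the LLM only ever sees English, swapping in another target language is a configuration change to the translation step rather than a new model capability.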
Engaged User Results
Observations and Insights
Traditional Content: Though reliable, the traditional content produced extended session times that could be further optimized for efficiency. In addition, users were only able to complete pre-defined pieces of the business value proposition and were asked to put the pieces together on their own (see examples in the appendix). Moreover, users demonstrated a poor understanding of the content, picking the correct multiple-choice answer only 40% of the time.
Dynamic AI Content: Users received business value propositions that were not only improved but also highly personalized. Despite shorter interactions, their quality and relevance were amplified (see examples in the appendix). Some translation challenges arose, especially with slang, but the AI showed an impressive ability to remove or correct these instances. The AI format was not specifically designed to include a quiz component; however, it integrated Q&A-style exchanges to assess user understanding before moving forward or backward in the instruction.
Future Directions
Platform Refinement and UI: Our experiment was bound by certain constraints. For instance, users were delivered a link that took them from Facebook Messenger to an external web app, momentarily departing the familiar Facebook ecosystem, a factor that depressed our observation counts. Facebook warns users when they leave the ecosystem, which led to large lapse percentages at the top of the funnel. Future strategies will integrate seamlessly within platforms like Facebook Messenger to overcome such challenges and provide a more familiar experience.
Warm State Model: Notably, in this experiment the AI initiated interactions from a cold state, devoid of any prior user data. Yet after assimilating information from just 2–3 messages, it showed enhanced retention and personalization, offering a glimpse of what’s to come. Future iterations will leverage a warm state, equipped with pre-existing user data, to elevate this efficacy and personalization from the first message of the lesson. Instead of pre-planned lessons, users will experience a variety of topical messages that address the details of their specific circumstances and industry, while the difficulty dynamically scales up or down based on their needs.
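Warm-state seeding can be as simple as folding known profile fields into the system prompt before the first lesson message. The sketch below is illustrative only; the profile field names and helper are hypothetical, not our schema.

```python
def build_system_prompt(profile: dict) -> str:
    """Fold any known user data into the LLM's system prompt so that
    personalization starts from message one rather than message 2-3."""
    base = (
        "You are a business coach teaching value propositions to a "
        "micro-entrepreneur. Adjust difficulty to the user's level."
    )
    known = [f"- {key}: {value}" for key, value in profile.items() if value]
    if not known:
        return base  # cold start: no prior user data available
    return base + "\nKnown user context:\n" + "\n".join(known)

# Hypothetical warm-state profile gathered from earlier interactions
prompt = build_system_prompt({
    "industry": "street food vendor",
    "language": "Burmese",
    "prior_lessons": "pricing basics",
})
```

An empty profile degrades gracefully to the cold-state behavior used in this experiment, so both modes can share one code path.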
Enriched Content: Furthermore, while the current AI delivery offered only textual replies, upcoming versions aim to be more multifaceted, integrating images and other engaging media to improve the user experience.
User Behavior Refinements: Finally, we are already incorporating AI fine-tunings that reflect the user behavior in our experimental population. For example, providing sample replies alongside open-ended options gives our target population an easy way to respond with rich context without having to laboriously type long messages. As we gain more experience with our users, we can continually refine and improve this experience, providing the AI with richer context while maintaining an easy-to-use interface for entrepreneurs.
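The sample-reply affordance can be expressed as a message that pairs tappable suggestions with free-text input. The payload shape below follows Facebook Messenger’s quick-reply format; the helper name and payload values are our own illustration, not a production contract.

```python
def with_sample_replies(question: str, samples: list) -> dict:
    """Build a Messenger-style message: the user can tap a suggested
    reply or ignore the suggestions and type a free-form answer."""
    return {
        "text": question,
        "quick_replies": [
            {"content_type": "text", "title": s, "payload": s.upper()}
            for s in samples
        ],
    }

msg = with_sample_replies(
    "Who are your main customers?",
    ["Local families", "Tourists", "Other businesses"],
)
```

Each tap still arrives as a normal text message, so the AI receives the same rich context whether the user taps a suggestion or types their own reply.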
User Privacy, Security, and AI Guardrails: One risk of a stochastic, uncertain AI-driven approach is the potential for a model to deliver unintended material to users. OpenAI has devoted extensive time and research to preventing obviously harmful responses, but these methods are still developing, and there may be cultural gray areas that have escaped the existing purview. To manage this risk, we are implementing the developing best practices of Human-in-the-Loop (HITL) review, message regeneration, and quick-and-easy user feedback to rapidly identify and mitigate any unforeseen sources of error in the chatbot. We also plan to adopt improved best practices for mitigating AI hallucinations, which could otherwise provide users with inadvertently false and potentially damaging information. All of these guardrails must operate in tandem with a keen focus on user privacy and security, with the utilization of Microsoft’s OpenAI API and data security practices leading the way for this important initiative.
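The guardrail flow just described can be sketched as a small gate on every outbound message: a safety check, a bounded number of regenerations, and escalation to a human reviewer when all candidates fail. This is a hedged sketch; `is_safe` stands in for a real moderation check, and the names are illustrative.

```python
def is_safe(message: str) -> bool:
    """Placeholder for a moderation check on a drafted reply; a real
    system would call a moderation API or classifier here."""
    return "unsafe" not in message.lower()

def deliver_with_guardrails(drafts, max_regenerations=2):
    """`drafts` is an iterator of candidate replies (e.g. successive
    LLM regenerations). Returns (message, escalated_to_human)."""
    for attempt, draft in enumerate(drafts):
        if is_safe(draft):
            return draft, False          # safe: deliver to the user
        if attempt >= max_regenerations:
            break                        # stop regenerating
    # Every candidate failed the check: hold the message for HITL review
    return None, True
```

Bounding the regeneration count keeps latency predictable while guaranteeing that anything the checks cannot clear reaches a human rather than the user.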
Conclusion
We aspire not only to improve content delivery but to enhance its impact, personalization, and speed. It’s about crafting an experience that’s faster, better, and above all, more impactful. We’re just barely getting started on this journey, but it’s already clear that this technology offers an incredible opportunity to expand our reach and our impact simultaneously, in ways that were unimaginable as recently as a year ago.