✈️ 🤖 ☀️ AI agents and custom GPT learnings
The last few months have been intense and transformative, and I can’t wait to share the highlights with you! ✨
Travels ✈️
I had the chance to enjoy the occasional annoyances of traveling again: missing connections, waiting hours at the airport, chasing lost luggage, and having my flight canceled because of thunderstorms. Despite all this, I still hit all the goals of my North American trip: meeting the team, clients, and partners in Vancouver and New York, and making a shorter-than-planned stop in Toronto for Collision.
I was only there for a few hours, but I could feel the energy at Collision: a constant flow of people, all impressed by our booth designs and innovative work!
Four years without visiting the North American team was too long — the value of face-to-face encounters cannot be overstated; it was invigorating!
Next destination: Shanghai, by the end of the year!
Generative AI 🤖
As Feisal pointed out in our team briefing, we are thrilled that our client base has doubled (!) since the beginning of the year. I’m glad that our business development efforts are finally paying off!
I am deeply grateful for the relentless support of our team, which has been juggling more projects than ever and adapting quickly.
Another milestone was our participation in the Generative AI (Interop) conference in Tokyo, where we explored the role of AI assistants in the future of workplaces (slides in Japanese and English). I’m fascinated by the topic of the transformation of the workplace with Generative AI technologies. More coming on this topic soon!
I also had the chance to organize a discussion about AI at the French Chamber of Commerce in Tokyo — an engaging roundtable format with 15 professionals.
ChatGPT for X learnings 💛
Building the ChatGPT customizations posed its own set of challenges, yet it served as fertile ground for learning. It’s easy to get excited by what you see on Twitter daily, with clickbait titles such as “Build an app with ChatGPT in 5 minutes!” — the reality is always slightly more complex.
Our data science team is drafting a post to dig into those issues in detail — but to give you a little bullet-point preview:
- Performance issues: agents built with Langchain or AutoGPT require many API calls to OpenAI, which can take more than 10 seconds before delivering a valuable output to the end user — unacceptable. We had to create our custom implementations instead.
- Streaming is not a luxury: from a UX perspective, we can’t afford to wait for the model to finish generating the output before presenting it to the user. We had to implement streaming across all the platforms that required it — web or mobile.
- Larger contexts are currently our main area of research. We are testing models that accept larger contexts and ensuring they pay proper attention across the full context — without getting lost in the middle!
- Question / answer interfaces come naturally, but they aren’t always the best approach to using LLMs, since end-users often don’t even know what to ask for 🙃 Asking the right question is sometimes harder than getting the correct answer!
- Monitoring — what works today might not work tomorrow. It’s difficult to predict the impact of a slight change in the prompt or when OpenAI updates its model. Monitoring tools and methodologies have started to emerge, which will help us keep better control of the quality.
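To make the streaming point above concrete, here is a minimal Python sketch (the names and the simulated token source are hypothetical, not our actual implementation): a generator yields tokens as they arrive, and the UI layer can repaint with each partial output instead of waiting for the full completion.

```python
from typing import Iterator, List


def stream_tokens() -> Iterator[str]:
    # Stand-in for a streamed LLM response (e.g. an SSE chunk iterator);
    # in production, each token arrives over the network with some latency.
    for token in ["The ", "answer ", "is ", "42."]:
        yield token


def render_streaming(tokens: Iterator[str]) -> List[str]:
    """Accumulate tokens, returning each partial text a UI would display."""
    partials: List[str] = []
    text = ""
    for token in tokens:
        text += token
        partials.append(text)  # the UI would repaint with this partial text
    return partials


partials = render_streaming(stream_tokens())
```

The user sees "The ", then "The answer ", and so on — perceived latency drops to the time of the first token rather than the whole generation.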
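On the monitoring point, one simple methodology is a regression suite over model outputs: after any prompt tweak or model update, re-run a set of fixed inputs and apply cheap checks. A hedged Python sketch (the function and the example policy text are hypothetical, for illustration only):

```python
from typing import List


def passes_regression(output: str, required: List[str], forbidden: List[str]) -> bool:
    """Cheap guardrail: output must mention required terms and avoid forbidden ones."""
    lower = output.lower()
    return all(term in lower for term in required) and not any(
        term in lower for term in forbidden
    )


# Hypothetical check run after every prompt or model change:
ok = passes_regression(
    "Our refund policy allows returns within 30 days.",
    required=["refund", "30 days"],
    forbidden=["i don't know"],
)
```

Keyword checks like this won't catch subtle quality drift, but they flag outright breakage early and cost almost nothing to run on every change.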
What’s next ☀️
As we enter the second half of this year, I’m excited about the new developments in the AI industry and the influx of new hires in our various offices — their fresh perspectives and innovative ideas promise to invigorate our work atmosphere and contribute to our mission!
Remember to stay cool in the summer, step back, and reflect on the transformative power of AI. It’s not just a trend — it’s the future, and we are right here, shaping it with you.