Everybody knows Python, right?!
When we started Pylon and began building our first conversational platform, we chose the tools that let everyone contribute on day one: Python, Django, PostgreSQL, and Redis. As with any foray into uncharted territory, the first attempt took more time and effort than anticipated. As we got closer to having our first assistant, Tasted, ready to go, we found we were also fighting our tools.
Python with Falcon was still the easiest stack for every team member to understand, and our window for making changes was small, so we stuck with Python for the last major push to get Tasted out the door.
After completing Tasted, we had delivered every feature requested for the first version of our platform, but scaling it to introduce new assistants would have been prohibitively expensive. We had painted ourselves into a corner: the product had shipped, but our hope was to grow with a small number of dedicated users so we could evolve without burning a lot of capital.
Sure enough, after getting Tasted onto all of the platforms, we started experiencing timeouts and scalability problems. We focused on refactoring and cleanup, but even after all of the obvious optimizations were done, the problems remained. We knew things had to change before we could move forward.
Come for the OTP…
With some distance from the initial platform work and more breathing room to evaluate our options, the team got together and decided to use our other assistant, The Bartender, as the testbed for evaluating Elixir. We chose Elixir over Clojure because of the successes of WhatsApp (built on Erlang's BEAM, the same VM Elixir runs on) and Bleacher Report, along with some convincing and inspiration from our friend Chris Keathley. We could see our conversation platform needing to become a high-volume messaging platform, and a framework for building websites didn't fit that need. We went all in on Elixir for our new conversation platform.
I began porting our user service to Elixir and was able to design a proper OTP application, with Ecto and GenServers, on top of all of the existing logic we had developed on our Python platform. Concurrently, we built out the functionality for conversations and search. After about a month, we had something that was not only working but fast! We stopped worrying about the traffic levels we were encountering; even with only a rudimentary understanding of the tools we were using, our new conversation platform was more robust, more efficient, and better understood.
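The heart of that port was a handful of GenServers. As a rough illustration (the module and function names below are hypothetical, not our actual user service), a minimal GenServer that holds per-user state looks like this:

```elixir
defmodule UserCache do
  # Minimal GenServer sketch: a process that owns a map of users.
  # Clients talk to it only via messages, so state access is serialized.
  use GenServer

  ## Client API

  def start_link(opts \\ []) do
    GenServer.start_link(__MODULE__, %{}, opts)
  end

  # Asynchronous write: returns :ok immediately.
  def put(pid, id, user), do: GenServer.cast(pid, {:put, id, user})

  # Synchronous read: blocks until the server replies.
  def get(pid, id), do: GenServer.call(pid, {:get, id})

  ## Server callbacks

  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_cast({:put, id, user}, state) do
    {:noreply, Map.put(state, id, user)}
  end

  @impl true
  def handle_call({:get, id}, _from, state) do
    {:reply, Map.get(state, id), state}
  end
end
```

Because each GenServer is an isolated process with immutable state, porting logic from our Python services mostly meant deciding which state belonged in which process.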
…Stay for the Velocity
The amazing part of our transition to Elixir wasn’t the language or framework; it was the effect it had on our team. After the initial bumps and bruises that come with learning new technologies, we were able to deliver the new version of The Bartender on schedule, with new features and fewer resources in production. We now spend more time refining our conversation model and far less effort on benchmarking, optimization, and tweaking the underlying libraries and frameworks. Even better, we saw an increase in usage on our multimodal companion service that validated our company’s position on how to build a useful conversational assistant.
We are consistently able to develop features to a higher degree of quality and with a deeper understanding of the code we have running in production. Immutable data, OTP supervision trees, and a community that focuses on ease of use made it realistic for a small team to architect solutions, and to do so quickly.
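The supervision trees mentioned above are a big part of that robustness: when a worker process crashes, its supervisor restarts it, so one failed conversation can't take down the service. A minimal sketch, with hypothetical names standing in for a real application:

```elixir
defmodule Assistant.Application do
  # Minimal supervision-tree sketch. The names here are illustrative,
  # not Pylon's actual application structure.
  use Application

  @impl true
  def start(_type, _args) do
    children = [
      # A registry for looking up conversation processes by key.
      {Registry, keys: :unique, name: Assistant.Registry},
      # A supervisor for short-lived tasks (e.g. one per inbound message).
      {Task.Supervisor, name: Assistant.TaskSupervisor}
    ]

    # :one_for_one — if a child crashes, restart only that child.
    Supervisor.start_link(children, strategy: :one_for_one, name: Assistant.Supervisor)
  end
end
```

The restart strategy is declared once, at the top of the tree, instead of being scattered through error-handling code — which is much of why the "let it crash" style felt so freeing after Python.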
In the coming months, look for more articles from me and others going into some of the open source work we have been doing as part of Pylon in collaboration with the community. If you are more interested in the conversational assistant side, sign up for our newsletter.
To summarize, we traded some time to market (and took more than a few hits to our respective egos) getting Elixir into our technology stack at Pylon, but there is no way we would be able to provide our conversational assistants on aggressive timelines without it.