Virtuo appoints Babak Hodjat as scientific advisor to the data team
We are delighted to announce that we have appointed Babak Hodjat as our scientific advisor at Virtuo.
Babak is a world-class AI engineer. He started his career at Dejima Inc., working on natural interaction, cognitive assistants, and natural language user interfaces, the technology that paved the way for assistants such as Siri and Alexa. He then co-founded what later became Sentient, together with Antoine Blondeau (currently a Series C investor in Virtuo) and Adam Cheyer, co-inventor of Siri. A few years ago, before the VC boom, Sentient was the world's best-funded AI company. Babak developed evolutionary algorithms at Genetic Finance and eventually became CEO of Sentient.
We’re very proud and humbled that he got interested in Virtuo and agreed to help us, Virtuo Data science team, to get to the next level.
We held our first quarterly session with him at the beginning of Q2 '22 and already learned a lot, which we would like to share here.
Getting ready for feedback
Getting ready for advisory sessions is virtuous in itself: you have to go through self-examination first. It starts with reverse engineering the principles we followed to develop a given algorithm or piece of data infrastructure, and taking a step back to look at how we do things.
- Did we actually develop the algorithm the way we would have wanted, in retrospect? Or according to principles we set for ourselves ab initio?
- Where did we spend time and effort?
We then write it down. That’s a crucial step. We write down:
- What’s the purpose of the algorithm, how we use it or we should use it with the other teams at Virtuo.
- What are the use-cases of the algorithm — which is something we often miss on a day-to-day basis.
- We detail how we proceeded to develop the algorithm, the mathematical models behind it and the technical workflows we put in place.
- What are the weaknesses of the algorithm? What are the next features we want to develop for the algorithm to match the current identified weaknesses?
Thus, even before presenting these analyses to our advisor, we had stepped back to think about what we did: the mere fact of having an advisor, of being ready to expose our work to his criticism, is virtuous in itself, because it had already led to self-examination.
Babak’s feedback
Lesson n°1: double down on how we use algorithms
We usually don’t take enough effort to take a step back on whether we make enough operational use of algorithms, in particular for recommender algorithms. Babak introduced us to a framework for better leveraging such algorithms.
For instance, if an algorithm prescribes a decision for the future, as our fleet sizing algorithm does, the same algorithm should also:
- Simulate decisions, in order to compare the working plan of the operational teams (what they intend to buy or sell) with the plan we recommend.
- Simulate the past with the decisions we actually made (the supply we actually had), and compare the result with the KPIs we actually observed, in order to measure potential bias in the modeling (and possibly correct it).
- Provide retrospective recommendations: what we should have done a year ago, and what we should have done a year ago with perfect knowledge of what happened later. We can then compare these two retrospective recommendations with what we actually did, and think about why our decisions diverged from these optimal plans.
- Finally, run stochastic simulations in our models, to attach confidence intervals to the recommendations.
Stated that way, it sounds obvious, but we frequently use recommender algorithms without even questioning their added value. Such a framework lets us measure the value we can provide as data scientists.
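To make this framework concrete, here is a minimal sketch of what such a harness could look like for a fleet sizing model. Everything here is hypothetical: the simulate stub stands in for the real fleet sizing engine, and the plan and KPI types are simplified to a few numbers. The point is the structure: one simulation core reused for plan comparison, backtesting against actual KPIs, and stochastic confidence intervals.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical, simplified types: a plan is a list of monthly buy/sell
# deltas (number of cars), demand is the corresponding monthly demand.
Plan = List[int]
Demand = List[float]

@dataclass
class SimulationResult:
    revenue: float
    utilization: float

def simulate(plan: Plan, demand: Demand) -> SimulationResult:
    """Stub simulator turning a supply plan and a demand series into KPIs.
    In the real model, this would be the fleet sizing engine itself."""
    fleet, revenue, used = 0, 0.0, 0.0
    for delta, d in zip(plan, demand):
        fleet = max(fleet + delta, 0)
        rented = min(fleet, d)
        revenue += rented * 30.0                # e.g. 30 EUR per rental day
        used += rented / fleet if fleet else 0.0
    return SimulationResult(revenue, used / len(plan))

def compare_plans(ours: Plan, ops_team: Plan, demand: Demand) -> float:
    """Use 1: simulate both the recommended plan and the operational
    team's working plan, and report the KPI gap between the two."""
    return simulate(ours, demand).revenue - simulate(ops_team, demand).revenue

def backtest_bias(past_plan: Plan, past_demand: Demand,
                  actual_revenue: float) -> float:
    """Use 2: replay the decisions we actually made and compare simulated
    KPIs with the KPIs we really observed; the gap estimates model bias."""
    return simulate(past_plan, past_demand).revenue - actual_revenue

def confidence_interval(plan: Plan, demand_sampler: Callable[[], Demand],
                        n: int = 1000) -> Tuple[float, float]:
    """Use 4: stochastic simulation; rerun the plan against n sampled
    demand scenarios and report an empirical 90% revenue interval."""
    outcomes = sorted(simulate(plan, demand_sampler()).revenue
                      for _ in range(n))
    return outcomes[int(0.05 * n)], outcomes[int(0.95 * n)]
```

The retrospective recommendations (the third point) reuse the same core: rerun the optimizer on last year's inputs, with and without hindsight of actual demand, and feed the resulting plans to compare_plans against what we actually did.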
Lesson n°2: think about backward usage of prediction algorithms
In another, similar algorithm, where we recommend acquisition spend to the marketing team, the same framework helped us trace back marketing attribution: when we simulate a spending plan with one campaign removed, we can measure the impact on the ultimate KPIs we follow (sign-up rate, customer base, revenue, etc.) and isolate that campaign's specific contribution. This backward usage of our marketing algorithm proved really helpful in coping with the recent loss of attribution data following Apple's new privacy rules.
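A minimal sketch of that backward usage, under heavy assumptions: simulate_kpis is a hypothetical stand-in for the real marketing model (here a toy diminishing-returns curve), and the campaign names and budgets are made up.

```python
from typing import Dict

SpendPlan = Dict[str, float]  # campaign name -> budget

def simulate_kpis(plan: SpendPlan) -> float:
    """Stub for the marketing model: maps a spend plan to a KPI such as
    sign-ups. Toy response curve with diminishing returns per campaign."""
    return sum(100 * budget ** 0.5 for budget in plan.values())

def attribute(plan: SpendPlan) -> Dict[str, float]:
    """Leave-one-out attribution: re-simulate the plan without each
    campaign and credit it with the KPI drop its removal causes."""
    baseline = simulate_kpis(plan)
    return {
        name: baseline - simulate_kpis({k: v for k, v in plan.items()
                                        if k != name})
        for name in plan
    }

plan = {"search": 50_000.0, "social": 30_000.0, "display": 10_000.0}
print(attribute(plan))  # per-campaign contribution to the simulated KPI
```

This leave-one-out view only captures each campaign's marginal contribution under the model; overlapping campaigns would need something like Shapley-value averaging to share the credit fairly.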
Lesson n°3: genetic algorithms are a game-changer in optimization models
He also gave us insights into genetic algorithms, and we started using them in our fleet sizing model. Until then, we had struggled to escape local maxima, and our fleet sizing optimization was too path-dependent. Thanks to the genetic algorithms applied there, we were able to find an optimum much closer to the global one, while keeping fast convergence.
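As an illustration, here is a toy genetic algorithm on a deliberately multimodal objective, the kind of landscape where greedy or gradient-style search gets trapped in a local maximum. The fitness function, plan encoding, and hyperparameters are all made up for the example; this is a sketch of the technique, not Virtuo's actual fleet sizing model.

```python
import math
import random

HORIZON = 12          # months in the plan
POP, GENS = 50, 200   # population size and number of generations

def fitness(plan):
    """Toy multimodal objective standing in for simulated plan revenue:
    a cosine ripple on a peaked trend creates many local maxima."""
    x = sum(plan)
    return math.cos(x / 5.0) * 100 - abs(x - 42)

def random_plan():
    # A candidate: monthly buy/sell deltas between -10 and +10 cars.
    return [random.randint(-10, 10) for _ in range(HORIZON)]

def crossover(a, b):
    # Single-point crossover: splice two parent plans together.
    cut = random.randrange(1, HORIZON)
    return a[:cut] + b[cut:]

def mutate(plan, rate=0.1):
    # Randomly nudge some genes to keep exploring the landscape.
    return [g + random.randint(-2, 2) if random.random() < rate else g
            for g in plan]

def evolve():
    population = [random_plan() for _ in range(POP)]
    for _ in range(GENS):
        population.sort(key=fitness, reverse=True)
        parents = population[: POP // 4]          # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP - len(parents))]
        population = parents + children           # elitism keeps the best
    return max(population, key=fitness)

best = evolve()
print(sum(best), fitness(best))
```

Because the population explores many regions of the search space at once and crossover recombines good partial plans, the search is far less path-dependent than a single greedy trajectory.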
We are about to hold our next quarterly session, and we're eager to get started.