Inovia CTO Summit with Amii/Deepmind @UofA

Hugues Lalancette
Inovia Conversations
5 min read · Jan 9, 2018

We had a blast hosting our 6th annual CTO summit with Amii/DeepMind at UofA with 12 of our CTOs back in November. Over the years, we’ve found that having a small group of CTOs get together in a relaxed setting enables many invaluable Aha! moments and helps build lasting relationships across our community.

So many interesting questions have emerged in the last few years, as machine learning has become a universal ingredient of the modern startup stack, that we decided to address them head-on with some of the leading minds in the field, including special participation by Rich Sutton and Adam White of DeepMind. We also spent quality time with Amii/UofA researchers and realized in awe that some of their algorithms had achieved superhuman performance at nearly all the Atari games we could think of… 🤓

Looking back at what was top of mind for CTOs, highlights can be categorized into three broad questions, which we’ll share responses to below:

  1. How do CTOs overcome new challenges specific to AI?
  2. How can startups build an edge using machine learning today?
  3. Where is the bleeding edge research at?

1. How do CTOs overcome new challenges specific to AI?

One challenge that AI-enabled startups face is that building products requires fundamentally different approaches and infrastructure. Andrew Ng has a great quote capturing this idea:

“If a product manager designing a chatbot goes to an engineer and draws a wireframe and says, ‘Please make the speech bubbles look like that,’ the engineer is going to say, ‘I don’t care what the speech bubbles look like, I need to know what my chatbot is supposed to say.’”

In short, AI is reshaping from the ground up how internal teams are organized and how they interact with one another. In this state of flux, many CTOs described how returning to first principles and effective communication can help create alignment between their teams and the rest of the organization.

Geordie Henderson, SVP Engineering & Product Operations at Bench, had insightful thoughts on this, having previously experienced the changes required to scale technology teams at Hootsuite:

In short, technology teams need to be *visible* and *predictable*:

  • Visible: enable your company to see what R&D is doing
  • Predictable: develop a steady pace for shipping new features

To accomplish this, best practices Geordie saw work well in action included:

  • Hold periodic all-hands meetings to demo new features, and invite everyone
  • Be specific about what features can do, to manage expectations against AI hype
  • Define measurable KPIs and communicate them back to the organization

Another interesting discussion surfaced on how integrating AI talent into a growth team architecture helps connect technology to business objectives. The conversation alluded to some of the best practices that were well covered by our friends at YC, who wrote an excellent piece on how to set up, staff and scale a growth program.

2. How can startups build an edge using machine learning today?

Jimoh Ovbiagele, who co-founded Ross Intelligence and is now leading technology and product as CTO, had a great quote summarizing his view on AI:

“We’re in the late 1800s, Karl Benz just invented the automobile, people are calling it a teleportation machine. We just call it a faster way to get from A to B.”

Jimoh also shared a great update on learnings at Ross. These learnings reflect a couple of broad trends that highlight how startups are building an edge using machine learning:

Access to unique and clean data is the biggest source of competitive advantage

  • This view is shared across our portfolio and is based on the belief that open source algorithms have already become commoditized.
  • Startups also need enough *real* data to split their datasets into training, validation and test sets. Otherwise, model overfitting will go undetected.
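As a minimal sketch of that last point (assuming scikit-learn and a toy stand-in dataset, not Ross's actual pipeline), a three-way split might look like:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy stand-in for real, cleaned production data.
X = np.arange(1000).reshape(-1, 1)
y = X.ravel() % 2

# Carve out a held-out test set first, then split the remainder
# into training and validation sets (roughly 60/20/20 overall).
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.25, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # 600 200 200
```

The validation set is used to tune models along the way; the test set is touched only once at the end, so overfitting shows up instead of hiding.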

Measures of performance have tradeoffs

  • Defining performance is incredibly subjective. For instance, when searching for the relevant case law, an algorithm could label every result as irrelevant and achieve high accuracy without creating any customer value. Defining what to optimize for needs to be constantly fine-tuned.
  • Similarly, it is critical that performance measures are guided by user expectations. Anecdotally, our friends at Airbnb realized there was a huge gap between how their professional photographers ranked photos (optimizing for design and aesthetics) and what users were actually looking for (optimizing for coziness).
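To make the first point concrete, here is a hypothetical sketch (plain Python, made-up numbers) of how a model that labels every search result as irrelevant can still score high accuracy:

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions that match the true labels.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def recall(y_true, y_pred):
    # Fraction of truly relevant items the model actually found.
    hits = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    positives = sum(y_true)
    return hits / positives if positives else 0.0

# Hypothetical search task: only 2 of 100 candidate cases are relevant (1).
y_true = [1, 1] + [0] * 98
y_pred = [0] * 100  # a lazy model that calls everything irrelevant

print(accuracy(y_true, y_pred))  # 0.98, looks impressive
print(recall(y_true, y_pred))    # 0.0, finds nothing useful
```

Which metric to optimize (accuracy, recall, precision, or something product-specific) is exactly the subjective choice the discussion was about.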

Startups should focus on applied research (vs. fundamental research)

  • Focus on reproducing results achieved at the bleeding edge
  • Optimize for test speed — time required to build models should shrink from weeks to days over time
  • Invest in test automation, diagnostic tools and infra to ingest, clean and parse data as fast as possible

Sourcing talent requires deep local connections in AI hubs

  • Interestingly, many startups these days are finding great ROI in hiring master-level students in computer science or math (as opposed to PhDs).
  • Deep connections with AI superclusters enable partnerships with research organizations, such as the one Ross and Amii just announced. These types of partnerships are powerful, creative ways to engage the open source community, for instance by building momentum around specific research topics or datasets.

3. Where is the bleeding edge research at?

Our afternoon sessions with Rich and Adam were undoubtedly the highlight of the day. What became clear is that a higher level of thinking is required to understand the ideas fueling the bleeding edge. To move from narrow AI to a more generalized form, a radically different approach is required, one that ultimately brings us closer to understanding how the human mind works.

As Rich liked to put it, the supervised learning techniques creating most of the commercial value today don't tend to scale elegantly. Bottlenecks emerge when models need to be fed large amounts of cleaned, labelled data. Collecting, curating and labelling data is just painful (case in point).

This is precisely why reinforcement learning (and DeepMind) are booming: instead of having human experts feed labelled data to a complex system that already incorporates handcrafted learning techniques, algorithms such as AlphaZero have achieved superhuman performance through reinforcement learning from games of self-play, at Go, chess and now shogi (Japanese chess).

AlphaZero was given no prior domain knowledge other than the rules of the games and quickly (<24hrs) learned how to master them. This is huge since these programs don’t require labelled data — instead, they start from a blank slate, learn directly in the operating environment and we’re now starting to see them generalize across multiple adjacent games.
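As an illustrative toy (not AlphaZero's actual algorithm, which pairs self-play with deep networks and tree search), tabular Q-learning shows the core idea of learning from reward alone, with no labelled examples:

```python
import random

# Toy corridor: states 0..4, reward only for reaching the right end.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, 1)  # step left or step right

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma = 0.5, 0.9
random.seed(0)

for _ in range(500):  # episodes of pure trial and error
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS)  # explore by acting randomly
        s2, r, done = step(s, a)
        # Q-learning update: only the reward signal teaches the agent.
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned values now prefer "right" in every non-goal state.
print(all(Q[(s, 1)] > Q[(s, -1)] for s in range(GOAL)))
```

No one ever tells the agent which move is correct; the values it learns from raw environment feedback end up encoding the optimal policy, which is the blank-slate property that makes these methods so appealing.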

Another interesting insight that emerged from our discussion was around how the GAFAs of the world are using the breakneck speed of innovation to their advantage. For instance, research powerhouses like DeepMind open source their work knowing that by the time startups assimilate the latest techniques, they’ll still be two years ahead.

Overall, the conversation with Rich and Adam reinforced our shared belief that startups should focus on finding a product edge as opposed to trying to compete on core technology and fundamental research at the bleeding edge.

Recapping the year in AI is beyond the scope of this post but we’ve found this one by Denny Britz to be helpful in summarizing why reinforcement learning is huge — and will continue to be.

🎉 We'd like to thank Cameron Schuler at Amii for his amazing contribution to the event, as well as Rich Sutton and Adam White for sitting down with our CTOs and inspiring us. Looking back at the evolution of AI over the last few years, we feel privileged to be building this community and helping bring new ideas to the world. 🎉
