
“If anything is good for pounding humility into you permanently, it’s the restaurant business.” — Anthony Bourdain

I’ve been incredibly fortunate to have grown up in the restaurant business. My dad worked his way up from a busboy in his 20s to an operator focused on scaling full-service and QSR concepts across multiple vertically integrated restaurant groups. For the past decade, my mom has handled the intricacies of licensing and permitting for everyone from large-scale restaurant groups to sole proprietors. My sister has worked in restaurants for the past 15 years, going from line cook to chef to operations director, across full-service restaurants and large hospitality organizations. They all say the same thing: food is hard work, a nightmare to scale, and an even harder business.


What happens when we can facetune to something “better”, generate to something familiar, and filter to something different.

Sources: Engadget, Dazed, Instagram, BeautyGAN

Over the past two weeks I’ve read multiple pieces that sit on opposing sides of what we crave as consumers and in our digital self-expression. They all speak to a duality: we are both starved for individuality and driven towards homogeneity. These points are communicated to us through some interesting trends in technology, social media, and pop culture.

Beauty_GAN applied to Kylie Jenner

Beauty_GAN (not to be confused with BeautyGAN) is a sparsely documented implementation of a GAN that utilizes Instagram makeup trends to generate new styles, which Dazed put on Kylie Jenner’s face. The resulting imagery feels very cherry-picked (not a rarity, based on my experience with GANs), but the key point of the article is that these datasets and inputs are continually built with human-in-the-loop biases. …
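Because Beauty_GAN itself is sparsely documented, the sketch below is not its implementation. It is a minimal, generic GAN training step (written in PyTorch, my choice of framework) that shows the mechanism being described: a generator proposes images, a discriminator scores them against the training set, and whatever biases live in that scraped dataset are exactly what the generator learns to reproduce.

```python
# Minimal, illustrative GAN training step. This is NOT Beauty_GAN's code;
# it only sketches the generic recipe the article is discussing.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 3 * 32 * 32  # small sizes for illustration only

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images: torch.Tensor) -> None:
    """One GAN update on a batch of real images, flattened to (B, IMG_DIM)."""
    batch = real_images.size(0)
    noise = torch.randn(batch, LATENT_DIM)

    # Discriminator: push real images toward 1 and generated images toward 0.
    fake = generator(noise).detach()
    d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fake), torch.zeros(batch, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator output 1 on generated images.
    g_loss = bce(discriminator(generator(noise)), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Example: one step on a random stand-in batch (in place of real images).
train_step(torch.rand(16, IMG_DIM) * 2 - 1)
```

The bias point follows directly from this setup: the discriminator only rewards images that resemble the training set, so whatever the scrapers and curators selected is what the generator converges toward.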


Thoughts on our future avatar identities and how to build avatar-first products

Me using various avatar apps — From left to right: Memoji, Facemoji, FaceRig, FaceHub, WeMoji, Puppemoji

Over the past year or so, mainstream next-gen avatars have gone from science fiction to a probable future. While previous avatars were digital characters controlled via input devices in worlds like Second Life, I define next-gen avatars as digital assets that are manipulated mostly in real time by physical beings (a la Animoji). Today these avatars are driven mostly at the head/face level, but they will eventually extend to the full body.

Apple Animoji

The emergence of next-gen avatars has been driven by the proliferation of components like the depth-sensing camera in the iPhone X and better edge compute for real-time face tracking. …
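Mechanically, most of these systems reduce to the same loop: a face tracker emits per-frame expression coefficients (Apple exposes these as ARKit blendshapes), which are smoothed and retargeted onto an avatar rig. The sketch below is a minimal illustration of that retargeting step; the rig parameter names and the mapping are hypothetical, not any particular vendor’s pipeline.

```python
# Minimal sketch of driving an avatar rig from face-tracking output.
# The blendshape names mirror ARKit-style coefficients, but the mapping and
# rig parameter names here are hypothetical, for illustration only.
from typing import Dict

# Map tracker blendshapes -> avatar rig controls (hypothetical rig names).
BLENDSHAPE_TO_RIG = {
    "jawOpen": "mouth_open",
    "eyeBlinkLeft": "blink_l",
    "eyeBlinkRight": "blink_r",
    "browInnerUp": "brow_raise",
}

class AvatarDriver:
    """Retargets per-frame face-tracking coefficients onto an avatar rig,
    with exponential smoothing to hide tracker jitter."""

    def __init__(self, smoothing: float = 0.5):
        self.smoothing = smoothing
        self.rig_state: Dict[str, float] = {v: 0.0 for v in BLENDSHAPE_TO_RIG.values()}

    def update(self, blendshapes: Dict[str, float]) -> Dict[str, float]:
        for shape, rig_param in BLENDSHAPE_TO_RIG.items():
            target = min(max(blendshapes.get(shape, 0.0), 0.0), 1.0)
            previous = self.rig_state[rig_param]
            # Exponential moving average: blend the new frame with prior state.
            self.rig_state[rig_param] = (
                self.smoothing * previous + (1.0 - self.smoothing) * target
            )
        return dict(self.rig_state)

# Example: one frame of tracker output (coefficients in [0, 1]).
driver = AvatarDriver()
frame = {"jawOpen": 0.7, "eyeBlinkLeft": 0.1, "eyeBlinkRight": 0.1, "browInnerUp": 0.3}
print(driver.update(frame))
```

The tracking layer is increasingly commoditized; the retargeting, smoothing, and art direction on top of it are where avatar products tend to differentiate.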


Hatsune Miku and Gorillaz were just the beginning.

From left to right, top to bottom: Galaxia, Miquela, Shudu, Blawko, Avalon, Brenn, Donny Red, Perl

Over the past year, Americans have become increasingly aware of the digital celebrities living among us. They’ve infiltrated our social media feeds, Twitch streams, advertising campaigns, and even fashion.

Having been immersed in this space for some time now, I figured it would be helpful to break down this developing market by talking through some of the history of digital celebrities, what’s happening now, and how they could change our world.

I broke this into four parts:

  • Part I: A primer on digital celebrities
  • Part II: How do digital celebrities progress?
  • Part III: The Cascading Effects of Digital Celebrities
  • Part IV: Remaining Questions and…


From a 2015 email to a seed investment: How Wayve uses novel machine learning to bring autonomous vehicles to the masses.

TL;DR — Compound led Wayve’s seed round. Wayve is an autonomous vehicle company utilizing a unique machine learning approach to drive cars autonomously, with little data, in novel environments.

In 2015 I read a research paper on semantic segmentation for scene understanding called SegNet. I sent one of the authors, Alex Kendall, an email congratulating him on his work and asking about commercialization plans.
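Semantic segmentation, for readers unfamiliar with the term, assigns a class label to every pixel in an image (road, car, pedestrian, and so on), which is what makes it useful for scene understanding in driving. As a rough illustration of the inputs and outputs involved, here is a minimal inference sketch using torchvision’s pretrained DeepLabV3 model; this is my own example, not SegNet or Wayve’s code, and the image filename is a placeholder.

```python
# Illustrative semantic segmentation inference. Uses torchvision's pretrained
# DeepLabV3 rather than SegNet itself; the point is the pixel-wise output.
import torch
from PIL import Image
from torchvision import models, transforms

model = models.segmentation.deeplabv3_resnet50(weights="DEFAULT")  # torchvision >= 0.13
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((520, 520)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.open("street_scene.jpg").convert("RGB")  # any driving-scene photo
batch = preprocess(image).unsqueeze(0)                 # shape: (1, 3, 520, 520)

with torch.no_grad():
    logits = model(batch)["out"]                       # (1, num_classes, 520, 520)

# Per-pixel class id: every pixel gets a label such as road, car, person, ...
labels = logits.argmax(dim=1).squeeze(0)
print(labels.shape, labels.unique())
```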

Not my best cold email.

Alex said he was excited to get it running in a car soon, but wasn’t sure how.

Fast forward to 2017 when our Board Partner Drew Gray told me about a team of impressive University of Cambridge PhDs working on a new approach utilizing bleeding-edge machine learning techniques to get cars to drive themselves in novel environments. …


Turning daydreams of simulation into scalable synthetic data — Announcing Compound’s investment in AI.Reverie


A few years ago, while spending time with autonomous vehicle (AV) engineering teams, I kept asking what their largest bottlenecks to improving autonomy were. One constant answer was the need for more annotated training data. Taking sensor feeds and having humans annotate them in order to train deep learning perception models meant teams of hundreds, and even thousands, of people performing this time-consuming task.

At the same time, I started to learn more about simulation and the value that synthetic data could bring to this pipeline. I wrote about this extensively here and set out to make an investment in the space with the thesis that synthetic data would be a key accelerator in the arms race towards autonomy for AVs, and that the addressable market for other robots and autonomous systems would expand over time. …
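To make the pipeline concrete: the appeal of synthetic data is that labels come for free. When you render or composite the scene yourself, you already know where every object is, so the segmentation mask or bounding box is generated alongside the image rather than by a human annotator. The sketch below is a toy compositing example of that idea, not AI.Reverie’s actual rendering stack.

```python
# Toy illustration of why synthetic data removes the annotation bottleneck:
# because we place the object ourselves, the label (a pixel mask and a
# bounding box) is produced automatically alongside the image.
# This is NOT AI.Reverie's pipeline; real systems use full 3D rendering.
import numpy as np

RNG = np.random.default_rng(0)
H, W = 128, 128

def make_synthetic_sample():
    """Returns (image, mask, bbox) with the label generated for free."""
    image = RNG.uniform(0.0, 0.3, size=(H, W, 3))            # random "background"
    mask = np.zeros((H, W), dtype=np.uint8)

    # Place a bright square "object" at a random location.
    size = int(RNG.integers(16, 40))
    top = int(RNG.integers(0, H - size))
    left = int(RNG.integers(0, W - size))
    image[top:top + size, left:left + size] = RNG.uniform(0.7, 1.0, size=3)
    mask[top:top + size, left:left + size] = 1                # pixel-perfect label

    bbox = (left, top, left + size, top + size)               # (x0, y0, x1, y1)
    return image, mask, bbox

# Generate a small labeled dataset with zero human annotation.
dataset = [make_synthetic_sample() for _ in range(100)]
print(len(dataset), dataset[0][2])
```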


What the future of technology does to the future of trust.


I’ve been spending a lot of time thinking about how new forms of AI and machine learning will shape society over the next decade. One area I’ve gone deep on is how the widespread manipulation and generation of digital assets will manifest itself in our daily lives.

My main takeaway in my research thus far is this:

We are moving from a default trust society to a default skeptic society. Or put another way, while today humans largely trust what they hear/see, tomorrow they will not.

How did skepticism creep into society today?

In my lifetime I’ve seen this shift towards skepticism take place. ~10 years ago, people believed what they could see in relatively high fidelity and what they were told by major media outlets. Photoshop made people skeptical of implausible images. Smartphones, social media, and internet publishing have given rise to clickbait and “fake news,” making society in 2018 more skeptical about internet content (and the platforms that push it) than it was in 2017. …


Evolving thoughts on where the venture scale opportunity lies at the intersection of robotics and food service.


The promise of robots creeping their way into the kitchen continues to inch closer to reality. However, as a venture investor, I’ve been thinking a lot about the right implementation for the unique beast that is the food service industry.

When we think about using robotics to scale traditional industries, the question we have to answer, which Vijay Sundaram helped me think through, is: does the robot help the restaurant group scale so that a traditionally non-venture-scale business can achieve lower-cost scale through an unfair tactical advantage?


The key thing about robotics and automation broadly is that they create efficiencies, and restaurants have plenty of inefficiencies compared to traditional tech companies: margins are low, the product has a high time decay, and there is a wide range of persona profiles on the labor side. …
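One way to make that scaling question concrete is a simple payback calculation: how many months of labor savings does it take to cover the cost of the robot? The numbers below are hypothetical placeholders chosen purely to show the shape of the math, not data about any real deployment.

```python
# Back-of-the-envelope payback math for a kitchen robot.
# Every number here is a hypothetical placeholder, not real data.
def payback_months(robot_cost: float,
                   monthly_maintenance: float,
                   labor_hours_saved_per_month: float,
                   fully_loaded_hourly_wage: float) -> float:
    """Months until cumulative labor savings cover the robot's upfront cost."""
    monthly_savings = labor_hours_saved_per_month * fully_loaded_hourly_wage
    net_monthly_savings = monthly_savings - monthly_maintenance
    if net_monthly_savings <= 0:
        return float("inf")  # the robot never pays for itself
    return robot_cost / net_monthly_savings

# Hypothetical example: a $100k robot saving ~300 labor hours/month at $20/hr.
print(round(payback_months(100_000, 1_000, 300, 20.0), 1), "months")
```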


A quick note on our investment in Tia.

In a recent discussion related to the announcement of Tia’s Seed round, Erik Torenberg asked:

Investors in the company [@mhdempsey, @hunterwalk, @soopa, @soleio] can you talk about how you got there from an investment POV, and what, if any, initial reservations you had to get over? Consumer healthcare is hard.

My answer, taken directly from the On Deck Daily conversation, is below.

So in standard VC fashion I have to say, first, it was the team. I’m not going to go into the overly long blog post about how amazing Carolyn Witte and Felicity Yost are (though I could), but because of both who they are as people and their previous backgrounds, they are pretty uniquely suited to tackle a product and vision like this. …


Google, Tesla, Zoox, and many more plan to drive billions of miles autonomously with simulation


There are two major bottlenecks for building an autonomous vehicle (AV) today: generating supervised data and training/testing your model.

AV engineers train their models by feeding them enough data to react appropriately to a wide range of scenarios. These range from sunlight hitting sensors at different angles, to obstacles flying in front of the car, to strange behavior from other drivers, and much more.

The problem is that it isn’t easy, or safe, to replicate many of these scenarios in real-world environments.

A report by RAND found that AVs would have to be driven hundreds of millions, and sometimes billions, of miles to demonstrate acceptable reliability.
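In practice, simulation lets teams sample those rare scenarios deliberately instead of waiting to encounter them on real roads. The sketch below is a toy illustration of that idea: randomize scenario parameters (sun angle, weather, obstacle timing) and enumerate thousands of test cases cheaply. The parameter names are hypothetical, not any particular simulator’s API.

```python
# Toy illustration of scenario randomization for AV simulation.
# Parameter names are hypothetical; real simulators expose far richer controls.
import random
from dataclasses import dataclass

@dataclass
class Scenario:
    sun_elevation_deg: float         # low angles produce sensor glare
    rain_intensity: float            # 0.0 (dry) to 1.0 (downpour)
    obstacle_crossing_time_s: float  # when an obstacle cuts in front of the car
    lead_vehicle_braking_g: float    # how hard the car ahead brakes

def sample_scenario(rng: random.Random) -> Scenario:
    return Scenario(
        sun_elevation_deg=rng.uniform(0.0, 90.0),
        rain_intensity=rng.uniform(0.0, 1.0),
        obstacle_crossing_time_s=rng.uniform(0.5, 10.0),
        lead_vehicle_braking_g=rng.uniform(0.1, 1.0),
    )

# Enumerate thousands of edge cases cheaply; on real roads, many of these
# would be rare, dangerous, or impossible to stage on demand.
rng = random.Random(42)
test_suite = [sample_scenario(rng) for _ in range(10_000)]
print(len(test_suite), test_suite[0])
```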

About

Michael Dempsey

Want to live in a better future so investing in Frontier Technology @compoundvc. Learn more @ www.michaeldempsey.me
