Learnings from Zillow’s AI-Inflicted $10,000,000,000 Loss

Michael Proksch, PhD
5 min readJan 17, 2022


What happened? In the earnings call on November 3rd, Zillow’s CEO Rich Barton announced the shutdown of Zillow’s iBuying operations, its home-flipping business, to cut losses of a couple hundred million dollars. The shutdown will eliminate 25% of Zillow’s workforce; the announcement immediately sent Zillow’s share price down by about 25% and wiped out roughly $10 billion in market capitalization.

And that might not be the full extent of the story; it could haunt the company for years, as Zillow still owns thousands of unsold homes (it bought 9,680 homes in the third quarter and sold only 3,032). Once celebrated as a pandemic winner because of its central role in America’s soaring housing market, Zillow was unable to manage its operations successfully through the pandemic and pointed the finger at AI for failing to accurately predict housing prices and therefore “unintentionally purchasing homes at higher prices than our current estimates of future selling prices” (Zillow, 2021).

While one could blame the pandemic for that development, Zillow’s competitor Opendoor does not seem to have the same problem: its stock price gained over 100% between the start of the pandemic and October 2021 (Yahoo Finance).

What could have led to Zillow’s outcome? Zillow entered the home-flipping business in late 2019, hoping to use the massive data sets from its popular marketplace to profit from buying, fixing, and selling homes in high volume. But what started as a good idea quickly turned into a money pit during the pandemic. Zillow let homeowners sell instantly for cash, sparing them a painful bidding and closing process and the buyer-side bank-financing process, and offered competitive prices based on recent historical pricing. This led to Zillow winning more bids at higher prices than expected. Add the investment in repairs and maintenance during a tight labor market and supply chain problems, and Zillow found itself “drowning in a pool of underwater assets” (CNBC, 2021). An analysis of 650 Zillow-owned homes showed that two-thirds were listed for less than the company had paid for them (KeyBanc Capital Markets, October 2021), producing an average loss of more than $80,000 per house (MarketWatch, 2021).
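The bidding dynamic described above is a version of the winner’s curse, and a toy simulation (my own illustration with made-up numbers, not Zillow’s actual model) shows why it bites: even when price predictions are unbiased on average, an iBuyer whose offers are accepted only when its prediction is too high will, on average, overpay for every home it actually buys.

```python
import random

random.seed(42)

def simulate_ibuyer(n_homes=100_000, noise=0.05):
    """Toy simulation: the iBuyer bids its predicted value for each home;
    sellers accept only when the bid exceeds the home's true value.
    Predictions are unbiased but noisy (+/- ~5%)."""
    total_profit = 0.0
    homes_bought = 0
    for _ in range(n_homes):
        true_value = 300_000
        predicted = true_value * (1 + random.gauss(0, noise))
        if predicted > true_value:  # seller accepts only a rich offer
            homes_bought += 1
            # Buy at the (too high) predicted price, resell at true value
            total_profit += true_value - predicted
    return homes_bought, total_profit / max(homes_bought, 1)

bought, avg_profit = simulate_ibuyer()
print(f"homes bought: {bought}, average profit per home: ${avg_profit:,.0f}")
```

The model is right on average across all homes, yet every home it wins is one it overpriced, so the average deal loses money. Selection, not prediction accuracy alone, drives the loss.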

Barton said that Zillow was “unable to accurately forecast future home prices at different times in both directions by much more than we modeled as possible” and highlighted that “it boils down to […] our inability to have confidence in pricing in the future, enough confidence to put our own capital at risk”.

Zillow’s experience tells the story of how hard it can be to apply AI correctly and how its incorrect application can harm a company. However, is AI really the reason for Zillow’s experience, or is that the wrong conclusion to draw from the story? After all, AI is just a technology, and it worked, just not as intended, right?

While the story of Zillow shows an extreme loss due to an AI business application, many companies simply do not see a significant Return on Investment from AI. TechRepublic reports that 85% of AI projects fail (TechRepublic, 2019), VentureBeat writes that 87% of data science projects never make it into production (VentureBeat, 2019), and MIT Sloan reports that 7 out of 10 companies see no or minimal impact from AI (MIT Sloan, 2019). The assumption, however, that the missing Return on Investment of AI is technical in nature would be premature. More and more experts conclude that the challenges lie beyond technology (Forbes, 2020).

The three major challenges many companies new to AI struggle with are hard to see because they are so-called “unknown unknowns” (a phrase used by Donald Rumsfeld for “you do not know what you do not know”):

  1. a misunderstanding of what AI can do and how AI creates value,
  2. missing skills and knowledge beyond the area of technology and algorithms,
  3. not being able to foresee the risks of AI and how to control and monitor AI in a real-world environment.

The misunderstanding of AI

According to Accenture, 84% of business executives believe that AI is the solution to achieving their growth objectives, and 75% of C-suite executives believe that if they do not use and scale AI within their organizations in the near future, they risk going out of business (Accenture, 2019). However, many executives struggle to understand how AI actually works, how it can create value, and how they can deploy it within their organization (Güngör, 2020). As a result, organizations struggle to get the execution of AI projects right. Many enterprises lack an AI strategy, pursue the wrong business cases, miss a data foundation and data strategy, and lack the culture and ability to experiment with AI (Deloitte, 2019).

Missing skills and knowledge

The topic of AI is usually considered part of the STEM (science, technology, engineering, and mathematics) disciplines, and our education sector focuses on exactly those areas to train data scientists and engineers. However, that leaves out of university curricula topics that are crucial for the successful application of AI in the real world. Those topics are:

  • business, domain and process understanding,
  • data literacy and strategy,
  • data science project and stakeholder management,
  • organizational change management,
  • value creation through AI.

Furthermore, the narrow definition of AI as a topic of technology, engineering, and mathematics also creates a biased perception of performance measurement. While data scientists keep focusing on accuracy and model performance, the KPIs that should be used are Return on Investment and the successful business integration of AI solutions.
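A stylized example with made-up numbers shows why accuracy alone is the wrong KPI: two pricing models with identical mean absolute error can produce very different profits once the business mechanics, here the fact that overpriced offers get accepted while underpriced ones do not, are taken into account.

```python
# Hypothetical numbers for illustration: four homes, all truly worth $300k.
true_prices = [300_000] * 4

# Model A: symmetric errors (two low, two high)
preds_a = [290_000, 310_000, 290_000, 310_000]
# Model B: same mean absolute error, but it always overestimates
preds_b = [310_000, 310_000, 310_000, 310_000]

def mae(preds, truths):
    """Mean absolute error: the metric data scientists tend to optimize."""
    return sum(abs(p - t) for p, t in zip(preds, truths)) / len(preds)

def flipping_profit(preds, truths):
    """Buy at the predicted price whenever it is at or above the true value
    (otherwise the seller walks away); resell at the true value."""
    return sum(t - p for p, t in zip(preds, truths) if p >= t)

print(mae(preds_a, true_prices), mae(preds_b, true_prices))  # 10000.0 10000.0
print(flipping_profit(preds_a, true_prices))  # -20000
print(flipping_profit(preds_b, true_prices))  # -40000
```

Both models score the same on the data science KPI, yet Model B loses twice as much money, which is exactly the gap between model performance and business ROI.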

The risks of AI

When data scientists think about the risks of AI, they usually focus on overfitting (which makes models non-generalizable in a real-world context) or feature drift, which can degrade model predictions over time. However, the risks of AI are manifold. Common risks of AI include (in addition to Hall & Burt, 2019):

  • Opaqueness (not understanding what a model is doing),
  • Social discrimination (discrimination of groups of people),
  • Security vulnerabilities (AI can be compromised or misused),
  • Privacy harms (models can compromise individual privacy),
  • Model decay (model input changes over time can influence a model’s performance),
  • Wrong optimization target (targeting the wrong model outcome).
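Risks like model decay can at least be monitored. One commonly used drift metric, sketched below with synthetic numbers (not anything Zillow published), is the Population Stability Index (PSI), which compares a feature’s live distribution against its training-time distribution; a frequent rule of thumb is that PSI above 0.25 signals major drift.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a training sample ('expected')
    and a live sample ('actual'), using equal-width bins over the
    training range."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] = float("inf")  # catch live values above the training range

    def frac(data, lo_e, hi_e):
        n = sum(1 for x in data if lo_e <= x < hi_e)
        return max(n / len(data), 1e-6)  # floor to avoid log(0)

    total = 0.0
    for lo_e, hi_e in zip(edges, edges[1:]):
        e, a = frac(expected, lo_e, hi_e), frac(actual, lo_e, hi_e)
        total += (a - e) * math.log(a / e)
    return total

# Training-time prices vs. pandemic-era prices (synthetic, in $1000s)
train = [200 + i % 100 for i in range(1000)]  # stable range
live = [260 + i % 100 for i in range(1000)]   # distribution shifted upward
print(f"PSI: {psi(train, live):.2f}")
```

A monitoring job that recomputes PSI on incoming data and alerts when it crosses a threshold is one pragmatic control for the model-decay risk listed above.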

In the case of Zillow, it might have been a combination of several factors, such as opaqueness, model decay during the pandemic, and the wrong optimization target.

In conclusion, applying AI is more complex than integrating traditional software; it goes far beyond technology, engineering, and mathematics. The job of an AI leader is correspondingly complex: guiding an organization through the transformation to the successful and responsible application of AI. AI provides amazing opportunities to create business value, but only if used correctly.


Michael Proksch, PhD

As an AI enthusiast and SME, I get the chance to work with Fortune 1000 enterprises on AI transformation and value creation; more at: www.michaelproksch.com