The Augmented Homeowner

Kyle Sandburg · Strategy Dynamics · Nov 2, 2018

Computer vision and augmented reality can transform home ownership

An illustrative view of the Brady Bunch house

What is Computer Vision?

Computer vision refers to how computers use data and algorithms to decipher images, turning the attributes of an image into information. That information can then be used for a wide range of applications, including self-driving cars and image recognition.

Machines are, at their core, still simple machines. For computer vision to work, they treat an image as millions of pixels, each with its own set of color values. The machine converts those color values into an array that describes the image (as seen below).

Source: Algorithmia
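To make that concrete, here is a minimal sketch of the idea: an image loaded from disk becomes nothing more than an array of per-pixel color values that a machine can work with numerically. It assumes Pillow and NumPy are installed, and the file name house.jpg is just an illustrative placeholder.

```python
# A minimal sketch: an image is just an array of per-pixel color values.
# Assumes Pillow and NumPy are installed; "house.jpg" is a placeholder file.
from PIL import Image
import numpy as np

img = Image.open("house.jpg").convert("RGB")  # load and normalize to RGB
pixels = np.asarray(img)                      # shape: (height, width, 3)

print(pixels.shape)   # e.g. (480, 640, 3)
print(pixels[0, 0])   # the [R, G, B] values of the top-left pixel
```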

The computer will then pattern match the array against data it has seen in the past to classify the image. This is where machine learning comes into play. The system needs large amounts of data as inputs to train the model, and then large computational power to convert the image into an array and match that array against other data it has seen.

There are emerging approaches that require less training data, but the dominant approach is still a supervised learning model, where labeled training data powers a machine learning model.
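As a toy illustration of that supervised approach (not tied to any particular product in this article), the sketch below flattens each image into a vector of pixel values, trains a model on human-labeled examples, and then predicts labels for images the model has never seen. It uses scikit-learn's bundled digits dataset purely as stand-in training data.

```python
# Toy supervised image classification: flatten labeled images into pixel
# vectors, train a model, then classify images it has not seen before.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()                              # 8x8 grayscale images
X = digits.images.reshape(len(digits.images), -1)   # flatten to pixel vectors
y = digits.target                                   # human-provided labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print(f"Accuracy on unseen images: {model.score(X_test, y_test):.2f}")
```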

What is Augmented Reality?

Augmented reality is an overlay on your real world. This can include filters for photos like Snapchat, catching Pokémon, or a virtual tour guide. Augmented reality (I’ll include Mixed Reality in this) has the power to transform the way we interact with the world around us. While the concept is great, the implementation has struggled. The best applications have used the mobile phone, but these lack the full immersion that has been envisioned. The link below has a great video on what augmented reality could feel like.

Blippar uses computer vision to assess the physical space around a user and then overlays that environment with directions and images. Blippar created an experience in London’s Covent Garden where a user could scan a tag and see Santa fly overhead.

Why do we need it?

The concern I have is that this feels like the classic conundrum of a “solution looking for a problem”. While this is somewhat true, the initial use cases came from a need to handle tasks at a scale that would not be economically viable without a machine doing the work. Take image recognition as an example:

  • People can manually tag and identify hundreds of images, which helps to train a model
  • With the power of social networks, you can probably get tagging coverage to 50%, which is even better training data
  • With computer vision, you can start to approach 100% coverage, which opens the door to a number of applications

It is this approach that allows Google to add layers of data to Google Maps. Here is a link for a fascinating read on how Google is using technology to create maps:

As you layer in the augmented reality experience, it opens up a set of new opportunities. These experiences could include remote diagnostics, training, coaching, guidance, and more.

Education is an area where augmented reality could provide hands-on experiences to thousands of people at a time, versus the tens of people who get that experience at a university. This sort of 10x-1000x improvement could transform the way education is delivered, access to quality education, and the prevailing education structure. If a professor could reach 1000x more students with a personalized experience, you could see a world where the best professors become a brand in and of themselves. The institutions would take a back seat and essentially act as production companies, similar to movie studios in the entertainment world.

In a world where we are short on trained contractors, especially licensed trades like plumbers and electricians, it is possible to envision a future where, through AR/MR/VR solutions, a tradesman could get the hands-on education required to earn a license from the comfort of their own home. Interactive gamification could also make the experience more fun.

These remote training programs can also change the way homeowners engage with contractors on a project. At a basic level, you could reduce the cost of a project by not requiring multiple pros to come out to the house to provide an estimate. A more advanced solution would allow a homeowner to see what different materials would look like in the home. Some of these solutions are starting to show up, including offerings from Lowe’s and Houzz where you can see products overlaid in your home.

Source: Lowe’s

There are a couple of emerging examples I have seen at Porch that point toward other practical applications:

Lawn Vision example

Our problem was that we wanted a digital solution to quote the cost of a lawn care package for a homeowner. The cost to mow a lawn is based on the size of the lawn. One approach is to ask the homeowner to enter the size of their lawn. The challenge is that the average homeowner doesn’t know the size of their lot, let alone their lawn.

To solve this problem, we started with images from Google Earth to see the homeowner’s lot. We were able to use computer vision to assess the image of the property and determine the size of the yard. This data then allowed us to highlight the lawn for confirmation and provide an accurate estimate of the cost of the lawn care project.

Comparison of Google Maps with Porch Lawn Vision
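For flavor, here is a rough sketch of the kind of computation involved. This is not Porch’s actual pipeline; it simply classifies which pixels in an aerial image look like lawn and converts the pixel count to square footage using an assumed ground resolution. The file name lot_aerial.png and the SQ_FT_PER_PIXEL value are hypothetical placeholders.

```python
# Rough sketch (not Porch's actual pipeline): estimate lawn size by
# counting "grass-like" pixels in an aerial image and scaling by the
# image's ground resolution.
from PIL import Image
import numpy as np

SQ_FT_PER_PIXEL = 0.25  # hypothetical resolution from the imagery provider

img = np.asarray(Image.open("lot_aerial.png").convert("RGB")).astype(int)
r, g, b = img[..., 0], img[..., 1], img[..., 2]

# Very crude "is this pixel grass?" rule: green clearly dominates red and blue.
lawn_mask = (g > r + 15) & (g > b + 15)

lawn_sq_ft = lawn_mask.sum() * SQ_FT_PER_PIXEL
print(f"Estimated lawn size: {lawn_sq_ft:,.0f} sq ft")
```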

Streem Assistant

Earlier this year Porch partnered with Streem, a technology solution that makes it easy to get assistance in your home. We partnered with them to provide another layer of information so our Home Assistants can help homeowners with everything they need to get a project done. This is another great example of using computer vision to identify an object in the home and then layer additional information on top of it. In the image below, the homeowner is getting information related to a new range in their home.
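The pattern is simple to sketch: detect what object is in the camera frame, then attach relevant information to it as an overlay. The sketch below uses a stand-in detect_objects() function and a small PRODUCT_INFO lookup table; both are hypothetical placeholders, not Streem’s actual API.

```python
# Simplified illustration of "detect an object, then overlay information".
# detect_objects() and PRODUCT_INFO are hypothetical stand-ins.
from typing import List, Tuple

PRODUCT_INFO = {
    "range": "installation guide, owner's manual, compatible parts",
    "water heater": "maintenance schedule, warranty lookup",
}

def detect_objects(frame) -> List[Tuple[str, float]]:
    """Stand-in for a real object-detection model; returns (label, confidence)."""
    return [("range", 0.93)]

def annotate_frame(frame) -> List[str]:
    overlays = []
    for label, confidence in detect_objects(frame):
        if confidence > 0.8 and label in PRODUCT_INFO:
            overlays.append(f"{label}: {PRODUCT_INFO[label]}")
    return overlays

print(annotate_frame(frame=None))  # -> ['range: installation guide, ...']
```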

The Future

The idea of augmented reality is in its infancy. It is easy to imagine a future where, instead of hunting for what you hope is the right YouTube video for your home project, a remote specialist coaches you through the project and highlights what you need to work on. I had the fortune this year to try both AR and VR experiences related to work. The headsets will need to improve in style and size to reach mass adoption, but the software side is emerging as a powerful tool.

Source: Microsoft HoloLens

In Closing

We are at the start of a new set of experiences that leverage the power of computational learning to assist people in ways that, until now, have been reserved for sci-fi. Today the market is still mostly technology looking for a problem to solve. There are signs of this changing, and new computing paradigms are emerging that will transform and blur the lines of the way we engage with the physical world.
