Evolution of the Expert Opinion

How data availability and machine learning are making expert knowledge scalable

Data availability combined with machine learning is unlocking the next level of access to expert knowledge.

Experts and Non-Experts

To understand any qualitative measure, we typically rely on experts when possible. Doctors, professors, attorneys, and other specialists can observe and report the truth in their fields with a usually reliable degree of accuracy. Unfortunately, individual experts don't scale, so their knowledge is expensive and can be difficult to access.

What about areas where a high level of expertise isn't necessary? With the internet, crowdsourcing has come to play a massive role in knowledge sharing. Crowdsourcing is an old practice that has picked up steam in the 21st century thanks to the internet. Where we used to rely exclusively on food critics or people we know to tell us whether a restaurant is good, we can now use what the masses have reported on Yelp. Instead of relying on teams of researchers to compile encyclopedias, the internet has given us Wikipedia. Rotten Tomatoes has even found a way to "expert-source" movie reviews by aggregating crowds of professional critics.

Fields of knowledge where we previously relied on experts or our personal networks have been reshaped by crowdsourcing, and blockchain-based reputation systems may make crowd opinions more merit-based still. Non-expert opinions are everywhere; all a company needs to do is capture and aggregate them to make them useful. This makes qualitative knowledge about some subjects both inexpensive and accessible (even if attracting the crowd to the network can be difficult and expensive).

Non-expert opinions are plentiful but low quality; expert opinions are scarce but high quality. Through crowdsourcing, we have learned to make non-expert opinions valuable in aggregate. We are now starting to see a similar evolution of the expert opinion: experts are becoming scalable through data availability and machine learning.

When an expert makes a decision or analysis, they are using the data they can observe. As more data becomes available, experts are training machine learning algorithms to interpret its meaning accurately. This matters most for areas of knowledge the crowd is not qualified to judge. Crowdsourcing can't replace a medical diagnosis (imagine the internet community voting on why someone isn't feeling well) or evaluate water quality in developing countries, for example.

Real estate is a great example of where this shift is playing out. Automated Valuation Models (AVMs), which automatically appraise the value of real estate, are among the most valuable assets of large real estate companies such as Zillow (the Zestimate) and CoreLogic. These models are only possible because companies combine their expert knowledge of the factors that affect real estate prices with the data they have access to across the country.

[Figure: Example of inputs for an Automated Valuation Model]

There are still problems with these models: real estate values are complex, with impactful factors that cannot currently be captured in data. This is why human appraisers (the experts) are still necessary. As more data becomes available to AVMs, we can expect the knowledge gap to close.
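To make the idea concrete, here is a minimal sketch of how an AVM combines expert-chosen property features with observed sale prices. The features, prices, and linear model are all invented for illustration; real AVMs like the Zestimate use far richer data and far more sophisticated models.

```python
import numpy as np

# Illustrative training data (all values made up): each row is a property
# described by [square_feet, bedrooms, bathrooms, year_built] -- features
# an expert would identify as drivers of price.
X = np.array([
    [1400, 3, 2, 1995],
    [2100, 4, 3, 2005],
    [900,  2, 1, 1978],
    [1750, 3, 2, 2010],
    [2600, 5, 4, 2018],
], dtype=float)
y = np.array([210_000, 340_000, 130_000, 295_000, 450_000], dtype=float)

# Fit a linear model by ordinary least squares. The intercept is handled
# by appending a constant column of ones.
X_aug = np.hstack([X, np.ones((X.shape[0], 1))])
coeffs, *_ = np.linalg.lstsq(X_aug, y, rcond=None)

def estimate_value(features):
    """Return an estimated sale price for a property feature vector."""
    return float(np.append(np.asarray(features, dtype=float), 1.0) @ coeffs)

# Appraise a property the model has never seen.
print(round(estimate_value([1600, 3, 2, 2000])))
```

The point of the sketch is the division of labor: experts decide which factors matter (the columns of X), while the data and the fitting procedure determine how much each factor matters.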

Expert Opinion + Data + AI

When necessary data becomes available, experts can help train machine learning models to interpret the raw data and output meaning. This is going to create massive gains in the availability and affordability of knowledge in subjects that require high thresholds of understanding.

[Figure: Crowdsourcing's impact on low-knowledge-threshold subjects compared to Data + ML's impact on high-knowledge-threshold subjects]

Here at Spatial, we are following this model in a unique area: ethnography. Ethnography is the scientific practice of studying communities and cultures from the point of view of the subject. This type of study is valuable for any company trying to understand people and places. Our CEO, Lyden Foust, is an ethnographer who used to lead projects for corporations such as Procter & Gamble. He is an expert who was paid to live in and study communities around the country. The practice is expensive, and individual studies don't scale, which makes ethnographic insights inaccessible to most companies. Crowdsourcing is not a solution either: the practice requires training to deliver reliable, expert insights.

Spatial.ai is building an artificial intelligence solution to this problem. Location-based social media data has become available in the last decade. This data captures what people are experiencing from their own perspective, the ideal input for ethnography. We design machine learning models to categorize and interpret this data based on the behaviors and personalities communities are expressing. The output is then made available across the entire United States for our partners.
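The pipeline can be illustrated with a toy sketch: score geotagged posts against behavioral categories, then aggregate the scores into an area profile. The categories, keyword lists, and posts below are entirely invented for illustration; a production system would rely on trained language models rather than keyword matching.

```python
# Hypothetical behavioral categories and keyword lists (illustrative only).
CATEGORIES = {
    "fitness": {"gym", "run", "yoga", "workout"},
    "nightlife": {"bar", "club", "cocktails", "dj"},
    "family": {"kids", "park", "playground", "school"},
}

def categorize(post_text):
    """Count keyword hits per category for a single post's text."""
    words = set(post_text.lower().split())
    return {cat: len(words & keywords) for cat, keywords in CATEGORIES.items()}

def area_profile(posts):
    """Aggregate category signal across all posts from one area."""
    totals = {cat: 0 for cat in CATEGORIES}
    for post in posts:
        for cat, score in categorize(post).items():
            totals[cat] += score
    return totals

posts = ["great workout at the gym", "cocktails with the dj tonight"]
print(area_profile(posts))  # {'fitness': 2, 'nightlife': 2, 'family': 0}
```

Whatever the modeling approach, the structure is the same: individual posts carry weak, noisy signal, and aggregating them over a place yields a behavioral portrait of that community.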

More data becomes available every day via mobile, and IoT data is likely to have an even greater impact soon. As AI matures, we expect to see more companies employ similar approaches to make expert knowledge fully scalable and accessible. There is a massive opportunity for experts in scarce fields to team up with data scientists and machine learning engineers.

Today, for the first time, Spatial.ai is offering quantified ethnographic analysis at scale to companies in real estate, automotive, advertising, and smart cities.

If you'd like to learn more about applying this in your business, contact us through our website, www.spatial.ai.
