From Manual Appraisals to Automated Valuation Models (AVMs)

What is your building worth?

These days this question is answered by valuers at financial institutions, municipalities, and appraisal companies, who take the time to study the building itself, its location and the state of the market. In short, each building is valued according to its own merits in the eyes of the individual who is performing the appraisal.

Valuers pursue largely the same methodology in processing valuations: relevant features that are known to correlate with price are taken into account, and perceptions of the area remain somewhat consistent in the market — nevertheless, there is still a lot of room for subjectivity in the appraisals. This results in a margin of error relative to the “real value” of a building; MSCI estimates this margin at more than 10%.

Automated valuation models (AVMs) are tools designed to value buildings based on a historical database of comparable transactions and current physical infrastructure. AVMs are developed using a combination of geospatial and economic data, physical building characteristics and machine learning. The model that emerges is able to combat the time and cost inefficiencies of the standard valuation process and increase the consistency of valuation methodologies, without sacrificing precision — and possibly even improving it. Despite the emergence of such models in the real estate space, there has generally been reluctance within the industry to shift from a human-centred approach to an automated one, perhaps due to the cost of making the change, a lack of familiarity with the approach, and unease about machines taking over work in what are, generally, ‘non-tech’ sectors.

Tech and the Built Environment are integrating in more ways than one. Look closely.

As with all technologies, the positives and the negatives have to be carefully considered. AVMs offer a mechanism to take into account a large amount of information about each building and process it using machine learning (or a simpler form of modeling), in order to understand how each characteristic of the building and its surroundings drives the value of the building. In theory this is what a human valuer does as well, but in practice, it is simply impossible for a human being to calculate a value based on thousands of characteristics and hundreds of buildings.
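To make the idea concrete, here is a minimal sketch of how a model can learn which characteristics drive value. The data is synthetic and the feature names are invented for illustration; a real AVM would use far richer inputs than three columns.

```python
# Illustrative only: fit a simple valuation model on synthetic building data
# and ask it which characteristics drive the predicted value.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500
# Synthetic characteristics: floor area (m2), year built, distance to transit (km)
X = np.column_stack([
    rng.uniform(50, 5000, n),       # floor_area
    rng.integers(1950, 2020, n),    # year_built
    rng.uniform(0.1, 10.0, n),      # dist_transit
])
# Synthetic "transaction price": mostly driven by floor area, plus noise
y = (3000 * X[:, 0] + 2000 * (X[:, 1] - 1950)
     - 50000 * X[:, 2] + rng.normal(0, 1e5, n))

model = GradientBoostingRegressor(random_state=0).fit(X, y)
# The fitted model exposes the relative importance of each characteristic.
importances = dict(zip(["floor_area", "year_built", "dist_transit"],
                       model.feature_importances_))
print(importances)
```

Because the synthetic price is dominated by floor area, the model assigns it by far the largest importance; the same mechanism, at scale, is how an AVM surfaces which of thousands of characteristics actually move prices.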

The methodology behind standard appraisals can be made more thorough via the use of an AVM. Appraisals often review or reference 4–5 comparable transactions (“comps”): buildings similar to the one being appraised in terms of physical characteristics, location and general condition, transacted as recently as possible. This information is needed in order to determine the market value and sentiment at the time of the valuation.

However, it is difficult (if not impossible) to find truly comparable buildings, and an additional complication is taking the timing of the previous, comparable transaction into account.

The automation and computational power behind an AVM allows for the comparison of buildings based on the features they have in common, including both observable characteristics of a building and its direct surroundings, while mathematically “ignoring” other, irrelevant factors. In other words, basically every single transacted building can be considered as a comp, and an AVM is not limited to the immediate neighborhood and current timeframe.
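The “every transaction is a potential comp” idea can be sketched as a similarity ranking: standardize the features, downweight the irrelevant ones, and sort all past transactions by distance to the subject building. The features and weights below are illustrative assumptions, not an actual AVM specification.

```python
# Rank all past transactions by feature similarity to a subject building,
# instead of hand-picking 4-5 nearby recent sales.
import numpy as np

# columns: floor_area (m2), year_built, dist_city_centre (km)
transactions = np.array([
    [1200.0, 1995.0, 2.0],
    [1150.0, 1998.0, 2.5],
    [4800.0, 2010.0, 0.5],
    [300.0,  1960.0, 8.0],
])
subject = np.array([1100.0, 1996.0, 2.2])

# Standardize each feature so no single scale dominates the distance.
mu, sigma = transactions.mean(axis=0), transactions.std(axis=0)
z_tx = (transactions - mu) / sigma
z_subject = (subject - mu) / sigma

# Weights let the model "ignore" features judged irrelevant (weight 0).
weights = np.array([1.0, 1.0, 0.5])
dist = np.sqrt(((z_tx - z_subject) ** 2 * weights).sum(axis=1))
ranking = dist.argsort()  # indices of the best comps, closest first
print(ranking)
```

Here the two mid-sized 1990s buildings rank as the best comps, and the large new office and the small old building fall to the bottom, regardless of how close they happen to sit on a map.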

The main cost of setting up an AVM is the modeling infrastructure. Once the model is up and running, the valuation of a building takes only a few seconds (or less, by simply buying more processing power in the cloud…). Compare this to the lengthy process of visiting the building on site, reviewing the documentation and assessing the current state of the market; a process that can take several weeks, causing delays in related processes and transactions. Of course, a valuation and its model are not static: as the markets change, the parameters of any valuation, including the AVM’s, must change too. In order to always have the most accurate market valuation, the data collection and management process has to be dynamic and up to date, not a static exercise that takes place once per year.

It is especially interesting to contrast the types of error present in each methodology — traditional valuations versus AVMs. Within the standard appraisal methodology, the opportunity for human error, as well as bias at the level of the individual building, is large. An appraiser can, for example, assign a property a higher value simply because he or she was looking at less valuable properties the day before (the well-known behavioral bias of anchoring), and vice versa. With an AVM, the opportunity for error is contained in the quality of the input data: incorrect or outdated information can lead the model to make false predictions, and the effects propagate to the whole portfolio of buildings being valued.
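Since the AVM’s error budget lives in its inputs, a data pipeline typically screens records before any valuation runs. The checks below are a minimal sketch; the field names, thresholds and reference date are illustrative assumptions, not a description of any particular system.

```python
# Sketch of pre-valuation sanity checks on transaction records.
from datetime import date

def validate_record(rec, today=date(2019, 1, 1)):
    """Return a list of issues; an empty list means the record passes."""
    issues = []
    if rec.get("floor_area_m2", 0) <= 0:
        issues.append("non-positive floor area")
    if not (1800 <= rec.get("year_built", 0) <= today.year):
        issues.append("implausible year built")
    # Stale comps drag the whole portfolio's valuations with them.
    if (today - rec.get("transaction_date", today)).days > 5 * 365:
        issues.append("transaction older than five years")
    return issues

record = {"floor_area_m2": 950, "year_built": 2025,
          "transaction_date": date(2011, 6, 1)}
print(validate_record(record))
```

Flagging a record with a future construction year and an eight-year-old transaction date before it enters the model is far cheaper than discovering, afterwards, that every valuation in the portfolio was nudged by bad data.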

In an age of automation, it is important to consider the implications of tools such as AVMs. The bodies that require large numbers of valuations on a regular basis stand to gain the most practical benefit from such a tool. Consider that municipalities must value and rate all their constituent buildings on a regular basis for the recalculation of taxes. In addition, centralized data collection on the features of the municipality might allow for more informed planning decisions. Banks need to frequently assess the value of their loan books, both internally and for granting mortgages.

But also consider active REIT funds and hedge funds, which could now assess the current market value of a REIT portfolio relative to its stock price… Other financial institutions and portfolio managers who hold large real estate portfolios spread across wide geographic regions could also benefit from such a tool, as performing valuations at that scale is both impractical and difficult — in particular, understanding the various markets the buildings are situated in. AVMs could offer the means to standardize their portfolios and understand them at a more granular level.

Building an AVM is data science, not rocket science.

To train the model correctly, one needs transaction data, which remains stubbornly hard to obtain in the opaque world of commercial real estate. And even access to the staggering data streams being generated today can sometimes not be enough. Although it is inarguable that machines can process information and fit models faster and better than humans, the quality and usefulness of the resulting tool remains dependent on the human minds that went into it. Using a large set of actual transactions, enriched with hyperlocal information on amenities, demographics and economics, GeoPhy has developed an AVM for the commercial real estate market. The performance of the machine-learning-driven AVM beats that of traditional valuations. We can’t wait to show this to the market. Stay tuned.