Published in The DownLinQ

Robustness of Limited Training Data: Part 4

When it comes to the relationship between geospatial neural network performance and the amount of training data, do geographic differences matter? In a previous post, we examined this question by training the same building footprint model on varying amounts of data from four different cities: Las Vegas, Paris, Shanghai, and Khartoum. That produced a plot (Figure 1) of performance for each city, using either a model trained on that city alone or a model trained on the combined data of all four cities. In this post, we’ll take a closer look at two questions that went…
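As a rough illustration of the experimental setup, the comparison amounts to a nested loop over cities and training-set sizes, with a per-city model and a combined four-city model evaluated on each city. This is only a sketch: the function names (load_tiles, train_model, evaluate_f1) and the training fractions are placeholders, not the actual SpaceNet or CosmiQ Works tooling.

```python
# Hypothetical sketch of the experiment described above.
# load_tiles, train_model, and evaluate_f1 are assumed callables,
# not part of any real library.

CITIES = ["Las Vegas", "Paris", "Shanghai", "Khartoum"]
TRAINING_FRACTIONS = [0.1, 0.25, 0.5, 1.0]  # assumed subsample sizes

def run_experiment(load_tiles, train_model, evaluate_f1):
    """Return a dict mapping (city, fraction, variant) -> test score."""
    results = {}
    for frac in TRAINING_FRACTIONS:
        # Per-city models: train and evaluate on the same city.
        for city in CITIES:
            train_data = load_tiles(city, fraction=frac)
            model = train_model(train_data)
            results[(city, frac, "single-city")] = evaluate_f1(model, city)

        # Combined model: train once on all four cities,
        # then evaluate separately on each city.
        combined = [t for c in CITIES for t in load_tiles(c, fraction=frac)]
        model = train_model(combined)
        for city in CITIES:
            results[(city, frac, "combined")] = evaluate_f1(model, city)
    return results
```

Plotting the resulting scores against training fraction, one curve per city and per variant, gives a figure of the kind referenced above.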




Daniel Hogan, PhD, is a data scientist at CosmiQ Works, an IQT Lab.
