Hello, world.

When I fly, I always take a window seat. There’s something magical about seeing the world at 35,000 feet, whether it’s wheat fields, lakes, mountains, cities, forests, or clouds. A bird’s eye view of our blue dot is beautiful, but it also makes me wonder how much we really understand about the Earth as an organism and how humans affect that system.

At Descartes Labs, we believe that a better understanding of what’s happening on our planet is critical. If we want to change our planet for the better, a prerequisite is to have a global picture of what’s changing. We need to understand humanity’s environmental impact. We need to better optimize our food resources to feed the hungry. We need to know how cities operate as organisms so we can build the cities of the future. It’s exactly these kinds of insights that Descartes Labs is aiming to create.

Descartes Labs is a unique startup. We’re based in Los Alamos, NM, because our underlying technology and six founding scientists were incubated at Los Alamos National Laboratory with $15M and seven years of research. Though you probably associate Los Alamos with nuclear weaponry because of the Manhattan Project, there’s a lot of other core research going on at the Labs.

Humans make sense of the world around us primarily through our eyes — we then translate the visual stream reaching our brains into words. Wouldn’t it be great if computers could do that and apply that thinking to our planet?

Our team brings together leaders in computer vision, mathematics, and high-performance computing. In one form or another, our team has spent its career teaching computers to see the world.

Satellites have been taking pictures of the globe for over 40 years. Governments put up satellites that produce freely available data for scientific study, like the United States’ Landsat mission, which has launched seven satellites, starting in 1972. There are also a number of companies with commercial constellations of satellites that provide imagery. Some of these satellites have extremely high resolution, like DigitalGlobe’s WorldView-3, which takes pictures at up to 30-cm resolution — sharp enough to read the lettering on the side of a plane. And new startups like Planet Labs promise to take daily pictures of the entire land mass of the Earth with their constellation of cubesats — satellites that are about the size of a large loaf of bread.

There’s never been more data available about our planet. Unfortunately, less than one percent of satellite images are ever looked at.

Why? There are two reasons. First, the Earth is really, really big: about 150M square kilometers of land area. Thousands of image analysts couldn’t possibly annotate everything interesting. Second, we have a long time series, and what’s changing is just as important as what’s there. A picture of the Amazonian rainforest in 2015 looks very different from one taken in 1975.

The Descartes Labs team was the first to apply deep learning to satellite imagery, which will enable a faster, more accurate understanding of the Earth. By using this breakthrough in deep learning and remote sensing — the science of understanding aerial imagery — the team at Descartes is aiming to tackle the problem of better understanding the Earth.

Deep learning is a technique that is modeled on the human brain and therefore, like the human brain, is highly adaptable. Satellites take pictures in the human visible range, but also have lots of other sensors. It turns out that deep learning algorithms don’t care about what humans can see: data from all of the sensors can be used by computers to identify features on the Earth. Thus, the Descartes Labs team has pushed deep learning beyond the boundaries of social media into new realms and new sources of data.
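To make the idea concrete, here’s a minimal sketch — not Descartes Labs’ actual pipeline — of how a single convolutional filter can mix every spectral band of a satellite image patch in one operation. The band count, patch size, and random data are all invented for illustration; real sensors like Landsat record several bands beyond visible RGB (near-infrared, shortwave infrared, thermal), and a network’s filters treat them all identically:

```python
import numpy as np

NUM_BANDS = 7   # hypothetical: visible + infrared bands
PATCH = 32      # side length of the image patch in pixels
KERNEL = 3      # side length of the convolution window

rng = np.random.default_rng(0)
image = rng.random((NUM_BANDS, PATCH, PATCH))            # bands x height x width
weights = rng.standard_normal((NUM_BANDS, KERNEL, KERNEL))

def conv2d_single_filter(img, w):
    """Valid convolution of one filter over all input bands at once."""
    _, h, wd = img.shape
    k = w.shape[1]
    out = np.zeros((h - k + 1, wd - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # The filter mixes every band, visible or not, in one dot product.
            out[i, j] = np.sum(img[:, i:i + k, j:j + k] * w)
    return out

feature_map = conv2d_single_filter(image, weights)
print(feature_map.shape)  # (30, 30): one feature map distilled from all 7 bands
```

The point of the sketch is that nothing in the arithmetic distinguishes a red band from a thermal band — adding a sensor channel just adds a slice to the filter, which is why these models extend so naturally beyond what the human eye can see.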

However, we see satellite imagery as only the beginning of our larger mission of teaching computers to see and understand the world around them. Companies like Google have made it their mission to “organize all of the world’s information,” but current computer science techniques for text and graph analysis fail when dealing with real world data like images and video. This is incredibly important because imagery and video are being created at an ever-increasing rate. There are tons of insights that can be derived not from just understanding a single dataset, but combining and correlating data sets to achieve a more complete view of the world.

For now, Descartes Labs is focused on giving the human race better information for being stewards of this planet. We’re looking at projects like understanding which crops are grown where, gaining insight into our energy infrastructure, and tracking how cities grow and change over time. Expect us to regularly release interesting stories that we find as we explore the data output by our algorithms.

Mark Johnson, CEO and co-founder

p.s. FastCompany did a great writeup of us.
