A peek at Earth Index design notes

Jeff Frankl
Earth Genome
Apr 25, 2024 · 6 min read

Design and implementation of the Earth Index Alpha is well under way. Last month the team met to chart the course. Here is a selection of observations and challenges from my notes as we’ve grappled with building the UI.

User needs in two words: Find and monitor

The tools built at Earth Genome must not only look and feel great, but be designed to have real impact. To create the right kind of product, we need a deep understanding of what people need. In the past year, the team has had conversations with dozens of potential users, ranging from climate scientists to Indigenous communities to journalists. While the use cases vary from organization to organization, they boil down to two primary user needs:

“Help me understand the scope and location of the issue so I can communicate it to my community.”

“Tell me when the issue changes so I can quickly respond.”

Or in just two words: find and monitor. For example, a community concerned about the environmental impact of gold mining initially wants to find all instances of mining so they can understand the scope of the problem. Then they want to monitor their environment to be notified when a new mining site appears (or before it appears).

Find the right workflow

When I first approached Earth Index UI design, I asked, “How can I make this technology as accessible as possible?” This tool needs to be intuitive to users who are not necessarily familiar with machine learning or geospatial technology. The first concept I tried was a wizard experience: a step-by-step, linear process that walks users through to good results.

An early concept drawing for a wizard-style, linear experience.

But as I learned more about Earth Index, I realized that the wizard workflow doesn’t fit. I had expected that a user could click on a single object on the map and get back a full set of perfect results. In reality, the process is highly iterative, involving back and forth, decision making, and uncertainty. So I restructured the concept from a wizard into a labeling tool with superpowers.

Labeling with AI assistance: a user can select from several drawing tools to begin labeling the map. They can continue labeling manually or click “Auto Label” to generate AI predictions for visually similar areas.

At its foundation, Earth Index is a tool that allows users to label what they know and then get help from the model to speed up the process. In the future, as the underlying technology improves, I might revisit the wizard approach.

Refine gridded user interactions

The underlying data for Earth Index is organized in a grid of overlapping squares. This presents a challenge around user interaction: when a user clicks on the map to create a label, they are technically clicking on four squares because of the way they overlap.
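To make the geometry concrete, here is a minimal sketch of why a generic click lands inside four squares. It assumes squares offset by half their width in each direction (the actual grid spacing isn’t specified here), and the names are illustrative:

```typescript
// Assumed geometry: square cells of side `size`, with origins spaced
// half a cell apart, so each cell overlaps its neighbors by 50%.
interface Cell { x: number; y: number; size: number; }

// Return every grid cell whose footprint contains the point (px, py).
function cellsContaining(px: number, py: number, size: number): Cell[] {
  const step = size / 2;
  const cells: Cell[] = [];
  // Candidate origins lie within one cell-width below the point.
  const minX = Math.ceil((px - size) / step) * step;
  const minY = Math.ceil((py - size) / step) * step;
  for (let x = minX; x <= px; x += step) {
    for (let y = minY; y <= py; y += step) {
      if (px < x + size && py < y + size) cells.push({ x, y, size });
    }
  }
  return cells;
}
```

With 50% overlap, two candidate origins survive in each axis, so a typical point falls inside 2 × 2 = 4 cells — hence the ambiguity when a user clicks the map.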

An initial prototype of Earth Index made a call to the server each time the user clicked to calculate the square with the closest center to the cursor. A downside to that approach is that the user doesn’t see exactly which square they are labeling until the label gets created. I wanted to add a hover effect that highlighted the square before a user clicked it, to eliminate uncertainty and allow the user to label with confidence.

Enhancing user interactions: a responsive hover effect on Earth Index using vector tiles and Turf.js.

To make the hover effect feel snappy, I would need to calculate the closest square on the frontend instead of the server. I worked with the brilliant Hutch Ingold to build a vector tile layer representation of the grid. Because the vector tiles were already loaded on the frontend, I could find the four squares under the cursor without a round trip to the server. I used Turf.js to calculate the distance between the centroid of each square and the cursor; the square with the shortest distance is the one that gets labeled. The final result feels responsive and takes the guesswork out of labeling.
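As a rough sketch of that frontend step, assuming the four candidate squares have already been queried from the vector tile layer (plain Euclidean distance stands in here for Turf.js’s geodesic distance, and the type names are illustrative):

```typescript
// A square from the grid, described by its bounding box.
interface Square { id: string; minX: number; minY: number; maxX: number; maxY: number; }

// Centroid of an axis-aligned square: midpoint of its bounds.
function centroid(sq: Square): [number, number] {
  return [(sq.minX + sq.maxX) / 2, (sq.minY + sq.maxY) / 2];
}

// Return the square whose centroid is closest to the cursor position.
function nearestSquare(squares: Square[], cursor: [number, number]): Square {
  let best = squares[0];
  let bestDist = Infinity;
  for (const sq of squares) {
    const [cx, cy] = centroid(sq);
    const d = Math.hypot(cx - cursor[0], cy - cursor[1]);
    if (d < bestDist) { best = sq; bestDist = d; }
  }
  return best;
}
```

Running this on every mousemove event is cheap because only the four candidate squares are compared, which is what keeps the hover highlight responsive.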

Make sense of search results

When a researcher wants to understand the scale of an environmental issue, they aim to find every example of it in their area of interest. Each result of the nearest neighbor search that powers our prediction model carries a match confidence from 0–100%. Ideally there would be an algorithmically defined cutoff above which results are reliable; in practice, choosing the boundary where results are accurate and adequate is a human value judgment.

At the offsite last month, everyone explored concepts to address this issue in a mini design challenge. The best idea was a combination of histogram and filter that allows users to easily narrow down the predictions to an appropriate confidence level. As the user moves the slider, they can see the predictions immediately filter on the map, screening out poor results.

Users can quickly screen out poor results by using the filter to only see higher confidence predictions.
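A minimal sketch of that histogram-plus-threshold interaction (the types and function names are illustrative, not Earth Index’s actual API):

```typescript
// A search result with its match confidence, assumed to be in [0, 100].
interface Prediction { id: number; confidence: number; }

// Predictions at or above the slider's threshold get drawn on the map.
function visiblePredictions(preds: Prediction[], threshold: number): Prediction[] {
  return preds.filter(p => p.confidence >= threshold);
}

// Bucket counts for the histogram behind the slider (10-point bins by default).
function histogram(preds: Prediction[], binSize = 10): number[] {
  const bins = new Array(Math.ceil(100 / binSize) + 1).fill(0);
  for (const p of preds) {
    bins[Math.min(Math.floor(p.confidence / binSize), bins.length - 1)]++;
  }
  return bins;
}
```

The histogram gives users context for where the bulk of the predictions sit, while the filter applies the threshold instantly on the client, so dragging the slider re-renders the map without a server call.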

Signal need for high resolution imagery

The foundation models that power Earth Index predictions are trained on imagery from the Sentinel-2 satellite constellation, and Sentinel-2 is our default source for search. The benefits of Sentinel-2 imagery are cost and recency: it’s free, openly licensed, and updated every few days. Earth Index can also process higher resolution imagery, given sufficient access and budget. Other sources also raise the question of how effectively models trained on one imagery source transfer to imagery with different spectral characteristics and georeferencing.

Sentinel-2 imagery (left) compared to higher resolution Mapbox imagery (right) in the Amazon and Niger.

However, the biggest challenge for the UI is communicating to the user when a search is not successful and higher resolution imagery than Sentinel-2 might be necessary. This is not immediately obvious from a particular use case. For instance, artisanal gold mining in the Amazon stands out strongly from the surrounding forest, while in Niger, gold mining resembles many other linear features in the desert in low resolution imagery. I am still seeking a solution for helping the user understand what Sentinel-2 is good for and when they need to ask for higher resolution.

How have you handled similar design challenges?

Have you faced similar design challenges? How do you balance iterative task flows with a simple user experience? What kinds of signals do you provide to help users sift through search results and choose among data sources? Have you tackled challenges around map interaction or used frontend geospatial libraries to improve performance? I’d love to hear your experiences and your ideas for handling them.


Jeff Frankl
Earth Genome

Product Designer / UX Engineer. Building complex, data-driven applications for mission-driven organizations.