Ground control to major policy makers: Unlocking user-centric AI’s potential for development — Part 1
Today, different satellite constellations capture an image of every place on Earth at least once a day. These satellite images can help local actors better understand and make decisions based on what is happening on the ground. Whether for infrastructure and urban planning, environmental governance, or disaster recovery efforts, a bird's-eye view can reveal patterns not apparent from the ground.
Search for your place of interest on Google Earth Timelapse and you can see how things have changed on an annual basis from 1984–2018. This openly accessible platform distills thousands of images from NASA's Landsat satellites to visualize how the surface of the Earth has changed over time. This process can be automated by using AI to classify imagery from these large datasets, providing a powerful means to analyze, visualize, and track land use changes over time.
For example, the first two images show a high-level land use overview of Ho Chi Minh City, Vietnam, in 2018 and 1984. Both figures show the airport in the center of the city (green circle), but also the significant expansion in built-up area that has occurred in just over three decades. The final image zooms into the airport, showing the level of detail provided by NASA's 30 m Landsat resolution. This is enough to discern major road patterns, buildings, and green space blocks.
Google Timelapse overview of Ho Chi Minh City, Vietnam, 2018 versus 1984
While a human can generally classify which parts of these images are green space, water, or even built structures, the sheer amount of imagery makes it impossible for human analysis to scale without algorithmic assistance. Artificial intelligence (AI) algorithms can help automate this process, making it more efficient. Decision-support platforms can connect stakeholders with the outputs of AI imagery analysis to help them understand the land cover composition of urban areas and where land use changes have occurred.
While generating satellite image classifications with AI is both art and science, for the end user it boils down to a dish made of three main ingredients: satellite imagery, labeled "ground truth" data (i.e., "labeling" the class of interest), and AI algorithms. The question is then whether the results are fit for purpose: do they provide useful insights at an acceptable level of accuracy and resolution for the user on the ground? These considerations determine the required ingredients.
In tech speak, user-centric means building something that ultimately addresses an issue or process for an actual set of people, who may have limited technical specialization or only basic digital literacy. The use case refers to the insight or decision that users can actually apply the tool to in practice. For example, if the particular use case demands discerning whether solar panels are on roofs, higher resolution imagery will be needed than if the objective is simply to get an overall sense of green spaces. But putting this tool directly in the hands of users is likely to stimulate creativity around issues that matter to people on the ground!
Satellite imagery from different satellites can vary by spatial resolution (e.g., some satellites capture 30 cm resolution images, 100 times the resolution of the Landsat images shown above), spectral resolution (which bands of light the satellite captures, e.g., red, green, blue, and non-visible bands such as infrared), and temporal resolution (how often a satellite revisits a given location). Labeled "ground truth" data refers to points/pixels that have been correctly classified into a given scheme by human analysts. Finally, different AI algorithms must be trained to perform the desired classification task.
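To make the three-ingredient recipe concrete, here is a minimal sketch in Python of how a pixel classifier can be trained. It uses scikit-learn's random forest, one common off-the-shelf choice; the band values and labels are synthetic stand-ins generated from a toy rule, not real satellite data, so the example is self-contained.

```python
# A minimal sketch of the three-ingredient recipe: imagery (pixel band
# values), labeled "ground truth" pixels, and an off-the-shelf algorithm.
# The arrays below are synthetic stand-ins for real satellite data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Ingredient 1: "imagery" -- each pixel described by its spectral bands
# (e.g., red, green, blue, and near-infrared reflectance values).
n_pixels = 1000
bands = rng.random((n_pixels, 4))  # 4 bands per pixel, values in [0, 1]

# Ingredient 2: labeled ground truth -- a class for each pixel
# (0 = water, 1 = green space, 2 = built-up). Here the labels come from
# a toy rule standing in for a human analyst's classifications.
labels = np.where(bands[:, 3] > 0.6, 1,               # high NIR -> vegetation
                  np.where(bands[:, 2] > 0.5, 0, 2))  # high blue -> water

# Ingredient 3: an AI algorithm trained to reproduce the labels.
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(bands[:800], labels[:800])  # train on 800 labeled pixels

# Classify the remaining pixels and check accuracy against held-out labels.
accuracy = clf.score(bands[800:], labels[800:])
print(f"held-out accuracy: {accuracy:.2f}")
```

In practice the bands would come from actual satellite rasters and the labels from analyst-drawn polygons, but the fit/score loop, and the question of whether the held-out accuracy is acceptable for the use case at hand, is the same.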
The fact that entire PhDs are now being earned on which models are most fit for purpose for any given issue suggests that most of us will simply be looking to get what works off the shelf.
Find out in part 2 of our blog how satellite technology and data can be used by real people for real-life applications.
The image below is a land cover classification available online from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS), with each pixel covering an area 500 m on a side. It shows that by 2011, urban concentration extended beyond Ho Chi Minh City's boundaries, stretching into the Mekong Delta. User-centric tools can now help almost everyone derive granular and timely insights online.
Editor's Note: This blog is part 1 of a blog series. It aims to contribute to an ongoing conversation about how a set of converging cloud, AI, and satellite and aerial imaging technologies can be harnessed in an end-user-centric fashion to strengthen the local governance of public interest infrastructure, land, and natural resources. These types of GovTech decision-support platforms increasingly deliver fit-for-purpose capability at significantly reduced complexity and cost for users.