After a natural disaster, engineers face a colossal task: documenting the perishable data that records the damage to our infrastructure, so they can understand the consequences of the event and how the buildings performed. Teams spend weeks on-site and invest substantial resources, collecting thousands of photographs of buildings all day long; in the evenings, they spend more hours organizing the data to figure out what they learned. Purdue's Intelligent Infrastructure Systems Laboratory (IISL) is working to turn that time into minutes through artificial intelligence (AI) and make the process far more efficient.
We consider our system to be a Netflix for disaster images. The systems available for managing this data today are not suitable for engineering use: they are clumsy, and it is difficult to find specific data unless you already know where it is. We are making the data instantly searchable and accessible through sorting and visualization capabilities, so that valuable information can be uncovered quickly to assess damage, identify knowledge gaps, and improve our infrastructure in preparation for the next event.
Engineers need to be able to simply drop their images into a tool that will exploit AI to organize the data in the most useful ways. We have developed such a tool: ARIO, the Automated Reconnaissance Image Organizer, which sorts a large volume of images and assembles them into reports with a category assigned to each image, so people can see much more quickly what happened to a particular building. We also provide tools that let people search more widely through those images and readily find what they are seeking. The underlying framework, VISER (Visual Structural Expertise Replicator), uses AI to reproduce engineers' ability to analyze and organize the data.
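To illustrate the kind of organization described above, the sketch below groups a flat set of photo filenames into a category-keyed report. The category list, the filename-keyword classifier, and the helper names are all hypothetical stand-ins for illustration; in the actual tool, a trained neural network assigns the categories.

```python
from collections import defaultdict

# Hypothetical category labels; ARIO's real taxonomy is not specified here.
CATEGORIES = ["overview", "column damage", "wall cracking", "drawings"]

def classify_image(filename):
    """Stand-in for the AI classifier: matches a keyword in the filename.
    In the real tool, a trained neural network would assign the category."""
    for category in CATEGORIES:
        keyword = category.split()[0]
        if keyword in filename:
            return category
    return "uncategorized"

def build_report(image_files):
    """Group a flat collection of reconnaissance photos into a
    category-keyed report, one section per category."""
    report = defaultdict(list)
    for name in image_files:
        report[classify_image(name)].append(name)
    return dict(report)

photos = ["bldg12_overview_01.jpg", "bldg12_column_crush.jpg",
          "bldg12_wall_crack_03.jpg", "bldg12_misc.jpg"]
report = build_report(photos)
```

The point of the structure is that a reviewer opens a per-building report already sorted by damage type, rather than scrolling through thousands of unlabeled files.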
For example, we could easily compare the problems observed after the 2017 Mexico City earthquake with those documented after the 2018 Taiwan earthquake, to check for similarities in building codes and in design and construction techniques that might exacerbate or prevent certain types of damage. One could then build computer models of new buildings based on the findings, with the ultimate goal of improving standardized building codes.
At our lab, we focus on developing AI algorithms that empower structural engineers to learn from disasters much more efficiently. It is all about accelerating and expanding the way we review and learn from the vast amounts of data documenting the performance of buildings, so we can identify gaps in building design codes and address them to mitigate the effects of disasters on our infrastructure.
There are many sources for this type of data around the world, including the Earthquake Engineering Research Institute, the National Institute of Standards and Technology, the Natural Hazards Engineering Research Infrastructure, and the DataCenter Hub at Purdue. However, extracting information from them is generally a labor-intensive, manual process; none offers a capability for automatically organizing and searching the data like the one we built into our tool.
Funded by the National Science Foundation's Cyberinfrastructure for Sustained Scientific Innovation program, we are training deep neural networks to recognize and organize the data so people can more easily find what they want, a capability that is critical when datasets and architectural features differ widely from country to country and event to event. We are trying to give engineers the power to quickly extract the images they seek by looking across time and space, historic events, and geographic regions.
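A minimal sketch of the kind of cross-event, cross-region query this enables is shown below. The record fields and the `search` helper are assumptions for illustration, not the actual schema of our tools.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical metadata record; field names are illustrative only.
@dataclass
class ImageRecord:
    path: str
    event: str
    region: str
    taken: date
    label: str  # category assigned by the trained network

def search(records, event=None, region=None, label=None):
    """Filter an indexed image collection across events, regions, and
    damage categories, the kind of query the trained networks enable."""
    hits = records
    if event is not None:
        hits = [r for r in hits if r.event == event]
    if region is not None:
        hits = [r for r in hits if r.region == region]
    if label is not None:
        hits = [r for r in hits if r.label == label]
    return hits

index = [
    ImageRecord("mx_001.jpg", "2017 Mexico City earthquake", "Mexico",
                date(2017, 9, 22), "column damage"),
    ImageRecord("tw_014.jpg", "2018 Taiwan earthquake", "Taiwan",
                date(2018, 2, 8), "column damage"),
]
# Pull every photo of the same damage type, regardless of event or country.
same_damage = search(index, label="column damage")
```

A query like `search(index, label="column damage")` is exactly the kind of comparison across the Mexico City and Taiwan events described earlier, answered in seconds rather than weeks.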
Society is asking engineers to build some of the tallest structures in the history of the world, and this requires extending our designs well beyond what has been validated in the laboratory. Therefore, in reality, the world around us is our laboratory — we have to be able to learn from how these structures perform, so we can make our infrastructure resilient and safe for society. Then we must train the next generation in the fundamentals. It’s not enough to say computer models can just take care of this need. Whenever a natural disaster compromises the integrity of our buildings, we must be able to quickly determine what went wrong on the ground and what can be improved.
Shirley J. Dyke
Professor of Mechanical and Civil Engineering
School of Mechanical Engineering
College of Engineering, Purdue University