Analysts and AI: Bridging the Gap

Successful transition will require new training and standards

By Ben Conklin, Esri; Thomas Marchlevski, USAF; Tatyana Pasechnik, USAF; Scott Simmons, Open Geospatial Consortium; Joseph Sullivan, Ph.D., USC; Daniel Walton, Intterra; and Jeff Young, Lizard Tech

Artificial Intelligence (AI) has the potential to transform the role of geospatial intelligence (GEOINT) analysts, allowing them to expand capacity, create new analytic products, and deliver information faster with more thorough and complete analysis. Successfully transitioning these technologies from commercial industry will require new user experiences for intelligence production and the introduction of new training and standards. The community will need to develop verification and validation processes to trust the results of these technologies. The final proof of success will be willing adoption by analysts. A major challenge for AI advocates is how to implement AI technologies in ways that do not require massive workforce retraining. Ideally, the technologies would go further and reduce the amount of specialized training needed to become a GEOINT analyst.

The GEOINT Community has long been interested in AI-related technologies, but past attempts have not lived up to expectations. Early AI technologies were too inaccurate to augment a human analyst and often created more work than they saved. These early failures highlight that AI technologies must demonstrably improve life for analysts before being adopted into the mainstream.

Recent developments are more promising. Large companies such as Google, Microsoft, and Facebook are making major investments in AI, a field that spans reasoning, knowledge representation, perception, natural language processing, robotics, and machine learning (ML). ML has surged forward with recent developments in new algorithms in an area known as deep learning (DL). Much of this research has focused on object recognition within imagery. The promise of these new approaches has reinvigorated the GEOINT Community’s interest in AI.

Successful adoption of AI technologies has huge potential to assist the national security mission. Analysts are unable to keep up with the explosion in geospatial data. From small satellites to the Internet of Things (IoT), the world is constantly generating new geographic knowledge. The challenge is to use AI to assist analysts so that, instead of competing with a machine, they can compete with their main adversary, time: transforming the unknown into the known quickly enough to impact decision-making. With the proper application of AI technologies, analysts can be more productive and ensure their observations and foundation intelligence are up-to-date and accurate. They can derive new connections and insights from the data during their daily workflows. When working on predictive analytics, they can include possible outcomes to better understand situations. Reports and standard product lines become more up-to-date and of higher quality when routine work is delegated to a machine. The machine does the heavy lifting on basic tasks, and analysts take on the unique cognitive work.

Information science and AI have undergone tremendous advances in the last 20 years. DL has proven transformational in e-commerce applications of imagery, voice, and text analysis, and owes its success to the development of new algorithms modeled on human and animal cognitive and sensory processes (e.g., convolutional neural networks, or CNNs), faster processing with hardware exploiting highly parallelized graphics processing units (GPUs), and a massive increase in the volume of data available to train neural networks. Today, the pace of advancement is only accelerating due to the high availability of cloud-based AI platforms and the monetization of AI applications driving increased interest and investment.
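
As a rough illustration of the kind of model described above, the sketch below defines a small CNN for classifying image chips. It assumes the PyTorch library (the report does not name a framework), and the layer sizes and class count are illustrative only, not a recommended architecture.

```python
# A minimal CNN sketch, assuming PyTorch; layer sizes and the
# three-class output are illustrative, not from the report.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        # Convolutional layers learn local spatial filters, loosely
        # analogous to receptive fields in biological vision.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),              # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),              # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(torch.flatten(x, start_dim=1))

# GPUs accelerate exactly this kind of highly parallel arithmetic.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = SmallCNN().to(device)
chips = torch.randn(8, 3, 64, 64, device=device)  # a batch of 64x64 image chips
scores = model(chips)                              # shape: (8, num_classes)
print(scores.shape)
```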

Several benchmarks are used in DL research. One specific annual challenge is the ImageNet Large Scale Visual Recognition Challenge, which sets a benchmark for accurate image recognition. In 2012, the winning team’s accuracy rate jumped from 74 to 84 percent by leveraging CNNs and GPUs. By 2015, the rate had climbed to 96 percent. This type of progress is happening in all the related DL technology areas. This level of accuracy, and perhaps higher, is required if ML approaches are to be viable for automating many intelligence collection activities. In commercial applications, it is acceptable for valid conclusions to be missed. In intelligence applications, there is much less tolerance for missed conclusions, thereby requiring accuracies that exceed human performance.
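
For context, ImageNet results such as those cited above are commonly reported as top-5 accuracy: a prediction counts as correct if the true label appears among the model’s five highest-scoring classes. The sketch below computes that measure over synthetic scores; NumPy and the random data are assumptions made for illustration.

```python
# A sketch of the top-5 accuracy measure commonly used for ImageNet;
# the scores and labels are synthetic, for illustration only.
import numpy as np

def top_k_accuracy(scores: np.ndarray, labels: np.ndarray, k: int = 5) -> float:
    """scores: (num_samples, num_classes); labels: (num_samples,)."""
    # Indices of the k highest-scoring classes for each sample.
    top_k = np.argsort(scores, axis=1)[:, -k:]
    hits = (top_k == labels[:, None]).any(axis=1)
    return float(hits.mean())

rng = np.random.default_rng(0)
scores = rng.random((1000, 100))         # 1,000 samples, 100 classes
labels = rng.integers(0, 100, size=1000)
print(f"top-5 accuracy: {top_k_accuracy(scores, labels):.3f}")
```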

Technology Drivers for AI

There are three primary reasons for this advancement in technology. Computing power is increasing; with cloud computing and big data processing technologies, we can tackle problems of much larger scale. The IoT has made more data available to analytic processes. Finally, new algorithms such as CNNs are being proven and shared across a growing community of developers. Each of these advancements also creates unique challenges for the GEOINT Community as it works to realize the potential of AI.

Computing Power: The major developments in AI harness enormous computing infrastructure that the GEOINT Community is just now beginning to leverage. The addition of computing power to sensitive or classified systems will provide the resources needed to make AI practical. The more compatible that infrastructure is with commercial use cases, the easier it will be to adapt for GEOINT use cases.

Internet of Things: The IoT continues to be a major source of new geospatial information. Commercial companies are collectors and hosts for this type of information, and the algorithms they produce are tuned to work with the data they collect and to which they have unique access. The new algorithms and use cases that emerge from IoT applications could prove very useful for government applications as well.

New Algorithms: The development of new algorithms holds great promise, but these algorithms are primarily being created to solve consumer problems and are not specific to the intelligence mission. For the same promise to be realized in GEOINT, new algorithms will need to be developed that are capable of answering intelligence questions and leveraging multiple sources of intelligence data.

Challenges in Transitioning Commercial Technology

In addition to algorithm development, there is another hidden problem. Large commercial companies have an army of developers who can write code and tune algorithms; they have development systems that operate at scale. The government has experienced, trained analysts with advanced cognitive capabilities and intuition. A major challenge will be connecting these analysts with new user experiences for working with algorithms and datasets in easy-to-access ways. Ideally, the technologies should be as transparent to the user as possible.

To make the transition to leveraging AI technologies for GEOINT, the analyst workforce and the specific objectives (productivity, new analysis, speed, completeness, etc.) must be at the forefront for those implementing the technology. These new datasets and techniques will require a review of doctrine, organization, training, materiel, leadership and education, personnel, facilities, and policy. To gain value from AI, it must be integrated into the workforce and made a part of everyday life for analysts.

Changes in doctrine and organization will be required to create the correct structure for an AI-enabled workforce. The new types of data and technologies will stretch organizations that do not have adequate structure to support implementation. The new computing power will require cloud infrastructure along with the personnel to manage and maintain it. Improved productivity could result in smaller teams producing more output, or larger teams with fewer managers as exploitation functions become increasingly procedural.

Training and education will most likely have the largest potential impact on adoption. Implementation of these new technologies will shift some analysts from a processing role to a more cognitive one. They will have more time and access to more data to perform analysis and make inferences. This will be a new skill set for many analysts who have been trained in routine tasks such as feature identification. These new cognitive skills will have more value and will evolve as new algorithms are developed, requiring frequent retraining.

Facilities and policies will have to be adjusted; as more GEOINT comes from IoT sources, unclassified storage and processing environments will be essential. Even with possible cross-domain solutions, the volume of data collected from unclassified sources will continue to grow, and the ability to work in such environments will be mandatory. This impacts security policies and physical facility infrastructure. With increasing automation and growing delivery of results from AI, it will become important for analysts to understand the nature of the output information. Source data for AI will be increasingly diverse in complexity, accuracy, and provenance; analysts must have a meaningful understanding of the relative reliability of what goes into automated analysis. How does AI assign geospatial context to unstructured data, and what assumptions go into that process?
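
As a hypothetical illustration of that provenance question, the sketch below shows how a geospatial reference extracted from unstructured text might carry its source, its confidence, and the assumptions behind it, so an analyst can judge its reliability. The gazetteer, confidence value, and matching logic are invented for this example and do not reflect any particular system.

```python
# Illustrative sketch only: a record derived from unstructured data that
# carries the provenance and assumptions an analyst needs to weigh it.
from dataclasses import dataclass, field

# Hypothetical gazetteer: place name -> (latitude, longitude).
GAZETTEER = {"springfield": (39.80, -89.64)}

@dataclass
class GeoReference:
    text_snippet: str            # the unstructured source text
    place_name: str              # name the extractor matched
    lat: float
    lon: float
    source: str                  # where the raw data came from
    confidence: float            # extractor's own score, 0..1
    assumptions: list = field(default_factory=list)

def geolocate(snippet: str, source: str) -> list:
    """Naively match gazetteer entries in a snippet and record assumptions."""
    results = []
    for name, (lat, lon) in GAZETTEER.items():
        if name in snippet.lower():
            results.append(GeoReference(
                text_snippet=snippet,
                place_name=name,
                lat=lat, lon=lon,
                source=source,
                confidence=0.4,  # low: many places share this name
                assumptions=["gazetteer match only; no disambiguation"],
            ))
    return results

for ref in geolocate("Protest reported near Springfield today.", "public social media feed"):
    print(ref)
```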

AI technologies also vary in the types of algorithms used, how the systems are trained, the fashion in which poor results are pruned from the output, and what validation occurs to identify “good” results. Analysts will take on new responsibilities: developing training and validation data and selecting the tools or algorithms best suited to the task at hand.
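
One of those new responsibilities, preparing training and validation data, can be as simple in principle as the sketch below: hold back a portion of the labeled examples so the algorithm is judged on data it never saw during training. The file names, labels, and 80/20 split are assumptions made for illustration.

```python
# A minimal sketch of a train/validation split over labeled image chips;
# names, labels, and split ratio are illustrative assumptions.
import random

labeled_chips = [(f"chip_{i:04d}.tif", "vehicle" if i % 3 == 0 else "background")
                 for i in range(1000)]

random.seed(42)          # reproducible split
random.shuffle(labeled_chips)

split = int(0.8 * len(labeled_chips))
train_set = labeled_chips[:split]       # used to fit the model
validation_set = labeled_chips[split:]  # held out to measure accuracy

print(f"training examples:   {len(train_set)}")
print(f"validation examples: {len(validation_set)}")
```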

AI results must be verified, validated, and vindicated through the actions of the analyst. Critical to this process will be the establishment of a common set of terms and measures to describe the sources of information, the assumptions made by the AI, the mechanism to confirm or reject interim results, and the measure of accuracy of validation.

Finally, vindication of results must inform the AI process to improve workflow. Analysts have long been accustomed to developing clear metrics for spatial accuracy of analytic products from sensors; they will now need to develop similar metrics to rank and qualify AI-derived results.
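
Precision and recall are one plausible pair of such measures, though the report does not prescribe specific metrics. The sketch below computes both from detections an analyst has confirmed or rejected; the detections, confidence scores, and threshold are invented for illustration.

```python
# Illustrative only: precision and recall over analyst-reviewed detections.
# Each tuple: (model confidence, analyst verdict: True = confirmed).
reviewed_detections = [
    (0.95, True), (0.90, True), (0.80, False), (0.70, True),
    (0.60, False), (0.55, True), (0.40, True), (0.30, False),
]
total_true_objects = 6   # confirmed objects known to exist in the scene

threshold = 0.5
accepted = [verdict for conf, verdict in reviewed_detections if conf >= threshold]

true_positives = sum(accepted)
precision = true_positives / len(accepted)     # how many accepted results are real
recall = true_positives / total_true_objects   # how many real objects were found

print(f"precision at {threshold:.1f}: {precision:.2f}")
print(f"recall at {threshold:.1f}:    {recall:.2f}")
```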

The final test of AI’s value will be analyst adoption. The reality is AI hype has been present for some time. Automated feature extraction has been the great promise of computing for GEOINT since imagery was first stored on a computer. ML may be the solution that makes this a reality. A user-first approach to developing AI applications has the highest likelihood of producing solutions analysts and their customers will accept.

Analysts will need access to the technology in an environment in which they can integrate it seamlessly into their daily workflows. Instead of creating more work for them, the technology must reduce their challenges. This means it will need to be integrated into existing user experiences, augmenting current tools and processes. The workforce will not be able to transform overnight; a gradual transition to proven technologies is more realistic. We will know AI has reached its potential when analysts demand it on their desktops instead of being dragged into the future.

This article is part of USGIF’s 2018 State & Future of GEOINT Report. Download the PDF to view the report in its entirety and to read this article with citations.
