IBM Watson is a technology platform that uses natural language processing and machine learning to reveal insights from large amounts of unstructured data. Photo Courtesy of IBM.

Taming the Tedious, Overcoming the Challenging, and Simply Improving Our Daily Lives: A View of Deep Learning

By Peter Hanson; Lisa Spuria; Todd M. Bacastow; Cordula A. Robinson, Ph.D.; Barry Tilton; Robert Albritton; David Foster; and Daniel Bonnel

This article was originally published in USGIF’s State & Future of GEOINT Report 2017. Download the full report here.

Since the Industrial Revolution, technological advancements have continuously changed the relationship between human and machine. Automating processes, saving time, and increasing efficiency have transformed tedious daily tasks and profoundly improved the way humans live. With major advances in computing and access to an explosion of data, the next major change is the evolution of deep learning (DL), a subset of machine learning (ML). Advancements in algorithm development techniques and the availability of affordable hardware, primarily graphics processing units (GPUs), have enabled DL to improve, scale, and become increasingly useful for a variety of disciplines and communities beyond a select few scientists. By expediting the consumption, processing, and synthesis of once inconceivable volumes and varieties of data, DL offers analysts objective insight into the world and allows them to provide more complete and informed analysis in a fraction of the time. DL, like most technology, expands capabilities once thought out of reach.

Machines Complement Human Analysis

Starting with basic tasks, humans have grown to rely on the vast capabilities of machines. Gone is the era of visiting a bank to get money from a human teller; we now take for granted that ATMs are available anytime, anywhere. Although ATMs are very different from DL, human acceptance of machines' ability to augment human tasks had to begin somewhere. Fast-forward a few decades, and we find ourselves in a time of smarter machines that can take on more tasks, even those that are cognitive in nature.

Sophisticated machines have now moved us into an era in which the capabilities to process and sort huge quantities of information, recall thousands of details, and, most importantly, assess links and patterns and identify alternatives in data analysis have significantly surpassed what humans can do alone. This new era of DL provides an opportunity to augment and complement analysts' ability to make better decisions.

DL affords researchers, analysts, scientists, and even consumers many advantages.

· An objective analyst: While human analysts have always strived for objectivity in their analyses, providing an unbiased viewpoint continues to challenge even the best analysts. Humans can see alternatives or find and consider information that may refute their hypothesis. But personal biases often lie in the background of our consciousness, clouding our ability to see everything objectively. Human emotions that are difficult to overcome sometimes play a role in our analyses; we become enamored with our own theories, look for information that supports our thinking, and discard information that may contradict it. DL algorithms are designed to enhance human objectivity by automating the discovery and categorization of data, dispassionately bringing forth emerging correlations, new information, and weighted alternatives for consideration. Analysts make the final determination — but the power of an objective analytic partner offers richer results, a more balanced viewpoint, and a more rigorous justification for our own analysis.

· Pattern recognition: Training machines to recognize patterns opens entirely new opportunities not only to "offload" tedious tasks but to significantly enhance the ability to find new connections and insight in data that would have taken considerable human effort. While traditional methods required a human to hardcode algorithms, thereby recognizing only what a human already knows to look for, DL adapts and refines its algorithms to recognize patterns in data, enabling automated identification of anomalies, emerging trends, shifts in data, and connections not previously understood. This ability accelerates analytic efforts in medical research, anticipatory intelligence, automatic target recognition and image processing, financial markets, and many other fields. Pattern recognition does not replace human interaction with the data but provides analysts with valuable new insights more quickly than before.

· Alternatives in the data: The machine’s ability to rapidly assess huge amounts of data can assist the sorting process. In today’s information environment, analysts are pummeled with huge volumes of data. One myth regarding data is that it is all new and unique. Rather, analysts find themselves sifting through information that is duplicative, contradictory, erroneous, or only tangentially related to the topic of interest. This filtering process is burdensome to the human brain and hinders the ability to discern new and unique information. The unnoticed, truly new information is often the key to anticipating new and emerging trends and events. Analysts often find, when forensically reviewing data after an event, a missing piece of information that would have given them advanced insight. The answer is not always hidden in the data, but more often than not, what appears to be erroneous information is in fact critical. Using DL algorithms and the advantage of the machine’s data review capability, data can be rapidly sorted, filtered, and narrowed down to the key points. Those same algorithms can also identify potentially related data points that the human had not recognized or considered. The ability to quickly process and assess data to find new and unique patterns or anomalies, weigh and rank the alternatives, and calculate relationship probability makes DL immensely valuable to the analyst.

· Operating in the background: Machines can operate as “silent” aids to humans and can be finely tuned to operate “in the background.” Once the machine has been trained to understand the questions under review, the gaps in knowledge, and the data being sought, it can run automatically and bring forth the information required when needed. In some scenarios, one can see this partnership as more than just an aid but instead as a proactive process to cue humans to new information, warn of events, and point to emerging trends as necessary.
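The pattern-recognition point above — learned weights instead of human-hardcoded rules — can be sketched with a toy neural network. This is a minimal, invented illustration (not from the article): a tiny two-layer network learns the XOR pattern purely from labeled examples, a pattern no single hardcoded linear rule can capture.

```python
import numpy as np

# Toy illustration: a 2-8-1 neural network that learns XOR from examples,
# instead of a human hardcoding the rule. DL scales this same idea -- learned
# weights rather than handwritten logic -- to far larger data and models.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR labels

W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))  # input -> hidden
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                   # full-batch gradient descent
    h = np.tanh(X @ W1 + b1)            # hidden activations
    p = sigmoid(h @ W2 + b2)            # predicted probabilities
    d_out = p - y                       # cross-entropy gradient at the output
    d_h = d_out @ W2.T * (1 - h ** 2)   # backpropagate through tanh
    W2 -= 0.1 * h.T @ d_out; b2 -= 0.1 * d_out.sum(0)
    W1 -= 0.1 * X.T @ d_h;   b1 -= 0.1 * d_h.sum(0)

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(preds.ravel())  # the network has learned the XOR pattern from data alone
```

The same training loop, given different examples, learns a different pattern — which is precisely why DL is not limited to what a human already knows to look for.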
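The sorting-and-filtering workflow described in the "alternatives in the data" bullet can be sketched as a simple pipeline. This is an invented example; the stand-in keyword scorer is a placeholder for a trained model's relevance score.

```python
import hashlib

# Minimal sketch of pre-filtering a flood of reports: collapse exact
# duplicates, then rank the remainder by a relevance score so the analyst
# sees unique, high-value items first. A trained DL model would supply the
# scoring function; here it is a keyword stub.

def dedupe_and_rank(reports, score):
    """Drop exact-duplicate texts, then sort the rest by descending score."""
    seen, unique = set(), []
    for text in reports:
        digest = hashlib.sha256(text.encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(text)
    return sorted(unique, key=score, reverse=True)

reports = [
    "Convoy observed at the border crossing.",
    "Convoy observed at the border crossing.",   # duplicate, filtered out
    "Routine weather update.",
    "Unusual night-time activity at the mine.",
]
# Stand-in relevance scorer; a trained model would replace this
keywords = ("unusual", "convoy")
score = lambda t: sum(k in t.lower() for k in keywords)
print(dedupe_and_rank(reports, score))
```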

Continuing this move into a more cognitive realm in which machines do some thinking for us provides innumerable opportunities for advanced analysis and problem-solving. This move should be welcome; however, humans remain skeptical of this new world on many levels. Part of their discomfort relates to the technology and its level of maturity. Another part is something more fundamental — some feel threatened by these new capabilities and remain concerned that this evolution is about replacing the human. As referenced before, the ATM has enabled bank customers to have ready access to money on their own terms and schedules, freed up bank employees to deal with more difficult customer questions, and saved banks money by not employing as many tellers. But advances in DL go well beyond the ATM example. Deeper analytics that provide decision advantage, smarter solutions, and lifesaving answers are within our grasp and already in practice.

Current Uses of Deep Learning

Medical

Keeping up with the massive flow of research data on breast cancer is a challenge for scientists. In the 2016 Camelyon Grand Challenge, a team from Harvard Medical School's Beth Israel Deaconess Medical Center used DL to identify metastatic breast cancer. The competition aims to determine how algorithms can help pathologists better identify cancer in lymph node images. The results validated the Harvard team's hypothesis: Human analysis combined with DL achieved a 99.5 percent success rate, demonstrating that a pathologist's individual performance improves when paired with artificial intelligence (AI) systems. This signals an important advance in the practice of identifying and treating cancer.

Automated Crop Management

Blue River Technology developed a DL solution called LettuceBot, which analyzes crops from a tractor-mounted system, photographing 5,000 young plants a minute and using algorithms and machine vision to identify each sprout as lettuce or a weed. LettuceBot helps farmers combat converging trends: the increasing resistance of weeds to herbicides and the decline of available chemical treatments. LettuceBot technology can help farmers reduce chemical use by 90 percent. LettuceBot is already used in fields that provide 10 percent of the lettuce supply in the United States.

Geographic Object-Based Image Analysis (GEOBIA)

DL has enabled Geographic Object-Based Image Analysis (GEOBIA) to become a powerful tool in efforts to detect and confirm conflict-mining activities in the Democratic Republic of the Congo (DRC). While standard change detection of remotely sensed data can identify potential mining sites, confirmation of the activities, characteristics, and likely purpose of those mining sites is difficult to ascertain without additional contextual knowledge. Developing the geographic object-based models and analysis that provide this further contextual knowledge is labor intensive without the assistance of automated iterative image classification and the partitioning of imagery into image objects. DL enables GEOBIA approaches to scale to larger geographic study areas, more complex object-based modeling, and more accurate and sophisticated intelligence. In the DRC, DL has informed targeted military action and focused humanitarian aid designed to combat illegal mining activity and reduce associated violence.

Automated Radio Frequency Spectrum Management

Scientists, engineers, and analysts are applying DL to acoustic wave propagation modeling, sonar analysis, and various segments of the radio frequency (RF) spectrum. Imagine a battlespace in which troops did not have to worry about spectrum management. An automated system would assign RF frequency, mitigate interference, and deny enemy intrusion into U.S. communications networks. DL can help declutter an increasingly crowded battlespace spectrum by learning the radio and spectrum usage patterns of friendly forces, enemy forces, and local citizens. DL models can learn these patterns and make decisions on asset allocation much more quickly than humans can. As wireless technologies become more common in the battlespace, spectrum collaboration and frequency de-confliction will become impossible for humans to manage. DL models and other artificially intelligent systems will be required to augment human spectrum managers.

Crowdsourcing and Deep Learning

Data sets for training and testing remain a key input needed for DL since, like humans, this form of AI learns from examples and reinforcement. The training sets are the benchmarks from which the algorithms learn. This training data can be generated by human input and curation, and crowdsourcing is one technique used to generate large amounts of training data.

In the past few years, crowdsourcing has become a reliable means to rapidly generate geospatial data when groups of people are organized around a common goal, such as response efforts during or following natural disasters. Online mapping platforms such as OpenStreetMap and Tomnod provide users with simple tools to identify and map features from overhead imagery. Such map data helps aid workers more quickly and completely assess damage and render assistance where it is most needed. Crowdsourced geospatial data sets can also serve as inputs to train DL models by providing examples of map layers such as buildings and roads that can be automatically extracted. Using DL to automate feature identification and extraction will improve the speed, scale, and frequency that such geospatial data can be generated, which, in turn, fuels downstream analytics.

Crowdsourcing can also be part of an iterative process to improve model performance. If the accuracy of such models falls below a certain threshold, additional crowdsourced examples can be fed back into the algorithm to improve results via a process of active learning. As a DL algorithm trains on more data, its accuracy typically increases; accuracy similarly improves with additional passes through the data. Predictions can be scored against ground truth as true positives, false positives, true negatives, and false negatives, and a high false-negative rate signals the need for further refinement.
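This feedback step can be sketched in a few lines. The example below is illustrative (the function names and threshold are invented): a model's predictions are scored against crowd-verified labels, and a high false-negative rate triggers another round of crowdsourced training data.

```python
# Illustrative sketch: score predictions against crowdsourced "truth" labels,
# then flag when more training data is needed -- the feedback step in an
# active-learning loop.

def confusion_counts(predictions, truth):
    """Count true/false positives and negatives for binary labels."""
    tp = sum(1 for p, t in zip(predictions, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(predictions, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(predictions, truth) if p == 0 and t == 1)
    tn = sum(1 for p, t in zip(predictions, truth) if p == 0 and t == 0)
    return tp, fp, fn, tn

def needs_more_training_data(predictions, truth, max_fn_rate=0.05):
    """Trigger another crowdsourcing round when the model misses too many
    real features (a high false-negative rate)."""
    tp, fp, fn, tn = confusion_counts(predictions, truth)
    fn_rate = fn / (tp + fn) if (tp + fn) else 0.0
    return fn_rate > max_fn_rate

# Example: model predictions vs. crowd-verified labels for 8 image tiles
preds = [1, 0, 1, 1, 0, 0, 1, 0]
truth = [1, 1, 1, 0, 0, 1, 1, 0]
print(confusion_counts(preds, truth))          # (3, 1, 2, 2)
print(needs_more_training_data(preds, truth))  # True: 2 of 5 real features missed
```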

Beyond serving as a means to produce training data, DL can in turn improve crowdsourcing performance. Such algorithms can work behind the scenes to identify key contributors in the crowd, identify patterns in data or analyses, or focus the crowd on analytic tasks that require human input to improve the performance of the algorithm by focused tuning. The ability to use DL techniques to improve crowdsourcing is one of the ways to overcome quality control issues inherent to engaging large crowds of contributors with varying backgrounds and levels of experience.

Crowdsourcing serves as an important means both to supply DL with training data and to improve the performance of algorithms. ImageNet, one of the largest corpora of labeled training data for computer vision, used Amazon's Mechanical Turk to label millions of photos. Within the geospatial domain, similar crowds are used to tag objects in satellite imagery to generate training data. Though manual crowdsourcing has proven effective for data generation over relatively small geographic areas for short periods of time, automating feature extraction will make geospatial data sets more complete and current. More importantly, automating the creation of foundational map features and straightforward analytic tasks will free humans to contribute specialized knowledge and focus on more complex analysis.

Unstructured Data

Inconsistent (or entirely absent) critical metadata is one of the biggest hurdles for GEOINT to move beyond exploiting large-format images from scheduled and controlled collectors to exploiting images from smartphone cameras and other mobile equipment. Similarly, many reports, stories, articles, and other text-based communications contain information from which location and time could be derived, but in which that information is not explicit. Both circumstances are opportunities to employ unstructured data exploitation techniques. Smart search algorithms designed to discover location and timing information in data should be developed and employed to provide context for the nearly universal public data stream. This data can also be correlated with information from formal collectors such as commercial satellites for even greater utility.
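A minimal sketch of such a "smart search" step, assuming coordinates and dates appear in common textual forms (the patterns and sample report are invented for illustration; production geoparsers handle far more variation):

```python
import re

# Pull latitude/longitude pairs and ISO-style dates out of free text so
# unstructured reports can be georeferenced and time-tagged downstream.

COORD_RE = re.compile(r'(-?\d{1,2}\.\d+)[,\s]+(-?\d{1,3}\.\d+)')
DATE_RE = re.compile(r'\b(\d{4}-\d{2}-\d{2})\b')

def extract_context(text):
    """Return coordinate pairs and dates found in a block of text."""
    coords = [(float(lat), float(lon)) for lat, lon in COORD_RE.findall(text)]
    dates = DATE_RE.findall(text)
    return {'coords': coords, 'dates': dates}

report = "Survey team reached 1.566, 28.267 near the site on 2017-03-14."
print(extract_context(report))
```

Extracted coordinates and timestamps become the structured metadata that lets this text be cross-referenced with formal collection, such as commercial satellite imagery of the same place and time.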

Convolutional neural networks and other DL models greatly outperform traditional computer vision techniques at tasks such as image recognition, object detection, and feature extraction. Image analysis is a sweet spot for modern DL approaches. However, DL can be applied to many data formats beyond ubiquitous raster imagery. Analysts can significantly speed up analytic workflows by applying DL models to synthetic aperture radar (SAR), interferometric SAR, and other active, signals-based data sources.
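The core operation behind convolutional neural networks can be shown in miniature. In this invented example the filter is fixed (a vertical-edge detector) purely for illustration; in a real CNN the filter weights are learned from training data.

```python
import numpy as np

# Slide a small filter over a raster to produce a feature map -- the building
# block of a convolutional neural network layer.

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, as used in CNN layers."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 5x6 raster: dark on the left half, bright on the right half
image = np.array([[0, 0, 0, 1, 1, 1]] * 5, dtype=float)
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)  # vertical-edge detector

feature_map = conv2d(image, sobel_x)
print(feature_map)  # strong responses only where the left-to-right edge sits
```

Stacking many such filters, learning their weights, and interleaving nonlinearities is what lets CNNs extract buildings, roads, and other features from imagery at scale.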

Conclusion

In an era of smarter machines that have eased the burden of tedious daily tasks, humans are faced with the need to accept systems with cognitive capabilities now being developed. We can resist the introduction of DL into our everyday lives and continue to drown in masses of data, or we can embrace the opportunity to reach new levels of data analysis, solve previously insurmountable problems, and enrich our daily lives with the assistance of DL algorithms.

To learn more about USGIF, visit the Foundation’s website and follow us on Facebook, Twitter, or LinkedIn.


USGIF is a 501c3 nonprofit educational foundation dedicated to promoting the geospatial intelligence tradecraft.