NGA
Published in The Pathfinder
Sep 7, 2017

By Anne Arians, National Geospatial-Intelligence Agency Research

Human problem-solving may be nearing superhuman speed. Estimates vary wildly about how many calculations per second the human brain can perform, but the world’s fastest supercomputer may already do more. And it can do those calculations 24 hours a day, 365 days a year, with a larger memory capacity.

Computers are still far from replicating human brain capabilities, but human thought has never before been supported by so much supplemental power, and the technological wonder of high-performance computing is changing everything from social media to business services to national security intelligence. At the National Geospatial-Intelligence Agency, spatiotemporal computation at the massive scale made possible by supercomputing will drive unprecedented analytic transformation that will accelerate the decision advantage of the agency’s customers.

UNDERSTANDING HPC

In the simplest terms, supercomputers perform at extremely fast rates with greater processing capacity than other computers. The supercomputing revolution began in the early 1960s with the introduction of Seymour Cray’s CDC 6600 high-performance computer, which processed over three million instructions per second. Cray’s approach was distinguished by a circular design that decreased signal delays and by parallel processing that accomplished multiple tasks simultaneously.

The supercomputing performance of earlier decades is now equivalent to that available in today’s smartphones. Tasks that once required rooms of hardware can now be performed on devices that easily fit inside a shirt pocket. Cray’s parallel processing approach is still key, however: high-performance computing is often achieved by linking multiple computers in a parallel architecture that can simultaneously process huge volumes of data for scientific or engineering tasks — back to huge rooms of hardware, but now with exponentially greater computing capabilities.
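
To make the parallel idea concrete, here is a minimal sketch in Python, with invented data and not representing anything NGA actually runs, showing how a large task can be split into chunks that independent worker processes handle simultaneously before the partial results are combined.

```python
# Minimal illustration of data-parallel processing: split a large dataset
# into chunks and let several worker processes handle them at the same time.
# This is a toy sketch, not a supercomputer workload.
from multiprocessing import Pool

def process_chunk(chunk):
    # Stand-in for real work, e.g., filtering or transforming sensor records.
    return sum(value * value for value in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunk_size = 100_000
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

    with Pool() as pool:                 # one worker process per available CPU core
        partial_results = pool.map(process_chunk, chunks)

    print(sum(partial_results))          # combine the partial results
```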

The leading-edge HPC systems are continuously displaced by newcomers with new technologies. ISC Group, one of the world’s most prominent providers of services that drive leading-edge technologies, has tracked this international race since 1986 and has published a “Top 500” list since 1993. Just a few years ago, government and industry supercomputers in the United States held most of the higher positions. As of June 2017, U.S. systems still account for more than a third — 37 — of the top 100 on the list and five of the top 10. Of note, U.S. private-sector giant Facebook ranks at 31. The two top-ranked HPC systems on the list are both in China. All five U.S. systems in the top 10 are in Department of Energy national laboratories. (See box.)

In some ways the Top 500 list is increasingly an anachronism. The systems are ranked using the well-known LINPACK floating-point benchmark, which reflects the original dominant use of supercomputers for the modeling and analysis of physical systems. While such benchmarks have been invaluable in providing metrics that have driven HPC research and development, they have also constrained the architecture and technology options for HPC system designers. The HPC benchmarking community is moving beyond the traditional raw-speed benchmarks with newer measures, such as the Graph 500’s focus on data-intensive analysis of large graphs and the Green 500’s focus on electrical power efficiency. The massive computing capabilities of major nontraditional HPC companies such as Google and Microsoft — absent from the Top 500 tally — and the increasingly prevalent computational demands of artificial intelligence and machine learning will change how these lists are assembled in the future.

RESHAPING WHAT’S POSSIBLE

Computational challenges have always demanded more computing power than was readily available. Challenging scenarios have included designing GPS systems, weather prediction, pharmaceutical design, aircraft design, stewardship of our national nuclear stockpile, cryptography, and oil and gas exploration. Solving these critical challenges motivated the development of HPC systems, which in turn demand innovations in underlying fabrication technologies, in computing architectures and in the software that exploits their raw power.

Capabilities invented and rigorously evaluated at the forefront of HPC eventually become available in the consumer marketplace. The raw speed of a Cray supercomputer of the 1980s is available today in any consumer smartphone. The visualization and data discovery technologies invented for HPC data analysis in the early 1990s underpin today’s web browsers. Data management and distribution, driven by massive science and computational experiments, led to technologies that provide us with social media and the delivery of entertainment on demand, wherever we are.

EXPLORING HPC AT NGA

Today’s social media providers routinely handle data and computing loads that far exceed existing demands within NGA. This is great news for NGA, its customers and its analysts, according to NGA Research Director Peter Highnam, Ph.D. It means that others have created technologies that the agency’s tech teams can build on to provide robust automation and other capabilities to fully exploit the growing volume of diverse data in a secure and timely manner.

The volume of relevant data is growing at breakneck speed, according to NGA Director Robert Cardillo.

“…[W]hether our new persistent view of the world comes from space, air, sea or ground, in five years, there may be a million times the amount of geospatial data that we have today. Yes, a million times more,” Cardillo said in his keynote address at the 2017 GEOINT Symposium.

He declared that NGA does not fear “the data deluge.”

“Managed smartly and efficiently, it’s the solution [to understanding the world],” Cardillo said, “but it’s going to require us to change.”

The agency will process that vast amount of data with HPC tools, ready access to HPC-class resources and an HPC mindset, Highnam said. NGA already has a variety of HPC experience, according to Highnam, albeit none at the scale of traditional supercomputing applications or of the social media companies. He said NGA has explored the use of HPC for space and airborne sensor data processing, machine learning, physical sciences modeling and data science, and has also executed HPC experiments in partnership with several DOE national laboratories.

“We have substantial opportunities to put HPC to good use, positive experiences with specific applications, and a vibrant external HPC ecosystem to tap and build upon,” Highnam said.

HPC assets can be remote, such as Facebook’s data center in northern Sweden, which takes advantage of the cold climate for cooling and of local electrical power. HPC work can also be “farmed out” over multiple HPC systems for responsiveness and resilience — e.g., Amazon Web Services, Google. Traditional HPC computations, by contrast, often must run at a single large site. Systems that support approaches such as computational steering place data analytics and visualization close to the end user while drawing on potentially remote compute capabilities.
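
As a loose illustration of the “farming out” idea, the sketch below distributes independent jobs across several compute endpoints and falls back to another endpoint if one fails. The endpoint names and the run_job function are hypothetical placeholders for whatever remote services an organization might use; this is not NGA’s architecture or a real cloud API.

```python
# Toy sketch of "farming out" independent jobs across several compute
# endpoints, with a simple failover for resilience. Endpoint names and
# run_job are invented placeholders, not real NGA or cloud services.
import random
from concurrent.futures import ThreadPoolExecutor

ENDPOINTS = ["site-a", "site-b", "site-c"]   # hypothetical compute sites

def run_job(job_id, endpoint):
    """Pretend to run a job remotely; occasionally 'fail' to exercise failover."""
    if random.random() < 0.2:
        raise RuntimeError(f"{endpoint} unavailable")
    return f"job {job_id} finished on {endpoint}"

def submit_with_failover(job_id):
    # Try each endpoint in turn until one succeeds.
    for endpoint in ENDPOINTS:
        try:
            return run_job(job_id, endpoint)
        except RuntimeError:
            continue
    return f"job {job_id} failed on all endpoints"

with ThreadPoolExecutor(max_workers=8) as pool:
    for result in pool.map(submit_with_failover, range(10)):
        print(result)
```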

“In short, NGA’s distributed architectures and cloud-based approach can be tailored to fully exploit HPC systems as tools,” Highnam said.

PARTNERING WITH HPC EXPERTS

NGA’s commitment to enhance its tradecraft with HPC capabilities led the agency to host a geocomputation summit, “Transforming National Security with High Performance Computing,” in February 2017. The DOE Oak Ridge National Laboratory was a close partner in the organization of the event. Technical experts from successful external organizations described why and how they use HPC to move the boundaries of the possible. Industry pioneers speaking at the summit included Vint Cerf of Google; Shaun Gleason and Jeff Nichols, both of ORNL; Doug Cutting of Cloudera; Bill Gropp of the National Center for Supercomputing Applications at the University of Illinois; Ryan Quick of PayPal; and Mark Dumas of PlanetRisk.

The event confirmed that both the private and public sectors are in an HPC revolution. Speakers emphasized the importance of learning mission problems and workflows, of partnering closely with end users on specific targeted problems, and of understanding how to achieve the greatest impact with the power of HPC tools. They advised that HPC success requires organizational commitment, cultural adaptation, risk tolerance and rigorous evaluation. Major companies that are significant users of HPC have invested years in building their teams and systems.

PREPARING TO MOVE OUT

NGA is preparing for its own high-speed future with an increasing emphasis on computational thinking and data science. Both are needed to fully exploit the power of HPC systems. The agency currently has limited internal expertise in the development, use and maintenance of HPC, according to Highnam, but he said that there is a substantial base of experience within the government, including the DOE laboratories, that NGA can tap as it continues to develop its own expert computing cadre.

NGA Research is applying the same approach to implementing HPC tools as it does to planning and executing other agency projects, according to Highnam, including “starting with the Heilmeier questions.” The questions, developed by George H. Heilmeier when he served as director of the Defense Advanced Research Projects Agency in the 1970s, must be answered before pursuing any proposed research. Among them are “What are you trying to do?” and “If you’re successful, what difference will it make?” (See box.)

The National Strategic Computing Initiative, established by executive order in July 2015, is also guiding NGA’s development of high-performance computing capabilities. The NSCI explicitly seeks to enhance U.S. leadership in HPC through collaboration among government, industry and academia. The order directs broad deployment of HPC capabilities that leverage public-private collaborations and transition results from research to operations.

NSCI is aggressive in seeking exascale computing (see “A new vocabulary” sidebar). It intends to ensure application to both data analytics and modeling and simulation, as well as a robust and enduring national HPC ecosystem of expertise, organizations and tools. DOE is leading the charge in terms of exascale computing investments.

The expected computational load from significant analytical investments in structured observation management, full-motion video and foundation GEOINT is a major driver for accelerating NGA’s high-performance computing capabilities, according to Highnam. He said substantial HPC capability is available for the agency to exploit as it meets the challenges of massive — and growing — data volume.

NGA and its mission partners in the National System for Geospatial Intelligence, as well as its international partners in the Allied System for Geospatial Intelligence, have deep mission knowledge that will serve to ensure that HPC is applied to critical current and emerging GEOINT challenges.

“With our partners we are investing in HPC resources, tools and expertise,” Highnam said. “Just as HPC transformed major industries, we expect that HPC will be a fundamental enabler for NGA.”

The new capabilities will help NGA’s partners and allies to achieve more and know more about the nation’s adversaries. They will be able to better anticipate activities and threats, and above all, to maintain the ever-crucial decision advantage.
