NASA’s Cloud Computing Story: From Inception to Impact

A cluster of computers at the NASA Advanced Supercomputing Facility (Image Credit: Tom Trower)

-Vishnu Chandrasenan, Engineering Intern at NASA Goddard Space Flight Center

For an agency with over 3,500 websites, intranets, extranets and public-facing applications, transitioning to cloud computing posed barriers on a scale few other organizations or corporations face. NASA’s decision to kick off its migration to the cloud opened avenues that now form the components of a classic cloud computing case study.

Space Flight Operations Facility at NASA JPL (Credit: NASA/JPL-Caltech)

NASA’s 2016 financial report notes that the agency spends $1.4 billion annually on its IT assets. This spans more than 550 different information systems that collect and process scientific and engineering data, control spacecraft and satellites, provide critical mission assurance, and secure the underlying operations.

With community-wide code collaboration and public dissemination of information as its key agendas, NASA scripted a successful transition to the Amazon Web Services (AWS) cloud platform, constituting one of the largest government migrations to the cloud. Today, NASA has nearly 5 million components of engineering and technical data, as well as 110 applications, on a combination of the AWS public cloud and AWS Virtual Private Cloud (VPC). Much of NASA’s classified information has been deliberately shielded from this migration whirlwind, especially mechanisms related to the operation and control of the International Space Station. The agency projects roughly 40% savings in operating expenditure (OPEX) on IT systems deployment and maintenance. NASA’s flagship website, www.nasa.gov, for example, is now hosted on AWS GovCloud.

The backstory: OpenStack was born

The cloud computing story at NASA goes back much further than its recent, highly publicized relationship with AWS. When Infrastructure-as-a-Service (IaaS) was still in its conceptual phase, NASA itself set out to become a cloud service provider. The project was christened Nebula, NASA’s open-source cloud computing platform.

Nebula was a landmark project, ahead of its time in unique ways. In the battle between contrasting ideologies, going commercial or remaining open, NASA decided to build Nebula on a fully open-source framework.

Nebula Cloud Container at NASA Ames Research Center (Credit: Gretchen Curtis)

One of the principal requirements was a Cloud Controller: a tool that could turn a single server or a pool of servers into distinct virtualized servers that developers could provision remotely using software. Today this capability is ubiquitous, with purpose-built tooling on platforms like AWS, Microsoft Azure and IBM Cloud. Back then, NASA’s developers, constrained by the lack of a suitable open-source framework, decided to build a cloud controller from scratch in Python. Engineers at Rackspace Inc., a Texas-based hosting and storage company, sensed a meaningful collaboration with NASA’s Nebula team; the two groups met and eventually agreed to collaborate on a larger scale. NASA already had a working Cloud Controller, while Rackspace had cutting-edge storage technology at its disposal. The two organizations joined forces to develop code individually and deploy it together as a single open-source project. The association proved so successful that it laid the foundation of what later became known as the widely adopted OpenStack cloud computing framework.
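
Nebula’s original controller code predates the modern ecosystem, but its descendant OpenStack still exposes the same core idea: pooled hardware carved into virtual servers on demand, through software. The sketch below uses today’s openstacksdk purely to illustrate that idea; the cloud profile, image, flavor and network names are placeholders, not anything from NASA’s deployment.

```python
# A minimal sketch of programmatic provisioning with openstacksdk,
# the kind of capability the Nebula cloud controller pioneered.
# The cloud profile "my-cloud" and all resource names are hypothetical.
import openstack

# Credentials and region come from a clouds.yaml entry named "my-cloud".
conn = openstack.connect(cloud="my-cloud")

# Look up the building blocks of a virtual server.
image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

# Turn pooled hardware into a distinct virtualized server, remotely.
server = conn.compute.create_server(
    name="nebula-style-node",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(f"Provisioned {server.name} at {server.access_ipv4}")
```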

Riding the success wave: OpenStack’s phenomenal triumph

The OpenStack project became widely accepted across the technology community. Today, telecommunications companies are among the heaviest users of the framework, which comprises some 20 million lines of code. More than 580 companies have invested in its support, and over 40,000 people continue to play an active role in the community. After this early conceptualization, NASA stepped back from the project; it ultimately saw itself as a cloud service consumer rather than a cloud service provider.

Today, OpenStack has established its presence at companies like AT&T, Ericsson, IBM, Intel, China Telecom, Cisco, T-Mobile, Arista, Comcast, Google Cloud Platform, Juniper Networks, The Linux Foundation, Nokia, Samsung and others, making it one of the most widely used open-source cloud computing frameworks.

OpenStack’s modular architecture will likely continue to play a significant role in crucial technologies such as next-generation wireless and network virtualization. It lets cloud service providers use OpenStack as a virtualization layer and develop APIs in a scalable way that can be standardized across data centers and deployments. It is no coincidence that the largest telecommunications companies are investing heavily in both 5G and OpenStack at the same time.

The Present: NASA’s rising association with AWS

Recently, and at breakneck speed, NASA migrated portions of its huge IT infrastructure onto a secure, virtualized, elastic and efficient cloud infrastructure deployed on Amazon Web Services (AWS). For AWS, this is an opportunity to showcase GovCloud, whose security policies and processes satisfy government regulatory compliance requirements.

NASA’s Tom Soderstrom at AWS re:Invent (Credit: AWS)

Using a combination of AWS DynamoDB (a NoSQL database service), SQS, SNS, S3 and Elastic Compute Cloud (EC2), NASA’s Jet Propulsion Laboratory in Pasadena, California scripted a phenomenal success, enabling large-scale telemetry streaming and data analysis for the Mars Curiosity project. The project’s story formed the basis of a talk at AWS re:Invent 2014 by Tom Soderstrom, NASA JPL’s Chief Technology Officer.
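
JPL’s pipeline itself is not public in code form, but the service combination named above suggests a familiar ingestion pattern: raw frames land in S3, an index entry goes to DynamoDB, and a pointer is queued on SQS for analysis workers (SNS fan-out is omitted for brevity). A minimal boto3 sketch, with all bucket, queue and table names hypothetical:

```python
# Illustrative telemetry ingestion using S3 + DynamoDB + SQS via boto3.
# Every resource name below is assumed, not JPL's actual configuration.
import json
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")
table = boto3.resource("dynamodb").Table("telemetry-index")  # hypothetical

BUCKET = "example-telemetry-bucket"  # hypothetical
QUEUE_URL = "https://sqs.us-west-2.amazonaws.com/123456789012/telemetry"  # hypothetical

def ingest(frame_id: str, payload: bytes, sol: int) -> None:
    """Store a raw telemetry frame and announce it to downstream workers."""
    key = f"raw/sol-{sol}/{frame_id}.bin"
    # Durable storage for the raw frame.
    s3.put_object(Bucket=BUCKET, Key=key, Body=payload)
    # Queryable index entry in the NoSQL store.
    table.put_item(Item={"frame_id": frame_id, "sol": sol, "s3_key": key})
    # Hand a pointer to the analysis fleet via the queue.
    sqs.send_message(QueueUrl=QUEUE_URL,
                     MessageBody=json.dumps({"s3_key": key}))
```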

Using Polyphony, a modular workflow orchestration framework built to streamline the use of large numbers of EC2 nodes, NASA designed a hybrid processing model: excess capacity on local machines and spare cycles at supercomputing centers ran alongside, and stayed in sync with, the AWS Cloud. By writing just a single class, cloud architects could fan huge calculations out across AWS EC2 instances, a remarkable improvement in computing performance over previous approaches.
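
Polyphony’s actual API is not reproduced here; the sketch below only illustrates the “single class” pattern the paragraph describes, with every name assumed: one class defines the unit of work, and identical workers, whether on EC2, local machines or a supercomputing center, pull tasks from a shared SQS queue.

```python
# Illustrative only: a single task class plus a generic worker loop.
# Polyphony's real API differs; all names here are hypothetical.
import json
import boto3

class ImageTileTask:
    """One unit of work, fanned out across many pulling workers."""

    def __init__(self, s3_key: str):
        self.s3_key = s3_key

    def run(self) -> dict:
        # Fetch the input from S3, compute, return a small result record.
        body = boto3.client("s3").get_object(
            Bucket="example-mars-data", Key=self.s3_key)["Body"].read()
        return {"s3_key": self.s3_key, "bytes": len(body)}

def worker_loop(queue_url: str) -> None:
    """What each node runs, wherever it lives: poll, work, delete."""
    sqs = boto3.client("sqs")
    while True:
        msgs = sqs.receive_message(QueueUrl=queue_url,
                                   MaxNumberOfMessages=1,
                                   WaitTimeSeconds=20).get("Messages", [])
        for msg in msgs:
            task = ImageTileTask(**json.loads(msg["Body"]))
            print(task.run())
            sqs.delete_message(QueueUrl=queue_url,
                               ReceiptHandle=msg["ReceiptHandle"])
```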

The Future: Exploring the Final Frontier

The NASA-ISRO Synthetic Aperture Radar (NISAR), one of the most expensive Earth radar imaging missions ever planned, could transmit on the order of 100 Terabytes of data per day, at rates of up to 100 Gigabytes per second. Such volumes are far beyond what traditional data centers are built to handle, a classic case for the elastic computing power of AWS EC2. Amazon’s involvement in future NASA missions like the James Webb Space Telescope (JWST), the Europa Lander mission, Mars Rover 2020 and the Asteroid Redirect Mission is paving the way for data gathering and simulation at a scale never previously possible. It marks an interesting new phase: NASA’s return to cloud computing as a consumer, after its significant but quiet contribution at the field’s inception.
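
A quick sanity check helps reconcile the two figures quoted above: 100 Terabytes spread over a day averages to just over 1 Gigabyte per second, so the 100-Gigabytes-per-second figure can only describe short bursts rather than a sustained rate.

```python
# Back-of-envelope check on the NISAR figures cited above.
TB = 10**12  # bytes

daily_volume = 100 * TB            # ~100 TB/day, as cited
sustained = daily_volume / 86_400  # average bytes per second over a day

print(f"Sustained rate: {sustained / 10**9:.2f} GB/s")
# -> roughly 1.16 GB/s sustained; the quoted 100 GB/s would
#    therefore be a short burst rate, not a continuous one.
```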

Credit: NASA/JPL-Caltech

Can we redirect an asteroid from its orbit? Can we find life on Jupiter’s moon Europa? Can we find the next potential Earth? Can we process unimaginable volumes of space simulation and Earth observation data to answer fundamental questions crucial to our existence? While NASA spearheads the scientific effort to answer these questions, much of the enormous compute power for making it a reality lies with AWS, and together they form a formidable union.