CROWDSOURCING DISASTER RELIEF:

@RodrigoNieto
Published in Homeland Security
Jan 24, 2017

Leveraging Big Data to Coordinate Disaster Response and Recovery

The following document is the product of a CHDS project that asked homeland security experts to evaluate how a better use of data could improve homeland security. Because of the nature of the projects, we are making an exception to the editorial guidelines of the collection in order to leave the citations in the way they were offered, instead of using hyperlinks as we normally request.

BY

Anna Brookes (Editor)

Michael Davis

Deanna Kralick

Hoaithi Y.T. Nguyen

Joanna Recor

Anna Schermerhorn-Collins

Introduction

“We all have to understand that there will never again be a major event in this country that won’t involve public participation. And the public participation will happen whether it’s managed or not.”

–Thad Allen

Since its founding, the United States has struggled to develop a strategic and effective federal response to natural disasters. As the population grows and the climate changes,[1] people continue to move into vulnerable areas that are prone to disasters such as flooding, erosion, earthquakes, tsunamis and wind events. Although the havoc created by disasters is not new, population growth and movement increase the scope, variety and frequency of disasters and their effects on our economy, lifestyle and homeland security.[2]

The structure of disaster response and recovery demands resilience and innovation. Traditional government response is slow and depends on community resiliency to “survive” at least the first forty-eight hours. Social media has eroded that waiting period, both by publicly amplifying the demand for service and by enabling crowdsourcing, a medium that can provide massive data on the locations and effects of an event. A downside of crowdsourced data, however, can be a lack of accuracy and security.

Enter big data. The government has collected big data for as long as it has existed. Sources include the public web, social media, mobile applications, federal, state, and local databases, as well as geospatial data, surveys, and Internet-enabled devices and sensors. So, how can disaster response benefit from big data?

This project proposes an agile, real-time information framework to enable quicker response to and recovery from disasters in the United States by combining the massive amount of near-real-time data gathered from crowdsourcing with government big data. We also seek to protect the spontaneity and impulsiveness connected with crowdsourcing by making our framework solid but the platform temporary and specific to a disaster.

Crowdsourcing, at its best, is composed of volunteers who provide, create and organize information. Disaster response is built upon volunteerism and resiliency. Non-governmental organizations (NGOs) like the American Red Cross have harnessed the power of volunteers by creating response frameworks that can be set up and dismantled quickly. We believe that our program can do the same.

Executive Summary

Data is everywhere, flowing from all the networks around us. Sources include the public web, social media, mobile applications, federal, state, and local databases, as well as geospatial data, surveys, and Internet-enabled devices and sensors. Big Data is the analysis of this collected data, a way to make meaning out of chaos. It is a way to “solve problems, improve well-being, and generate economic prosperity.”[3] In other words, Big Data is not just the collection of large amounts of data, but also the ability to use the data collected in a way that is meaningful for the user. Big Data is collected from almost everywhere — cell phones, cameras, wearable technologies, sensors in connected cities, emails, and any other digital sources today or in the future. It is collected by everyone and for everything. These enormous “haystacks” of data, along with the technologically advanced computational power of the modern age, allow for unexpected discoveries and innovations that advance the overall quality of life for all people.

Advances in technology are exponential. Every new advance in technology expands the possibilities for data collection, storage, and processing. What does this mean for the government, and what does it mean for the people? It means “big” data and even bigger “big” data policies. So is this a good thing or a bad thing? When we talk about big data and disaster relief and management, whatever helps disaster relief come sooner and whatever makes disaster management more efficient is good. Whatever hinders evacuations and whatever delays relief is bad. Current big data policies are doing a little bit of both, fortunately and unfortunately.

Disaster management is evolving exponentially through the applications of Big Data. Predictive analytics lead to more accurate forecasting models of impending storms. Crisis mapping provides comprehensive disaster data to the public in real-time. Even relief organizations are using different toolsets provided by Big Data Technology in order to better meet their mission objectives. The response and relief efforts during Superstorm Sandy are a case in point. However, without collaboration and a means of sharing data, as disparate as the relief organizations were, so were the data sets among them.

While the focus of this white paper is the government’s use of big data to improve its emergency response to natural disasters, a wide range of other government agencies conducting other important work of governance and services encounter the same legal and ethical issues. FEMA’s data sharing policy seems to strike the right balance among the competing interests of protecting individuals’ privacy and advancing organizations’ goal of using big data to improve service delivery. FEMA’s policy recognizes that the notice and consent framework no longer adequately addresses privacy concerns because, realistically, no one reads the pages and pages of fine print typical of these notices. Thus, the context of data use must be taken into consideration when formulating a new regulatory scheme for the collection, use and sharing of big data. There is an individualized and measured approach in FEMA’s policy that should serve as a model for other agencies as we all wait for a uniform regulatory standard promulgated by either Congress or the Federal Trade Commission (FTC).

We chose to look at the success of social media in disaster response through social computing and to explore adapting crowdsourcing for use in “pop-up” applications for individual disasters. Coordinating data from sources such as mapping, census figures, weather and sensing instruments with posts, photos and entries on sites such as Twitter and Facebook enables confirmation of disaster damage, exposes pockets of destruction that may have gone unnoticed and prevents duplication of effort by government and NGO relief agencies and responders. Combining this information with verified big data maintained by government agencies can provide the accuracy and confirmation missing in social media. By providing a basic framework to coordinate the response effort and protect privacy and data concerns, the government can partner with capable crowds. Crowdsourcing allows these groups to participate in the response to and recovery from disasters, fostering resiliency, a key concept of DHS preparedness and response. We seek to create an agile, real-time information framework to enable quicker response to and recovery from disasters in the United States by combining the massive amount of near-real-time data gathered from crowdsourcing with government big data. We also seek to protect the spontaneity and impulsiveness connected with crowdsourcing by making our framework solid but the platform temporary and specific to a disaster.

Databases, whether controlled by the government or private entities, are a large target for hackers. Our team foresees multiple dangers to the information within our database. Threats from adversaries such as foreign governments, organized criminal groups, and individual hackers, combined with the overall inaccuracy associated with crowdsourced information, could damage the effectiveness and integrity of our project. The cybersecurity challenge, then, is to meet the “CIA Triad” of information assurance (IA): confidentiality, integrity, and availability, defined by the National Institute of Standards and Technology (NIST) as the three security objectives for IT systems. We must also examine the regulatory and legal environment under which our data will be housed, which is subject to protective laws, regulations and executive orders.

We recognize the need for the federal government to respond quickly and effectively, in as close to real time as possible. We see the effectiveness of crowdsourced information in disasters, as utilized by NGOs. Crowdsourced data can also be easily (and maliciously) fabricated and may, by its very nature, be inaccurate or duplicative. The government collects and stores massive amounts of data: big data. Coordinating the spontaneity of crowdsourced data with the big data produced by government agencies yields information that can be used to target disaster response where it is needed. While caution is needed to develop policies and legal frameworks to protect the program, most of the necessary policy already exists, including cybersecurity policy. Implementing this platform adds another tool to the federal disaster response toolbox, enhancing resiliency and easing effects.

INTRODUCTION TO GOVERNMENT BIG DATA AND HOW DHS PROTECTS ITS “HAYSTACKS”

By Joanna Recor

Introduction:

The ancient Library of Alexandria is the most legendary library of all time. Its collection contained the total sum of ancient Egypt’s knowledge and lore as well as the works of ancient scholars. Ptolemy I Soter, a successor of Alexander the Great, is said to have founded the library during his reign in the 3rd century BC. The library soon became the intellectual and scientific center of the ancient world and remained that way until its untimely destruction in the third century AD. From all over the Mediterranean, scholars and scientists traveled to Alexandria to study its works on everything from literature to mathematics to botany to astronomy. Why bring up an ancient library in a paper about Big Data, you ask? Well, the ancient Library of Alexandria was Big Data, perhaps not as big as Big Data is now, but for the ancient world, this was as big as it got. The Egyptians knew of its importance and often discussed the library’s protection.[4] You could even say that the ancient Egyptians understood the power of the collection and analysis of large amounts of data. It was their harnessing of this information that elevated ancient Egypt to a world power.

Today, the expansive amount of readily available data is also leading a transformation, a digital transformation, and whoever can manage and analyze this data will likewise be elevated to a world power. Data is everywhere, flowing from all the networks around us, and the choices we make can be discerned through the collection, organization, and taxonomy of this data. Even more important than explaining the choices we have already made is predicting the choices we will make in the future through the analysis of Big Data. The U.S. government maintains one of the largest collections of Big Data in the world. This data is collected and analyzed to extract insights in order to make better decisions in support of national security goals and scientific discovery, and to help drive economic growth. In order for the U.S. government to maintain its status as a world power, it must, like ancient Egypt, harness the data it collects and make it available to analytic tools, while at the same time securing it from manipulation and theft.

What is Big Data?

From the earliest forms of writing to modern data centers, humans have been gathering and analyzing data to make decisions. Think of the Domesday Book, ordered by William the Conqueror to analyze his kingdom’s wealth; U.S. Census reports used to analyze the country’s human assets; or even the evolution of the centralized computing systems needed by businesses and private citizens alike to analyze the ever-increasing amounts of available data. The rise in technology over the past century has led to an overflow of data, which has in turn led to ever more sophisticated storage systems. While the increase in information was beneficial for humankind, the ability to store and retrieve this data quickly became a problem for the libraries and institutions entrusted with it; out of this need, innovation created automation and advanced storage systems. Big Data is the analysis of this collected data.

Big Data is a way to make meaning out of chaos and a way of making “finding a needle in a haystack” practical.[5] It is a way to “solve problems, improve well-being, and generate economic prosperity.”[6] In other words, Big Data is not just the collection of large amounts of data, but also the ability to use the data collected in a way that is meaningful for the user — something that is on an unbounded trajectory due to the ever-increasing processing capabilities and connectedness of our modern age. The sources of the data collected also continue to grow. These sources include the public web, social media, mobile applications, federal, state, and local databases, as well as geospatial data, surveys, and Internet-enabled devices and sensors.[7] In fact, due to the “Internet of Things” (IoT), Big Data is collected from almost everywhere — cell phones, cameras, wearable technologies, sensors in connected cities, emails, and any other digital sources today or in the future. It is collected by everyone and for everything — advertising companies trying to sell their goods, scientists gathering information about a certain theory, weather forecasters attempting to predict the next superstorm. These enormous “haystacks” of data, along with the technologically advanced computational power of the modern age, allow for unexpected discoveries and innovations that advance the overall quality of life for all people.

Along with the increase in data collection, the analysis of the data has also increased to the point where it now happens in real time. This means that while a person is connected to the web, there is an ever-growing potential for the analytics to affect that person’s immediate decisions.[8] Any public online activity or sensor activity can be analyzed and then direct the individual’s next decision, something that seems like it comes out of a science fiction novel. But thought of in a different way, real-time data analytics make it possible to save lives. For example, vehicles connected to the web would be able to send automatic updates to the next vehicle, making accidents less likely to occur. Another example of real-time Big Data analytics saving lives is a study that synthesized data samples from neonatal care units and predicted which newborns were more likely to contract infectious diseases.[9] These newborns were then placed more frequently on doctors’ rounds so that any temperature or heart-rate increases indicating that a newborn was becoming ill could be caught early.

Humans cannot help but analyze the data in front of them to make better decisions about the world around them. The only things that have changed since the Domesday Book, or since the inception of the U.S. Census reports, are the amount of data collected in those “haystacks” and the speed with which the data can be processed. Who better to look at these piles of data with the most modern technology available than the federal government?

Big Data and Government Decision Makers

Big Data presents a challenge for government decision makers since agencies are still siloed into their separate areas of responsibility. Each decision maker tends to have his or her own focus and to draw links to that focus out of the Big Data. In order to keep costs down, the data is kept in “data pools” from which each user can mine the desired information. Keeping the Big Data in one location, and not limiting it to one area, makes the data more useful to more agencies. Federal agencies currently use Big Data analytics in a multitude of ways, from fraud detection and financial markets analysis, to fighting crime, to environmental protection and energy exploration.

Big Data and Homeland Security

Over two million travelers fly into or within the United States every day, and over a million more cross at land borders.[10] The Department of Homeland Security (DHS) is charged with verifying the identity of each of those travelers and with determining whether or not they might pose a security threat to the country. This is DHS’s “haystack” — a classic Big Data problem. Along with being tasked to pick through this huge amount of data, DHS must at the same time follow the regulations outlined in a 2012 Presidential Memorandum for Managing Government Records.[11] This memorandum mandated that government agencies transition to electronic records in order to increase transparency and save taxpayer dollars; this includes maintaining all service email records and an eventual transition to fully digital management of all permanent records by the end of 2019.[12]

DHS began by convening the owners of the data systems along with representatives from its privacy, civil liberties, and legal offices. This group decided what access to grant, and how to grant it, to each group of users in DHS. They then set out to code the information with a specific set of tags based on the protection levels of the data. This tagging of information is critical to DHS’s system because the agency can then keep track of where the data came from, where it went, and under what authority.

In order to ensure DHS maintains its efficiency and lawfully uses the massive amounts of data it collects, the agency has oversight from the Office of the Chief Information Officer. This office’s mission is to maintain privacy, civil liberties, and legal oversight, and it focuses on whether the individual or group accessing the information has an official “need to know” the information. Although some of the information collected is classified, much of the information maintained by DHS is unclassified but considered sensitive, such as Personally Identifiable Information (PII). PII includes birthdays, Social Security numbers, and other details that could identify an individual. The Office of the Chief Information Officer then analyzes the tagged data collected, and who is looking at it, to safeguard against theft or misuse.

The data itself is grouped into three categories.[13] These categories are basic biographical data (name, date of birth, nationality); extended biographical information such as addresses, email addresses, or phone numbers; and, most sensitive, the records of the encounters themselves, which give detailed accounts of interceptions of or encounters with the individual. The Chief Information Officer then decides who in DHS gets access to what information to do their job. For example, a technician for U.S. Border Patrol may need only someone’s name and date of birth to see if that individual is wanted, whereas someone working for Homeland Security Investigations might need to see prior inspection data and financial data to determine whether that individual has committed a crime. Either way, the information from the various “haystacks” is protected and given only to the worker who has the “need to know”.
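The tiering and need-to-know model described above can be illustrated with a small sketch. This is a hypothetical illustration only, not DHS’s actual system; the tier names, roles, and access rule below are assumptions made for the example.

```python
from dataclasses import dataclass, field
from enum import Enum

class Tier(Enum):
    """Hypothetical protection tiers mirroring the three categories above."""
    BIOGRAPHICAL = 1           # name, date of birth, nationality
    EXTENDED_BIOGRAPHICAL = 2  # addresses, email addresses, phone numbers
    ENCOUNTER_RECORD = 3       # detailed accounts of encounters (most sensitive)

@dataclass
class TaggedRecord:
    """A record tagged with its source, authority, and protection tier."""
    data: dict
    source_system: str   # where the data came from
    authority: str       # authority under which it was collected
    tier: Tier
    access_log: list = field(default_factory=list)

# Hypothetical role-to-tier mapping enforcing "need to know".
ROLE_MAX_TIER = {
    "border_patrol_technician": Tier.BIOGRAPHICAL,
    "hsi_investigator": Tier.ENCOUNTER_RECORD,
}

def request_access(record: TaggedRecord, role: str) -> dict | None:
    """Return the record only if the role's clearance covers its tier; log every attempt."""
    allowed = ROLE_MAX_TIER.get(role, Tier.BIOGRAPHICAL).value >= record.tier.value
    record.access_log.append({"role": role, "granted": allowed})
    return record.data if allowed else None

# Example: a Border Patrol technician can read basic biographical data...
rec = TaggedRecord({"name": "J. Doe", "dob": "1980-01-01"},
                   source_system="entry_system", authority="example_statute",
                   tier=Tier.BIOGRAPHICAL)
print(request_access(rec, "border_patrol_technician"))   # granted
# ...but not an encounter-level record, which requires a higher clearance.
rec.tier = Tier.ENCOUNTER_RECORD
print(request_access(rec, "border_patrol_technician"))   # None (denied, but logged)
```

The essential design point is that every access attempt is logged against the record’s tags, so an oversight office can later reconstruct who saw what, from which source, and under which authority.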

Conclusion

In order for modern-day Big Data not to meet the same fate as the ancient Library of Alexandria (dissemination into smaller repositories after a catastrophic incident), DHS needs to maintain its regulation of who has access to the data and who should have access to the data. Since technologies are constantly improving and evolving, DHS should also look into these new technologies and evolve along with them. Otherwise, Big Data will exceed the government’s ability to analyze it and become useless. If there were no analysis going on, why collect the data in the first place?

BIBLIOGRAPHY

Adshead, Antony. “Big Data Storage: Defining Big Data and the Type of Storage It Needs.” ComputerWeekly, April 2013. http://www.computerweekly.com/podcast/Big-data-storage-Defining-big-data-and-the-type-of-storage-it-needs.

Brooks, Chuck, and David Logsdon. “The Alchemy of Big Data in Government.” Text. TheHill, December 21, 2015. http://thehill.com/blogs/pundits-blog/technology/263890-the-alchemy-of-big-data-in-government.

Electronic Privacy Information Center. “EPIC — Big Data and the Future of Privacy.” Accessed January 7, 2017. https://epic.org/privacy/big-data/.

Executive Office of the President. “Big Data: Seizing Opportunities, Preserving Values,” May 2014. https://www.whitehouse.gov/sites/default/files/docs/big_data_privacy_report_may_1_2014.pdf.

FedTech Staff. “4 Infrastructure Requirements for Any Big Data Initiative.” Text. FedTech, December 22, 2016. http://www.fedtechmagazine.com/article/2016/12/4-infrastructure-requirements-any-big-data-initiative.

Helms, Josh. “IBM Center for The Business of Government: Five Examples of How Federal Agencies Use Big Data.” Accessed January 7, 2017. http://www.businessofgovernment.org/BigData3Blog.html.

Kridel, Tim. “Storage Wars: How the Federal Government Is Tackling Data Growth.” Text. FedTech, June 10, 2016. http://www.fedtechmagazine.com/article/2016/06/storage-wars-how-federal-government-tackling-data-growth.

Marzullo, Keith. “Administration Issues Strategic Plan for Big Data Research and Development.” Whitehouse.gov, May 23, 2016. https://www.whitehouse.gov/blog/2016/05/23/administration-issues-strategic-plan-big-data-research-and-development.

Networking and Information Technology Research and Development Subcommittee. “The Federal Big Data Research and Development Strategic Plan: The Networking and Information Technology Research and Development Program,” May 2016. https://www.whitehouse.gov/sites/default/files/microsites/ostp/NSTC/bigdatardstrategicplan-nitrd_final-051916.pdf.

O’Brien, Adelaide. “The Impact of Big Data on Government,” October 2012. http://www.ironmountain.com/Knowledge-Center/Reference-Library/View-by-Document-Type/White-Papers-Briefs/Sponsored/IDC/The-Impact-of-Big-Data-on-Government.aspx.

Office of the Press Secretary. “Fact Sheet: Cybersecurity National Action Plan.” Whitehouse.gov, February 9, 2016. https://www.whitehouse.gov/the-press-office/2016/02/09/fact-sheet-cybersecurity-national-action-plan.

Silwa, Carol. “Understanding Stripped-down Hyperscale Storage for Big Data Use Cases.” SearchStorage, March 2013. http://searchstorage.techtarget.com/podcast/Understanding-stripped-down-hyperscale-storage-for-big-data-use-cases.

“The President’s National Cybersecurity Plan: What You Need to Know.” Whitehouse.gov, February 9, 2016. https://www.whitehouse.gov/blog/2016/02/09/presidents-national-cybersecurity-plan-what-you-need-know.

van Rijmenam, Mark. “7 Important Big Data Trends for 2016.” Datafloq, December 2016. https://datafloq.com/read/7-big-data-trends-for-2016/1699.

— — — . “The Top 7 Big Data Trends for 2017.” Datafloq. Accessed January 7, 2017. https://datafloq.com/read/the-top-7-big-data-trends-for-2017/2493.

Chapter 2

BIG DATA POLICIES: CURRENT POLICIES HELPING AND HINDERING DEVELOPMENT OF DIGITAL DATA DRIVEN DECISION MAKING AND THE POTENTIAL IMPLICATIONS FOR DISASTER MANAGEMENT

By Deanna Kralick

Introduction

Advances in technology are exponential. Every new advance in technology expands the possibilities for data collection, storage, and processing. So now we are collecting biometric data at the border. Now we are storing biometric data. And now we are analyzing biometric data. What does this mean for the government, and what does it mean for the people? It means “big” data and even bigger “big” data policies. The bigger the “big” data, the bigger the “big” data policies. So is this a good thing or a bad thing? It seems we are asking ourselves this question all the time. Is encryption good, or is encryption bad? Is asking for social media account information at the border good, or is it bad? When we talk about big data and disaster relief and disaster management, whatever helps disaster relief come sooner and whatever makes disaster management more efficient is good. Whatever hinders evacuations and whatever delays relief is bad. Current big data policies are doing a little bit of both, fortunately and unfortunately.

Big Data Policies

Strategy 4 in The Federal Big Data Research and Development Strategic Plan states that the goal is to increase the value of data through policies that promote sharing and management of data.[14] However, much of the talk coming out of Washington is about how to minimize the harm caused by big data especially surrounding privacy concerns.[15] Part of the strategy insists that federal agencies that provide research and development funding can assist in increasing the value of data through policies to incentivize big data and data science research communities to provide comprehensive documentation on their analysis workflows and related data.[16] This portion of the strategic plan may be the single most helpful strategy to improve disaster management through use of big data.

By incentivizing big data research and development, the government can better use the information to improve disaster management. Specifically, we can use crowdsourcing of big data to help with emergency evacuations. Crowdsourcing is a process of acquisition, integration, and analysis of big data generated by many different sources such as sensors, devices, vehicles, buildings, and a variety of human functions including social media output.[17] Crowdsourcing has many benefits and can include, just as the word suggests, sourcing the crowd to get input and ideas. Iceland used crowdsourcing in 2013 to gather input on a new constitution.[18] In the end the effort failed to produce a ratified document, but many lessons were learned, and most do not consider it a complete failure, seeing useful implications for the future. Aside from constitutions, crowdsourcing is piquing the interest of emergency management. Using crowdsourcing as one way to help disaster victims evacuate will be useful so long as policies do not get in the way. Using policies to incentivize the use and production of big data, including the data mined during crowdsourcing, may also help to offset many of the concerns surrounding privacy and civil rights, as long as people believe they are getting a return on their investment and that it isn’t being used against them.

So what are the current policies surrounding big data and how are they helping and/or hindering big data? What are the implications of these policies and their effects on crowdsourcing and the future of crowdsourcing to assist with emergency evacuations?

How Big Data Policies are Helping Disaster Management

The U.S. is actually quite notable for not having adopted comprehensive data protection laws, compared to Europe, where most countries have done so.[19] The lack of policy is actually the most helpful thing at the moment. It gives federal agencies, state and local governments, and emergency response agencies more freedom to use openly available information, for example the freedom to use social media to understand the locations of communities trapped after a storm. Growing concerns about privacy and civil rights violations will eventually demand stricter policy on the collection and use of big data. We are at an important crossroads for big data and need to be careful about which policies are implemented so as not to disrupt the helpful use of big data in emergency and disaster management.

The biggest help for big data development actually lies in research and education on the issues. For example, during the first week of 2017, renowned economics expert Donna Ginther, professor of economics and director of the KU Center for Science, Technology & Economic Policy, was scheduled to address a presidential panel on the issues of big data.[20] The panel was set to discuss ways to increase the use and availability of big data to build policy and influence program design.[21] As part of her research, she advocated for a web-based infrastructure for information sharing so that government and policymakers could better understand decisions that pertain to economic growth, technological advancement and other societal issues.[22] This type of education and discussion with senior leaders and decision makers is what is going to bring big data and its policies into the future and onto the right path.

How Big Data Policies are Hindering Disaster Management

Perhaps the least helpful aspects affecting big data emergency management are the privacy policies adopted by most social media platforms. There is general consensus that people are not getting a fair trade-off between the data they provide and the return on their data investment. The social media platform seems to gain more than the user. By creating crowdsourcing for emergency management and disaster relief, we would reverse that imbalance: we would finally get something in return for the data we provide to those companies. But policies would have to be set in place to mandate, or give incentives to, companies to allow for government use of this data. Right now, the general policy of most social media platforms is not to share this information. If the public could be convinced of a return on investment, such as improved emergency evacuations, emergency notifications, or relief supplies, the discussion surrounding privacy rights would change. Opening the issue via crowdsourcing could further this development.

Conclusions

There will always be concerns over privacy. However, we start to see less discussion of the issue when the benefits outweigh the concern. The policies developed over the next decade will determine the future of big data. These policies will have far-reaching consequences for threat management, emergency management, and disaster relief. We need to make sure that, whatever policies take effect, the benefits for the people outweigh their concerns over privacy, while at the same time preserving privacy as much as possible as our world becomes more complex.

Bibliography

Diepenbrock, George. 2017. Economist to Testify before Presidential Commission on ‘Big Data’ Policy Decisions. The University of Kansas website. http://news.ku.edu/2017/01/03/economist-testify-presidential-commission-big-data-policy-decisions

Greenleaf, Graham. 2012. Global Data Privacy Laws: 89 Countries, and Accelerating. Social Science Electronic Publishing, Inc. Retrieved 16 February 2014. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2000034

NITRD. 2016. The Federal Big Data Research and Development Strategic Plan. The Networking and Information Technology Research and Development Program. https://www.whitehouse.gov/sites/default/files/microsites/ostp/NSTC/bigdatardstrategicplan-nitrd_final-051916.pdf

Slate News. 2014. Five Lessons from Iceland’s Failed Crowdsourced Constitution. http://www.slate.com/articles/technology/future_tense/2014/07/five_lessons_from_iceland_s_failed_crowdsourced_constitution_experiment.html

Xu, Zheng, Yunhuai Liu, Neil Yen, Lin Mei, Xiangfeng Luo, Xiao Wei, and Chuanping Hu. 2016. Crowdsourcing based description of urban emergency events using social media big data. http://ieeexplore.ieee.org/document/7381652/

Chapter 3

SUPERSTORM SANDY: HOW NON-GOVERNMENTAL ORGANIZATIONS USE BIG DATA IN DISASTER RESPONSE AND RELIEF

By Anna Schermerhorn-Collins

Abstract

Disaster management is evolving exponentially through the applications of Big Data. Predictive analytics lead to more accurate forecasting models of impending storms. Crisis mapping provides comprehensive disaster data to the public in real-time. Even relief organizations are using different toolsets provided by Big Data Technology in order to better meet their mission objectives. The response and relief efforts during Superstorm Sandy are a case in point. However, without collaboration and a means of sharing data, as disparate as the relief organizations were, so were the data sets among them.

Introduction

On October 29, 2012, Superstorm Sandy made landfall near Atlantic City, New Jersey. Sandy brought loss of life, property damage and hardship from the Caribbean to New England. By mid-November, 97 deaths were attributed to the storm within a 65-mile radius of New York City.[23] Many who chose to remain in their homes were stranded by storm surge. Emergency service and disaster relief personnel were unable to access flooded areas by conventional means, delaying emergency assistance and distribution of supplies. Areas far from the coastal evacuation zones experienced power outages that lasted over a week. Faulty or improper use of generators resulted in deaths due to carbon monoxide asphyxiation. Without power, gas station fuel pumps were inoperable. According to the National Oceanic and Atmospheric Administration (NOAA), the storm caused $68.3 billion in damage.[24] Four years later, effects of Sandy continue to linger.

Despite the losses, without the use of Big Data technology, the consequences of Superstorm Sandy would have been much worse. By using predictive analytics with Big Data, “the science of hurricane prediction has evolved to the point that landfall timing and location can be more accurately predicted, and subsequently countless lives saved.”[25] The National Hurricane Center (NHC) bases storm predictions on over 30 models.[26] The Federal Emergency Management Agency (FEMA) works closely with NOAA, the U.S. Geological Survey (USGS), and other organizations to create real time mapping and analysis: “During Sandy, FEMA accessed more than 150,000 geo-tagged photos from the Civil Air Patrol, which helped the agency perform assessments and make better decisions.”[27] While terrestrial data mapping is a valuable tool for disaster management, it does not present a comprehensive picture of the need for aid in the aftermath of the storm. The challenge for disaster relief personnel is to incorporate the use of Big Data technology so that governmental and non-governmental organizations are able to collaborate by sharing data from multiple sources, including social media crowdsourcing. In this way, Big Data technology may facilitate better recovery and relief efforts in the aftermath of a major event like Superstorm Sandy.

Non-Governmental Organizations and Data Driven Institutions

Crisis mapping and disaster management go hand in hand. Crisis maps track storms using real time visualizations and data analysis. The technology and data giant, Google, has developed various tools to improve disaster response, including Google Person Finder and Google Public Alerts. In preparation for Sandy’s assault on the east coast, Google launched the comprehensive Superstorm Sandy Crisis Map. For this project, Google partnered with a variety of commercial organizations, governmental, and non-governmental sources to provide up-to-date location tracking, public alerts, radar and cloud imagery, evacuation information, shelter information, and YouTube storm footage.[28] As an interactive map, the Superstorm Sandy Crisis Map allowed the user to toggle through the above-listed data sets in real time, thereby delivering real-time information to the public during critical events.

Mapping technologies are not limited to Google. Big Data analytics and mapping were also utilized by the humanitarian nonprofit organization Direct Relief during Superstorm Sandy. Palantir and Esri partnered with Direct Relief to distribute medical resources and supplies to clinics in order to aid people impacted by the storm.[29] Through the use of data-visualization tools, Palantir Technologies conducts analysis and pattern detection by amassing data into a common site.[30] Esri supplies geographic information system (GIS) mapping software, which was used to create a Hurricane Sandy swipe map that overlaid imagery from before and after the storm.[31] By integrating various datasets, Direct Relief’s use of Big Data technology created situational awareness to “designate the most at-risk populations in the path of the storm, prioritize problem areas, and implement a successful and speedy response.”[32] Palantir’s software enabled Direct Relief officials to recognize where bottlenecks in medical supply were likely to occur and make appropriate decisions about where to deploy resources and equipment.[33]

Storm and crisis data were also gathered by crowdsourcing from social media feeds including Twitter and Instagram. There were over 20 million tweets generated and 1.3 million Instagram pics posted during Superstorm Sandy.[34][35] Mining such vast amounts of data requires technology. The operational intelligence software provider, Splunk, provides one example of data mining technology. In the case of Sandy, the charitable segment of Splunk, Splunk4Good, partnered with the nonprofit Geeks without Bounds in analyzing Instagram and hashtag postings in real-time to identify problem areas where resources were likely to run out.

Team members working on the project looked at hashtags and words in Twitter feeds as well as Instagram photos related to Sandy, evacuation rates in specific areas and other keywords about resources, such as power, food, fuel and water. Using that data, the team plotted out locations where supplies might be most needed and got a finger on the pulse of a community’s sentiment about available resources.[36]
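A minimal sketch of the kind of keyword and geo-tag aggregation described above could look like the following. It is illustrative only, not Splunk4Good’s actual pipeline; the keyword list, the input format, and the grid size are assumptions made for the example.

```python
from collections import Counter, defaultdict

# Hypothetical resource keywords of the kind the team tracked.
RESOURCE_KEYWORDS = {"power", "food", "fuel", "gas", "water", "shelter"}

def bucket(lat: float, lon: float, cell: float = 0.01) -> tuple:
    """Snap a geo-tag to a coarse grid cell (roughly 1 km) so nearby posts aggregate."""
    return (round(lat / cell) * cell, round(lon / cell) * cell)

def hotspots(posts: list) -> dict:
    """Count resource-related keywords per grid cell from geo-tagged posts."""
    counts = defaultdict(Counter)
    for post in posts:
        if post.get("lat") is None or post.get("lon") is None:
            continue  # skip posts without usable geo-tags
        words = {w.strip("#.,!").lower() for w in post["text"].split()}
        for kw in words & RESOURCE_KEYWORDS:
            counts[bucket(post["lat"], post["lon"])][kw] += 1
    return counts

# Example with made-up posts: two nearby posts about fuel surface as one hotspot.
sample = [
    {"text": "No fuel anywhere in Midland Beach #Sandy", "lat": 40.573, "lon": -74.094},
    {"text": "Gas lines for hours, need fuel", "lat": 40.574, "lon": -74.093},
    {"text": "Power back on uptown", "lat": 40.800, "lon": -73.950},
]
for cell, kw_counts in hotspots(sample).items():
    print(cell, dict(kw_counts))
```

Snapping geo-tags to a coarse grid is a simple way to turn scattered posts into countable hotspots that can then be plotted on a crisis map.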

Social media as a data source is not without challenges and limitations. The scope and reliability of the data may be affected by privacy controls set by the user. Power outages limit the availability of user platforms and data collection. Geo-tags are not always accurate. Data may be invalidated by redundancies and falsified reports.[37] However, the immediacy provided by social media crowdsourcing is a valuable asset that should be harnessed to assist emergency responders and relief workers in identifying and mitigating urgent situations during disasters.
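One way to mitigate the redundancy and falsified-report problems noted above is to require corroboration before a crowdsourced report is acted upon. The sketch below is a simple illustration under assumed thresholds, not a production validation scheme.

```python
from difflib import SequenceMatcher

def is_duplicate(text_a: str, text_b: str, threshold: float = 0.9) -> bool:
    """Treat near-identical texts (e.g., retweets) as the same underlying report."""
    return SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio() >= threshold

def corroborated(reports: list, min_independent: int = 2) -> list:
    """Keep only reports confirmed by at least `min_independent` distinct users."""
    kept, groups = [], []
    for rep in reports:
        for group in groups:
            if is_duplicate(rep["text"], group[0]["text"]):
                group.append(rep)   # fold near-duplicates into one group
                break
        else:
            groups.append([rep])
    for group in groups:
        if len({r["user"] for r in group}) >= min_independent:
            kept.append(group[0])   # one representative per corroborated report
    return kept

reports = [
    {"user": "a", "text": "Flooding on Ocean Ave, need pumps"},
    {"user": "b", "text": "flooding on ocean ave, need pumps!"},
    {"user": "c", "text": "Bridge out on Main St"},  # single, unconfirmed
]
print(corroborated(reports))  # only the corroborated flooding report survives
```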

Humanitarian organizations like Direct Relief and Geeks without Bounds were not the only agents to utilize mapping and data aggregation during Superstorm Sandy. Crowdsourced data aggregation was also being performed at the community level. With the availability of fuel severely limited in the aftermath of Sandy, a group of students from Franklin High School in New Jersey created a crowdsourced map that identified the status of gas stations in the New York/New Jersey area.[38] As the open or closed status of gas stations was ever-changing, immediate updates to the site were made via email and Twitter. A more recent local data project is the app Mind My Business, a product of Vizalytics Technology. Mind My Business provides real-time information, such as construction and street closures, in an effort to aid local business decision making: “Government officials and residents just didn’t have the information they needed about where the damage was the greatest and how long it would take to fix it.”[39] Founded by a Staten Island, New York, resident whose neighborhood was impacted by Superstorm Sandy, Vizalytics built Mind My Business to provide information affecting the small business owner that residents and government officials may not otherwise have.[40] Ultimately, through the use of crowdsourced data, Sandy relief groups and community organizers were able to mobilize and provide aid to residents in need, often before the arrival of formal organizations such as FEMA or the Red Cross.[41]

Occupy Sandy and the NYC Tech Community

The Occupy movement, famously known for camping out in Zuccotti Park in New York City in the fall of 2011, found a new mission in the aftermath of Superstorm Sandy. Occupy Sandy, an offshoot of Occupy Wall Street, was formed as a disaster relief and mutual aid effort after Sandy made landfall in the New York City area: “While federal mobilization efforts can often take weeks –sometimes months– to reach citizens, Occupy was one of the only local groups capable of quickly mobilizing to help victims.”[42] Occupy volunteers promptly established volunteer hubs and distribution centers. Offers of volunteerism were connected to need through the relief site Recovers.org.[43] A technology team managed efforts using Google documents, Facebook notes, and social media: “While the local tech team sleeps, a shadow corps in London works off-hours to update the Twitter feed and maintain the internet. Some enterprising Occupiers have even set up a wedding registry on Amazon.com with a wish list of necessities for victims of the storm.”[44] Not bound by bureaucracy, Occupy Sandy volunteers canvassed door-to-door for data when formal organizations, including FEMA and the Red Cross, could not. In fact, the visible lack of formal relief organizations was a common complaint among volunteers and affected residents.[45]

As visible and action-oriented as the Occupy Sandy effort was, Occupy’s data collection was inconsistent and often full of holes. As explained by activist Max Liboiron: “A lack of electricity, wet field conditions, inaccessible neighborhoods, and ever-shifting populations of residents and volunteers were only some of the problems faced by post-Sandy canvassers in both grassroots and government efforts.”[46] Without electricity or internet in the locations hardest hit by Sandy, the preferred method of data collection by Occupy volunteers was pen and paper. Survey questions focused on immediate needs, including food, housing, and medical necessities. In effect, Occupy Sandy differed from many government models by collecting crisis data as opposed to storm data: “Occupy Sandy’s data collection formalized interaction patterns that aimed to open up possible futures rather than foreclose upon them, to define disaster in terms of ongoing struggles rather than merely as an extreme weather event causing local damage.”[47] However, while Occupy Sandy volunteers were on the ground triaging the distribution of resources, they failed to aggregate data or produce data sets despite data workshops and hackathons planned within the NYC tech community.[48] The Occupy effort was a case of people before data. The relation of data clusters to each other was lacking, and there was little interoperability between the data sites of the various relief efforts, including Occupy Sandy, FEMA, and the Red Cross. Disparate relief organizations employed technology and data sources independently of each other in order to meet their individual organizational goals.

Supplemental Summary of Comparative Aspects of Data Driven Institutions used in Relief Efforts during Superstorm Sandy*

| Relief Organization | Technology | Data Projects/Applications | Data Sources |
| --- | --- | --- | --- |
| Google | | Superstorm Sandy Crisis Map: real-time information on storm surges, power outages, shelters, evacuation routes, etc.[49] | National Hurricane Center, Weather.com, USGS, US Naval Research Laboratory, live cameras, YouTube videos, etc.[50] |
| Direct Relief | Palantir Technologies, Esri | Deployment of medical equipment, health resources, food and shelter | Data integration, visualization, analysis |
| Geeks without Bounds, NY Tech Community, etc. | Splunk, etc. | Aid distribution, fuel locations | Crowdsourcing: Twitter, Instagram |
| Occupy Sandy | Recovers.org, Google Docs, Facebook notes, pen and paper | Connect offers of help with need for donations of food and supplies | Door-to-door canvassing |

*this list is not exhaustive

Recommendations

Big Data technology can positively impact future disaster management by providing a platform that would facilitate sharing between government and non-governmental relief organizations so that data collection may be validated and aggregated in a central location. The use of Big Data during Superstorm Sandy was significantly improved compared to prior disasters: “While Google Earth had just been released when Hurricane Katrina struck the Gulf Coast in 2005, and the Haitian earthquake represented something of a test case for technology-based disaster response at a distance, the nearly 20 million tweets about Hurricane Sandy (Twitter, n.d.) provide a sufficiently robust source of data to map the data shadows of the storm.”[51] Following Superstorm Sandy, Big Data applications for disaster response continue to grow. Facebook is in the process of developing a repository for disaster information, referred to as ‘crisis hubs,’ that plans to amalgamate Facebook’s Safety Check, Facebook Live, physical response, and news and chatter from the scene into one space.[52] Of course, relying on a crowdsourced data hub for disaster management, such as the one proposed by Facebook, is problematic if the power goes out and the internet fails: there will either be incomplete data or no data. Internet failure complicated the collection of crowdsourced information in many areas left without electricity by Superstorm Sandy.

Regardless, the establishment of a common reservoir for data is a big move forward for disaster response. Liboiron addressed the interest in establishing a collaborative platform in the aftermath of Sandy:

“In the months after the storm while triage was still in full swing, representatives from both grassroots and official agencies began discussing ways to formalize collaborative relationships through reliable data infrastructures to obtain a clearer, bigger, shared picture of needs on the ground. Grassroots groups and survivors speak of a “common core data set,” a large-scale, open, agreed-upon set of survey questions that would be asked regardless of the group conducting the canvassing.”[53]

Expanding on Liboiron’s findings, a common core data set should not be limited to survey questions. Crowdsourced data, when aggregated appropriately, provides a picture of real-time needs on the ground. The problem is that social media, rich with crowdsourced data, does not currently have a mechanism to coordinate information sharing and resource use between independent relief organizations.[54] Through the use of Big Data technology, which can collect and analyze crowdsourced and canvassed data, disparate groups like Direct Relief, Occupy Sandy, the Red Cross, and FEMA will be able to coordinate so that aid and resource distribution are well-apportioned and expeditiously deployed in future relief efforts.
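In practice, a common core data set could start as nothing more than an agreed-upon record schema that every canvassing group, from ad hoc volunteer hubs to FEMA, fills in the same way. The sketch below is a hypothetical illustration of that idea; the field names are assumptions, not an existing standard.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class CoreCanvassRecord:
    """Hypothetical shared schema for door-to-door and crowdsourced needs data."""
    collected_by: str        # organization submitting the record
    collected_at: str        # ISO 8601 timestamp
    location: str            # address or coarse geo reference
    needs: list              # e.g., ["food", "medical", "housing"]
    households_affected: int
    notes: str = ""

def to_shared_format(record: CoreCanvassRecord) -> str:
    """Serialize to JSON so any organization's tools can ingest the record."""
    return json.dumps(asdict(record))

# Example: the same schema whether the canvasser is a volunteer hub or an agency.
rec = CoreCanvassRecord(
    collected_by="volunteer_hub_rockaways",
    collected_at=datetime.now(timezone.utc).isoformat(),
    location="Beach 96th St, Queens, NY",
    needs=["food", "medical"],
    households_affected=12,
)
print(to_shared_format(rec))
```

The value is not in any particular format but in the agreement itself: once every group writes records the same way, aggregation across organizations becomes a merge rather than a translation problem.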

BIBLIOGRAPHY

#OCCUPYDATA NYC. Accessed December 19, 2016, http://occupydatanyc.org/

Cowan, A.L., J. Goldstein, J. D. Goodman, J. Keller, and D. Silva, “Hurricane Sandy’s deadly toll.” The New York Times, November 17, 2012, accessed December 19, 2016. www.nytimes.com/2012/11/18/nyregion/hurricane-sandys-deadly-toll.html

Direct Relief. “Hurricane Sandy Relief & Recovery,” accessed December 6, 2016. https://www.directrelief.org/emergency/hurricane-sandy-relief-and-recovery/

Esri. “Hurricane Sandy: TheAfterMap,” accessed January 3, 2017. http://www.esri.com/services/disaster-response/hurricanes

Feuer, Alan. “Occupy Sandy: A movement moves to relief.” The New York Times, November 11, 2012, accessed November 17, 2016, http://www.nytimes.com/2012/11/11/nyregion/where-fema-fell-short-occupy-sandy-was-there.html

FYI Solutions. “Hurricane Predictions and Big Data,” November 8, 2012, accessed December 6, 2016. http://www.fyisolutions.com/blog/hurricane-predictions-and-big-data/

Gao, Huiji, Geoffrey Barbier, Rebecca Goolsby, and Daniel Zeng, Harnessing the crowdsourcing power of social media for disaster relief. Arizona State Univ Tempe, 2011.

Harris, Derrick. “As Sandy strikes, another big data opportunity emerges,” Gigaom, October 30, 2012, accessed December 31, 2016, https://gigaom.com/2012/10/30/as-sandy-strikes-another-big-data-opportunity-emerges/

Heaton, Brian. “How emergency managers can benefit from big data.” Emergency Management (2013).

Horowitz, Brian T. “Big Data Analytics, HIE Could Aid Hurricane Sandy Recovery Efforts,” eWeek, December 30, 2012, accessed December 31, 2016. http://www.eweek.com/enterprise-apps/big-data-analytics-hie-could-aid-hurricane-sandy-recovery-efforts

Kilkenny, Allison. “Occupy Sandy Efforts Highlight Need for Solidarity, Not Charity” The Nation, November 5, 2012, accessed December 6, 2016. https://www.thenation.com/article/occupy-sandy-efforts-highlight-need-solidarity-not-charity/

Liboiron, Max. “Data activism: Occupy Sandy’s canvassing practices after Hurricane Sandy,” Superstorm Research Lab, August 11, 2014, accessed December 6, 2016, https://superstormresearchlab.org/2014/08/11/data-activism-occupy-sandys-canvassing-practices-after-hurricane-sandy/

Liboiron, Max. “Disaster Data, Data Activism,” in Extreme Weather and Global Media, ed. Julia Leyda and Diane Negra (Routledge), 2015: 155. https://maxliboiron.files.wordpress.com/2013/08/liboiron-disaster-data-data-activism_2015.pdf

Meier, Patrick. “Digital Humanitarians, Big Data and Disaster Response,” February 19, 2015, accessed January 3, 2017, https://www.brookings.edu/blog/techtank/2015/02/19/digital-humanitarians-big-data-and-disaster-response/

Metz, Cade. “How Facebook is Transforming Disaster Response,” November 10, 2016, accessed December 17, 2016, https://www.wired.com/2016/11/facebook-disaster-response/

NOAA. “National Centers for Environmental Information,” accessed January 3, 2017. https://www.ncdc.noaa.gov/billions/events

Recovers.org. Accessed December 20, 2016, https://recovers.org/benefits/residents

Schroder, Stan. “Google Launches Crisis Map for Hurricane Sandy,” Mashable, October 29, 2012. accessed December 28, 2016, http://mashable.com/2012/10/29/google-crisis-map-hurricane-sandy/#TPC3q119Yaqb

Shelton, Taylor, Ate Poorthuis, Mark Graham, and Matthew Zook, “Mapping the data shadows of Hurricane Sandy: Uncovering the sociospatial dimensions of ‘big data’,” Geoforum 52 (2014).

Stengel, Geri. “Big Data makes it easy to think globally and act locally,” Forbes, May 25, 2016, accessed December 6, 2016, http://www.forbes.com/sites/geristengel/2016/05/25/big-data-makes-it-easy-to-think-globally-and-act-locally/

Taylor, Chris. “Sandy Really Was Instagram’s Moment: 1.3 Million Pics Posted,” Mashable, November 5, 2012, accessed January 3, 2017, http://mashable.com/2012/11/05/sandy-instagram-record/#bNCMnSR_i8qw

Team Praescient. “Big Data, Technology, and Hurricane Sandy,” November 7, 2012, accessed December 6, 2016, https://praescientanalytics.com/hurricane-sandy/

Zhao, Emmeline. “Hurricane Sandy Gas Station Crisis sees solution from New Jersey High School Students,” The Huffington Post, November 2, 2012, accessed December 8, 2016, http://www.huffingtonpost.com/2012/11/01/hurricane-sandy-gas_n_2061305.html

Chapter 4

LEGAL, REGULATORY AND POLICY FRAMEWORKS IN BIG DATA

By Hoaithi Y.T. Nguyen

In previous chapters, we discussed how big data analytics, in its many facets, is used in emergency response and crisis management. In this chapter, we will discuss some of the legal and ethical considerations of the government’s use of big data, and how the White House and Congress have aimed to address those considerations through their proposed legislation. Lastly, because there is no uniform standard of data sharing within the federal government or with state and local governments, we will present FEMA’s data sharing policy as a guideline for our policy recommendations in the following chapters.

Legal and Ethical Considerations

Today, government entities and corporations can gather huge data sets from many different sources. They use new computing tools to generate new information about individuals by making predictions and drawing inferences about their behaviors from these huge data sets. In the area of disaster recovery and relief, emergency responders use big data to coordinate efforts quickly, correctly and efficiently. In the massive mudslide disaster in Oso, Washington, data gathered from a drone was used to create a 3-D model of the area so emergency responders could plan their rescue in a way that was safe for the rescuers and the residents being evacuated.[55] Applications designed by Facebook and Google to track missing persons during the Nepal earthquake helped keep family and friends informed about their loved ones’ status and also allowed responders and rescuers to identify their locations.[56] During Hurricane Sandy, emergency management teams used big data analytics to prioritize and coordinate their efforts in the areas that were hardest hit and that lost power, and to address the health needs of each community.[57]

However, big data can also be a double-edged sword. Any one government agency or private corporation collecting so much information about the public becomes a more attractive target for data breaches, which can lead to serious privacy violations. The potential for abuse and misuse of big data is also a concern, as society’s prejudices and biases are easily programmed into algorithms, whether intentionally or inadvertently, leading to potential violations of civil rights and unfair competition.[58] “Digital redlining,” a practice whereby online retailers offer different discounts on merchandise based on where the customer is located and their social media profile, has been reported.[59] The recent upheaval over the government’s overreach in surreptitiously collecting data on citizens contributes to concerns about trust in the government’s use of big data. Indeed, in addition to the Snowden revelations, an earlier example of the government’s misuse of data occurred during the Second World War, when Japanese Americans were identified using census data collected under strict guarantees of confidentiality.[60]

Yet, in the United States, there is currently no legislation that specifically regulates the collection or use of big data. The only requirement is that companies and government agencies comply with the privacy laws applicable to the data they are using, such as the Health Insurance Portability and Accountability Act (HIPAA) for health-related data, or the Gramm-Leach-Bliley Act (GLBA) for financial information.[61] They are also required to comply with various state laws in the states where they operate, their own privacy policies and whatever contractual obligations they have with their user base. Undoubtedly, this unnecessarily complex and confusing patchwork of standards calls for a unifying regulatory framework at the federal level.

Proposed Legal Frameworks for the Use of Big Data

In May 2014, the White House released a report that reviewed the impact that big data has and will have on a range of economic, social and governmental activities. It also focused on the federal government’s role in ensuring that our laws evolve in such a way as to protect our values in the face of rapidly developing big data technologies. To that end, the report made the following six recommendations:

1. Advance the Consumer Privacy Bill of Rights by empowering the Department of Commerce to take public comments on possible changes to the Consumer Privacy Bill of Rights that was first proposed by President Obama in 2012. Thereafter, the Department should proceed to prepare draft legislation for consideration by the President and Congress.[62]

2. Because the potential impacts of data breaches are more serious due to the sheer volume and intimate insights into a person’s character, Congress should pass legislation to provide for a single national data breach standard instead of a patchwork of state laws regulating how a data breach must be reported.[63]

3. The Office of Management and Budget (OMB) should work to extend privacy protections to non-U.S. persons because privacy is a worldwide value. If it is not practicable to apply the Privacy Act of 1974 to non-U.S. persons, the OMB should establish alternative privacy policies that apply appropriate and meaningful protections to personal information regardless of a person’s nationality.[64]

4. The government must protect students against their data being shared for profit or inappropriately, by ensuring that data collected on students in schools is used for educational purposes only.[65]

5. The federal government’s civil rights and consumer protection agencies should expand their technical expertise to stop discriminatory practices such as digital redlining. They should also develop a plan for investigating and prosecuting violations of law that result from algorithm-driven decision making with a discriminatory impact on protected classes.[66]

6. Congress should amend the Electronic Communications Privacy Act (ECPA). Originally passed in 1986, before email, the Internet, and cloud computing became ubiquitous, the ECPA draws distinctions between different types of data, and between different ways of storing data, that are now outdated.[67]

In response to the report’s first recommendation, the White House released its proposed Consumer Privacy Bill of Rights Act of 2015 to “provide consumers with clear rights to exercise individual control over data.”[68] Among other things, the Act sought to establish principles that consumers should receive clear, up-front, plain-language notices of how their information will be collected, used, and shared; be able to see and correct data held by a company, similar to the requirements for credit reports; and be able to cancel their accounts and have the opportunity to remove their data if they so choose.[69] However, the proposed legislation stopped short of giving the Federal Trade Commission (FTC) the power to set regulations enforcing these principles. Instead, the FTC would only be empowered to sign off on rules that companies set for themselves.

Unsurprisingly, privacy advocates argued that the Act did not go far enough in protecting consumers. “The legislation creates a huge loophole that practically eviscerates any real privacy protection and consumer control of their data,” stated the Center for Digital Democracy. They also pointed out that the Act would override state laws that offer stronger protections, effectively curbing consumers’ existing privacy rights.[70] While such criticism may have been superfluous, since the proposal had little chance of becoming law as written, it did prompt Congress to act. By May 2015, a group of high-profile Democratic senators, led by Senator Patrick Leahy of Vermont, had introduced the Consumer Privacy Protection Act. This proposed legislation established federal standards for notifications when consumer data is lost or stolen, kept state privacy laws in force, and expanded the definition of private information. Again, while consumer groups supported the proposal, it too had little chance of passing, since not a single Republican senator co-sponsored the bill.[71]

Beyond the specifics of the two proposed bills, other critics argue that any laws and regulations focused on limiting the collection and controlling the retention of data are outdated and misguided. Instead, a new regulatory framework should focus on controlling data at the point of use: all personal data would be annotated at its point of origin and placed within a metadata “wrapper” describing its content. Corporations or government agencies would then need to seek approval from regulators, who would impose mandatory auditing requirements and penalties on those who misuse the data. The penalties would need to be sufficiently higher than the cost of doing business to serve as deterrents against privacy violations.[72] Still others suggest that private contracts between the companies collecting the data and the end users are the best way of ensuring accountability: compared to the rate of technological growth, government is too slow, and regulations will be outdated as soon as they are enacted.[73]
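To make the use-based “wrapper” idea concrete, the sketch below shows one possible way such a wrapper might look in practice: the personal data is annotated at its point of origin with content tags and permitted uses, and every access attempt is written to an audit log. The class and field names (DataWrapper, request_use, and so on) are purely illustrative assumptions for this sketch; they are not part of any proposal, law, or existing system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DataWrapper:
    """Hypothetical metadata wrapper attached to personal data at its point of origin."""
    payload: dict                 # the personal data itself
    origin: str                   # where the data was collected
    content_tags: list            # what kinds of information the payload contains
    permitted_uses: list          # uses approved by a regulator or the data subject
    audit_log: list = field(default_factory=list)

    def request_use(self, requester: str, purpose: str) -> bool:
        """Record every access attempt and allow it only for permitted purposes."""
        allowed = purpose in self.permitted_uses
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "requester": requester,
            "purpose": purpose,
            "allowed": allowed,
        })
        return allowed

# Example: disaster-assistance data wrapped at registration time (illustrative values).
record = DataWrapper(
    payload={"name": "J. Doe", "address": "123 Shore Rd"},
    origin="disaster registration intake",
    content_tags=["PII", "contact-information"],
    permitted_uses=["disaster-response-coordination"],
)

print(record.request_use("utility-company", "disaster-response-coordination"))  # True
print(record.request_use("marketing-firm", "targeted-advertising"))             # False, logged for audit
```

In this model, the audit log, rather than up-front collection limits, is what a regulator would inspect when imposing penalties for misuse.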

There is more than a grain of truth in that last point. As of this writing, neither of the two proposed bills has made meaningful progress. With a new presidential administration due to take office in a matter of weeks, it is safe to assume that there will be no movement on this issue in the immediate future.

Data Sharing in the Area of Natural Disasters

With no comprehensive standard governing the use and sharing of data currently or in the near future, it is necessary to give due weight and consideration to management standards and policy promulgated by the agencies themselves.

The Federal Emergency Management Agency (FEMA) will share recovery data (FEMA-collected disaster assistance data) with trusted partners as authorized by the Privacy Act and FEMA policy. Trusted partners are broken down into different groups: (A) other federal government agencies; (B) state and tribal governments; (C) local governments and voluntary organizations; (D) utility companies, hospitals, and health care providers; (E) voluntary organizations able to provide medical devices or assistive technology that have prior relationships with FEMA; (F) other entities able to provide medical devices or assistive technology that do not have a prior relationship with FEMA; and (G) private businesses that employ disaster survivors.[74]

According to the policy, FEMA limits the data it shares with the general public to non-Personally Identifiable Information (non-PII) and non-Sensitive Personally Identifiable Information (non-SPII). As part of the OpenFEMA initiative, for every natural disaster for which the President authorizes Individual Assistance, FEMA releases aggregated non-PII and non-SPII data in the areas of housing assistance, inspections and management, and registration intake and helplines.
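As a minimal sketch of how a researcher or relief organization might pull this public, aggregate data, the example below queries an OpenFEMA endpoint with Python’s requests library. The dataset name (HousingAssistanceOwners), the OData-style query parameters, and the shape of the JSON response are assumptions based on OpenFEMA’s published conventions and should be verified against the current OpenFEMA API documentation before use.

```python
# Sketch only: fetch aggregate OpenFEMA housing-assistance records for one disaster.
import requests

BASE = "https://www.fema.gov/api/open/v2/HousingAssistanceOwners"

params = {
    "$filter": "disasterNumber eq 4086",  # example disaster declaration number
    "$top": 100,                          # limit the number of returned records
}

resp = requests.get(BASE, params=params, timeout=30)
resp.raise_for_status()

# OpenFEMA responses are assumed here to nest records under a key named after the dataset.
records = resp.json().get("HousingAssistanceOwners", [])
print(f"Retrieved {len(records)} aggregate housing-assistance records")
```

Because these releases are already aggregated and stripped of PII and SPII, they can be combined freely with crowdsourced data without triggering the trusted-partner restrictions discussed next.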

The policy permits FEMA to share PII and SPII with trusted partners; however, the scope of the data shared depends on the group to which the trusted partner belongs. For example, as trusted partners in Group A, other federal agencies may have access to all FEMA Recovery Data in active as well as historic disasters. This data includes registration records, eligibility determinations, correspondence, survey responses, and pre-registration information of disaster survivors anywhere in the United States.[75] State and tribal governments (Group B) have access to the same data only for their own state or tribal area.[76] Local governments and voluntary organizations (Group C) do not have access to the same data that federal, state, and tribal governments do. The types of data available to Group C include name, contact information, inspected loss amount, amounts received, award category, Small Business Administration (SBA) loan status, and initial pre-registration information.[77]
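The tiered access just described can be illustrated with a short sketch that filters a survivor record according to the requesting group. The group letters follow FEMA Recovery Policy 9420.1 as summarized above, but the field lists are abbreviated and the filtering function is a hypothetical illustration, not part of any FEMA system.

```python
# Illustrative sketch of tiered trusted-partner access to survivor recovery data.
from typing import Optional

# Fields each group may see (abbreviated; None means the full record).
GROUP_FIELDS = {
    "A": None,  # other federal agencies: all recovery data, all disasters
    "B": None,  # state/tribal governments: all recovery data, own jurisdiction only
    "C": {"name", "contact_information", "inspected_loss_amount",
          "amounts_received", "award_category", "sba_loan_status",
          "pre_registration_information"},
}

def share_record(record: dict, group: str, partner_state: Optional[str] = None) -> dict:
    """Return only the fields the requesting trusted-partner group is allowed to see."""
    if group == "B" and record.get("state") != partner_state:
        return {}  # Group B is limited to its own state or tribal area
    allowed = GROUP_FIELDS.get(group)
    if allowed is None:
        return dict(record)                                   # Groups A and B: full record
    return {k: v for k, v in record.items() if k in allowed}  # Group C: filtered subset

survivor = {
    "name": "J. Doe",
    "state": "NJ",
    "contact_information": "555-0100",
    "inspected_loss_amount": 18200,
    "eligibility_determination": "approved",  # not shareable with Group C
}

print(share_record(survivor, "C"))                      # filtered subset only
print(share_record(survivor, "B", partner_state="NY"))  # {} - outside jurisdiction
```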

Conclusion

While the focus of this white paper is the government’s use of big data to improve its natural disaster emergency response services, a wide range of other government agencies conducting other important work of governance and service delivery encounter the same legal and ethical issues discussed in this chapter. FEMA’s data sharing policy seems to strike the right balance between the competing interests of protecting individuals’ privacy and advancing organizations’ goal of using big data to improve service delivery. FEMA’s policy recognizes that the notice-and-consent framework no longer adequately addresses privacy concerns, because realistically no one reads the pages of fine print typical of these notices. Thus, the context of data use must be taken into consideration when formulating a new regulatory scheme for the collection, use, and sharing of big data. Additionally, FEMA’s policy considers which organizations (trusted partners) are requesting the data. This individualized and measured approach should serve as a model for other agencies while we wait for a uniform regulatory standard promulgated by either Congress or the FTC.

BIBLIOGRAPHY

Bollier, David, and Charles M. Firestone. The Promise and Peril of Big Data. Aspen Institute, Communications and Society Program Washington, DC, 2010. http://23.66.85.199/collateral/analyst-reports/10334-ar-promise-peril-of-big-data.pdf.

Buckley, Jonathan. “Big Data’s Role in Crisis Response and Recovery.” Dataflog, October 14, 2016. https://datafloq.com/read/Big-Data-Role-Crisis-Response-Recovery/1581.

Executive Office of the President. “Big Data: Seizing Opportunities; Preserving Our Values,” May 2014. https://www.whitehouse.gov/sites/default/files/docs/big_data_privacy_report_5.1.14_final_print.pdf.

“FEMA Recovery Policy 9420.1, Secure Data Sharing — Recovery Policy Sharing Survivor Data with Trusted Partners,” September 9, 2013. https://www.fema.gov/media-library-data/1407352097991-13785338d2ae0606987b2259cab33fa0/Recovery%20Policy%20Sharing%20Survivor%20Data%20with%20Trusted%20Partners%20090913.pdf.

Klosek, Jacqueline. “Regulation of Big Data in the United States.” Taylor Wessing. Global Data Hub, July 2014. https://united-kingdom.taylorwessing.com/globaldatahub/article_big_data_us_regs.html.

Mundie, Craig. “Privacy Pragmatism.” Foreign Affairs. Accessed December 29, 2016. https://www.foreignaffairs.com/articles/2014-02-12/privacy-pragmatism.

Podesta, John. “Findings of the Big Data and Privacy Working Group Review | Whitehouse.gov.” White House. Accessed December 27, 2016. https://www.whitehouse.gov/blog/2014/05/01/findings-big-data-and-privacy-working-group-review.

Sasso, Brendan. “Obama’s ‘Privacy Bill of Rights’ Gets Bashed from All Sides.” The Atlantic, February 27, 2015. http://www.theatlantic.com/politics/archive/2015/02/obamas-privacy-bill-of-rights-gets-bashed-from-all-sides/456576/.

Singer, Natasha. “White House Proposes Broad Consumer Data Privacy Bill.” The New York Times, February 27, 2015. http://www.nytimes.com/2015/02/28/business/white-house-proposes-broad-consumer-data-privacy-bill.html?_r=0.

Sullivan, Bob. “Will the New Consumer Privacy Bill Protect You?” Money, May 1, 2015. http://time.com/money/3843230/consumer-privacy-protection-act-bill-identity-credit/.

Taglang, Kevin. “What Do the Consumer Privacy Bill of Rights and Net Neutrality Have in Common?” Benton Foundation, March 6, 2015. https://www.benton.org/blog/privacy-and-title-ii.

Valentino-DeVries, Jennifer, and Jeremy Singer-Vine. “Websites Vary Prices, Deals Based on Users’ Information.” Wall Street Journal, December 24, 2012. http://www.wsj.com/news/articles/SB10001424127887323777204578189391813881534.

Chapter 5

COORDINATING CROWDSOURCING WITH GOVERNMENT BIG DATA TO IMPROVE DISASTER RESPONSE-STRATEGY AND AGENCY CONSIDERATIONS

By Anna Brookes

“Social computing” is defined as the “use of technology in networked communication systems by communities of people for one or more goal.”[78] The concept has been identified as an area ripe for further attention and investment by the Obama administration, especially for its potential to mobilize citizens to address national priorities in health, public safety, and science.[79]

One of these areas is disaster response. Disaster response is considered a societal Grand Challenge by the President’s Council of Advisors on Science and Technology (PCAST).[80] “Big Data can help in all four phases of disaster management: prevention, preparedness, response, and recovery.”[81] “But, history has shown that advances in wireless networks, unmanned systems, embedded sensors, pattern recognition, surface reconstruction, data fusion, and scheduling algorithms have not necessarily resulted in usable information or better decisions, rather they often have created an unmanageable data avalanche.”[82]

Historically, disaster response at the federal level has been overseen by the Federal Emergency Management Agency (FEMA), part of the Department of Homeland Security (DHS). Under the Robert T. Stafford Act, following a Presidential Declaration of Emergency, FEMA coordinates emergency funding and physical support by activating agencies with disaster programs, such as the SBA, or specific programs, such as those of the US Department of Housing and Urban Development (HUD), that can assist. FEMA is frequently criticized for slow response times that leave disaster survivors in danger or without critical resources, especially in major disasters like Hurricane Katrina. The traditional view of FEMA is of a federal agency hampered by “red tape.” A challenge, then, is to create an agile, real-time information framework that enables quicker response to and recovery from disasters in the United States. Cue crowdsourcing on social media…

We chose to look at the success of social media in disaster response through social computing and to explore adapting crowdsourcing for use in “pop-up” applications for individual disasters. Coordinating data from sources such as mapping, census figures, weather, and sensing instruments with posts, photos, and entries on sites such as Twitter and Facebook enables confirmation of disaster damage, exposes pockets of destruction that may have gone unnoticed, and prevents duplication of effort by government and NGO relief agencies and responders. Combining this information with verified big data maintained by government agencies can provide the accuracy and confirmation missing from social media. By providing a basic framework to coordinate the response effort and protect privacy and data, the government can partner with capable crowds. Crowdsourcing allows these groups “to participate in various tasks, from simply ‘validating’ a piece of information or photograph as worthwhile to complicated editing and management, such as those found in virtual communities.”[83]
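As an illustration of the kind of confirmation described above, the sketch below cross-references geotagged crowdsourced reports against an authoritative, government-supplied list of affected areas (for example, grid cells derived from a modeled surge or flood extent). The report format, the 0.01-degree grid cells, and the reference set are illustrative assumptions, not a description of an existing system.

```python
# Minimal sketch: confirm crowdsourced damage reports against authoritative data.
from collections import Counter

def grid_cell(lat: float, lon: float, size: float = 0.01) -> tuple:
    """Snap a coordinate to a coarse grid cell (roughly 1 km at mid-latitudes)."""
    return (round(lat / size) * size, round(lon / size) * size)

# Geotagged social media reports (e.g., parsed from Twitter or Facebook posts).
crowd_reports = [
    {"text": "street flooded, need pump", "lat": 40.582, "lon": -73.816},
    {"text": "power lines down here",      "lat": 40.583, "lon": -73.815},
    {"text": "house fire after surge",     "lat": 40.701, "lon": -73.990},
]

# Cells flagged as affected in a government dataset (e.g., modeled surge extent).
government_affected_cells = {grid_cell(40.58, -73.82)}

counts = Counter(grid_cell(r["lat"], r["lon"]) for r in crowd_reports)
for cell, n in counts.items():
    status = "confirmed" if cell in government_affected_cells else "unverified"
    print(f"cell {cell}: {n} report(s), {status}")
```

Reports that cluster in cells the government data also flags are treated as confirmed; clusters outside those cells point responders toward damage that may otherwise have gone unnoticed.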

The concept of using crowdsourcing to assist in disaster relief is not new. Crisis mapping, the use of social media combined with mapping programs to draw a picture of the damage caused by a disaster, has been used internationally, including after the 2010 earthquake and the 2016 hurricane in Haiti. In the United States, crowdsourcing was leveraged to provide more timely assistance to survivors of Superstorm Sandy in 2012. Anna Schermerhorn-Collins provides details of the Superstorm Sandy response in a chapter discussing the successes and challenges of using social media in disaster relief.

Using crowdsourcing for disaster relief has several advantages. First, crowdsourcing enables resiliency, one of the fundamental strategies of homeland security as promulgated by DHS.[84] In practice, DHS promotes three resiliency concepts: adapting to changing conditions, withstanding disruptions, and ensuring rapid recovery.[85] The American Red Cross’ definition of resilience reflects the importance of community interaction in preparation, prevention, response, and recovery. The ARC is a congressionally mandated NGO for disaster response and has an extensive network of chapters that train volunteers to prevent, prepare for, and respond to disasters throughout the United States. The ARC defines a resilient community as “…one that possesses the physical, psychological, social and economic capacity to withstand, quickly adapt and successfully recover from a disaster.”[86] Citizens, both survivors and those unaffected by the disaster, have always been an integral part of the response to disasters, and crowdsourcing expands this pool to include people who may be physically far from the disaster location. Disaster survivors recover more quickly if they are resilient. This whole-community approach is especially applicable to our project: crowdsourcing is a function of a social media community, a community that can be global.

Second, crowdsourced data is collected and processed almost immediately after an event. This near-real-time information can provide important parameters for tailoring the disaster response.[87] This immediacy far outpaces any information collection the government can provide.

Third, crowdsourcing tools can collect unstructured data and, through rudimentary analysis, partition it into bins and prioritize it as it is received.[88]

Finally, crowdsourced data, when combined with geo-tag information, can further help relief agencies and responders target their efforts more effectively. Typically, this is done through crisis mapping.[89]
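The third and fourth advantages can be illustrated together with a small sketch that bins raw crowdsourced messages using simple keyword rules, orders the bins by urgency, and tallies geo-tagged counts per map cell for a rudimentary crisis map. The keyword lists, the priority order, and the message format are illustrative assumptions only, not a real taxonomy or product.

```python
# Illustrative only: bin unstructured crowdsourced messages and build a crisis-map tally.
from collections import defaultdict

BINS = {                       # keyword rules, checked in insertion order
    "medical":        ["injured", "trapped", "medical"],
    "rescue":         ["stranded", "rescue", "water rising"],
    "infrastructure": ["power", "gas", "road closed"],
    "shelter":        ["shelter", "food", "water needed"],
}
PRIORITY = ["medical", "rescue", "infrastructure", "shelter", "other"]

def classify(text: str) -> str:
    """Assign a message to the first bin whose keywords appear in the text."""
    lowered = text.lower()
    for bin_name, keywords in BINS.items():
        if any(k in lowered for k in keywords):
            return bin_name
    return "other"

messages = [
    {"text": "Two people trapped on roof, water rising", "cell": (40.58, -73.82)},
    {"text": "No power on our block since last night",   "cell": (40.58, -73.82)},
    {"text": "Need shelter for family of four",          "cell": (40.70, -73.99)},
]

crisis_map = defaultdict(lambda: defaultdict(int))
for m in messages:
    crisis_map[m["cell"]][classify(m["text"])] += 1

# Report each map cell's needs, most urgent bins first.
for cell, tallies in crisis_map.items():
    ordered = sorted(tallies.items(), key=lambda kv: PRIORITY.index(kv[0]))
    print(cell, ordered)
```

In practice the binning would use far richer natural language processing, but even rudimentary rules like these let responders see, per location, which needs are most urgent as reports stream in.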

The challenges that crowdsourcing disaster relief faces center on security and on coordination and collaboration among relief agencies, programs, and the government.[90] While security will always present a problem, Mike Davis, in his chapter, outlines strategies that can lessen the risk. The collaboration challenge is the one we feel our program can best address while still maintaining the spontaneity of the social media community. We hesitate to use terms like “harness” and “capture” because such words sound ominous when it comes to government involvement in any project. Coordinating crowdsourced information specific to an individual disaster on a temporary platform, designed to be taken down when the initial recovery phase has ended, allows the freedom and impulsive nature of the format to be maintained. Coordination can include further organization of the crowdsourced data and comparison with government big data to confirm locations or other identifying information. What it will not involve is the indiscriminate collection of information for long-term storage. Enabling a “pop-up” type of data collection and coordination, rather than a long-term program, helps ensure that the government acts in the best interests of survivors’ privacy, based on disaster-related needs.
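A minimal sketch of the “pop-up” lifecycle described above follows: a platform is stood up for a single declared disaster, ingests crowdsourced reports only while the response is active, and purges its stores when it is torn down. The class, method names, and example identifier are illustrative assumptions, not a design specification.

```python
# Illustrative lifecycle of a disaster-specific "pop-up" collection platform:
# activate, ingest during response, then tear down and purge.
from datetime import datetime, timezone

class PopUpPlatform:
    def __init__(self, disaster_id: str):
        self.disaster_id = disaster_id
        self.active = False
        self.reports = []           # crowdsourced reports held only while active

    def activate(self):
        """Stand the platform up for a single declared disaster."""
        self.active = True
        print(f"[{self.disaster_id}] activated at {datetime.now(timezone.utc).isoformat()}")

    def ingest(self, report: dict):
        """Accept crowdsourced reports only during the active response phase."""
        if not self.active:
            raise RuntimeError("platform is not active; no data is collected")
        self.reports.append(report)

    def teardown(self):
        """Dismantle the platform and purge stored reports once initial recovery ends."""
        self.active = False
        purged = len(self.reports)
        self.reports.clear()
        print(f"[{self.disaster_id}] torn down; {purged} reports purged")

platform = PopUpPlatform("DR-4086")   # example declaration identifier
platform.activate()
platform.ingest({"text": "basement flooded", "cell": (40.58, -73.82)})
platform.teardown()
```

The purge on teardown is the design choice that distinguishes this approach from a standing surveillance program: nothing is retained beyond the disaster-related need.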

The United States is unusual in the kinds of data it collects and in its relative freedom of expression and lack of comprehensive privacy regulation. Deanna Kralick notes in her chapter on existing policy that the United States “…is actually quite notable for not having adopted comprehensive laws for protection of data when compared to Europe, where most countries have done so.”[91] This gives a program like ours a distinct advantage, as long as the information is not abused or used for other purposes.

The Department of Homeland Security is an umbrella agency encompassing vast spectrums of data, some sensitive, some not. FEMA, which falls under the DHS umbrella, is responsible for coordinating disaster response and has established the following principles as fundamental doctrine for the response mission area: engaged partnership; tiered response; scalable, flexible, and adaptable operational capabilities; unity of effort through unified command; and readiness to act.[92] Crowdsourced information fits nicely with this mandate.

Agencies that, in addition to FEMA, actively participate in disaster response and have access to verifying information include the US Department of Housing and Urban Development, the US Department of Commerce, the Small Business Administration, and the Department of Defense. Under President Obama, various technology initiatives created the Networking and Information Technology Research and Development (NITRD) Program. While not directly involved in disaster response, programs like this should be involved in developing this platform and in managing the cybersecurity measures and policy that govern it.

Crowdsourcing disaster information through a “pop-up” platform that coordinates response and organizes information is but another tool for disaster response and recovery. Our program does not supplant those already in place. Rather, it provides additional information, coordinated to provide timely information for efficient and effective response and recovery.

BIBLIOGRAPHY

American Red Cross. Community Resilience Strategy. http://www.redcross.org/images/MEDIA_CustomProductCatalog/m33840096_1-CommunityResilienceStrategy_Jacquie_Yannacci.pdf

Department of Homeland Security, “Our Mission”, https://www.dhs.gov/our-mission.

_________ “Resilience”, https://www.dhs.gov/topic/resilience.

Huiji Gao, Geoffrey Barbier and Rebecca Goolsby. “Harnessing the Crowdsourcing Power of Social Media for Disaster Relief.” IEEE Intelligent Systems, May/June 2011. 11.

Mener, Andrew S., “Disaster Response in the United States of America: An Analysis of the Bureaucratic and Political History of a Failing System” 10 May 2007. CUREJ: College Undergraduate Research Electronic Journal, University of Pennsylvania, http://repository.upenn.edu/curej/63

National Science Foundation and Japan Science and Technology Agency, “Big Data and Disaster Management: A Report from the JST/NSF Joint Workshop,” Arlington, VA, May 2013.

NITRD. 2016. The Federal Big Data Research and Development Strategic Plan. The Networking and Information Technology Research and Development Program. https://www.whitehouse.gov/sites/default/files/microsites/ostp/NSTC/bigdatardstrategicplan-nitrd_final-051916.pdf

President’s Council of Advisors on Science and Technology, NIT for Resilient Physical Systems, 2007, President’s Council of Advisors on Science and Technology, Executive Office of the President.

Chapter 6

AN OVERVIEW OF CYBERSECURITY CHALLENGES

By Michael Davis

Databases, whether controlled by the government or by private entities, are a large target for hackers. A recent example is the hack of the Office of Personnel Management’s database, in which millions of security clearance files were exfiltrated by persons associated with the Chinese government.[93] Currently, there are hearings on Capitol Hill on the role of the Russian government in releasing hacked information during the 2016 presidential campaign.[94]

Our team foresees multiple challenges to the information within our database. There are obvious threats from adversaries such as foreign governments, organized criminal groups, and individual hackers. One example of a cyber-attack that could have a large negative effect on our data is ransomware. The United States Computer Emergency Readiness Team (US-CERT) defines ransomware as malicious software that encrypts or locks data contained within an IT system, usually introduced through spearphishing emails. The encryption is usually accompanied by a demand for payment via wire transfer or a cryptocurrency such as bitcoin.[95]

An attacker could use an attack such as ransomware to deny access to vital information during a crisis. A hacker could also use a variety of cyber-attacks to change or modify database information, resulting in false project analysis. Due to the high threat environment of government systems connected to the int

[1] A loaded statement depending on whether or not you recognize the concept of global warming. As this paper was produced before the inauguration, global warming exists and is trending towards higher sea levels, etc.

[2] Andrew S. Mener, “Disaster Response in the United States of America: An Analysis of the Bureaucratic and Political History of a Failing System” 10 May 2007. CUREJ: College Undergraduate Research Electronic Journal, University of Pennsylvania, http://repository.upenn.edu/curej/63. 5.

[3]Ibid., 1.

[4] Aelius_Stilo@yahoo.com, “Essays on Greek History and Culture and Later Byzantine Empire: Library of Alexandria,” 2016, http://penelope.uchicago.edu/~grout/encyclopaedia_romana/greece/paganism/library.html.

[5] Executive Office of the President, “Big Data: Seizing Opportunities, Preserving Values,” May 2014, 1, https://www.whitehouse.gov/sites/default/files/docs/big_data_privacy_report_may_1_2014.pdf.

[6]Ibid., 1.

[7] Executive Office of the President, “Big Data: Seizing Opportunities, Preserving Values,” 11.

[8] Ibid..

[9] Ibid., 12.

[10] Ibid., 33.

[11] Adelaide O’Brien, “The Impact of Big Data on Government,” October 2012, 6, http://www.ironmountain.com/Knowledge-Center/Reference-Library/View-by-Document-Type/White-Papers-Briefs/Sponsored/IDC/The-Impact-of-Big-Data-on-Government.aspx.

[12]Ibid., 7.

[13] Executive Office of the President, “Big Data: Seizing Opportunities, Preserving Values,” 34.

[14] NITRD. 2016. The Federal Big Data Research and Development Strategic Plan. The Networking and Information Technology Research and Development Program. https://www.whitehouse.gov/sites/default/files/microsites/ostp/NSTC/bigdatardstrategicplan-nitrd_final-051916.pdf

[15] http://www2.datainnovation.org/2014-ostp-big-data-cdi.pdf

[16] NITRD. 2016. The Federal Big Data Research and Development Strategic Plan. The Networking and Information Technology Research and Development Program. https://www.whitehouse.gov/sites/default/files/microsites/ostp/NSTC/bigdatardstrategicplan-nitrd_final-051916.pdf

[17] Xu, Zheng, Yunhuai Liu, Neil Yen, Lin Mei, Xiangfeng Luo, Xiao Wei, and Chuanping Hu. “Crowdsourcing based description of urban emergency events using social media big data.” (2016)

[18] 2014. Slate News. Five Lessons from Iceland’s Failed Crowdsourced Constitution. http://www.slate.com/articles/technology/future_tense/2014/07/five_lessons_from_iceland_s_failed_crowdsourced_constitution_experiment.html

[19] Greenleaf, Graham. “Global Data Privacy Laws: 89 Countries, and Accelerating”. Social Science Electronic Publishing, Inc. Retrieved 16 February 2014.

[20] Diepenbrock, George. 2017. The University of Kentucky website. http://news.ku.edu/2017/01/03/economist-testify-presidential-commission-big-data-policy-decisions

[21] Ibid.

[22] Ibid.

[23] A. L. Cowan, J. Goldstein, J. D. Goodman, J. Keller, and D. Silva, “Hurricane Sandy’s deadly toll.” The New York Times, November 17, 2012, accessed December 19, 2016. www.nytimes.com/2012/11/18/nyregion/hurricane-sandys-deadly-toll.html

[24] https://www.ncdc.noaa.gov/billions/events, accessed January 3, 2017

[25] Hurricane Predictions and Big Data, November 8, 2012, accessed December 6, 2016. http://www.fyisolutions.com/blog/hurricane-predictions-and-big-data/

[26] ibid

[27] Brian Heaton, “How emergency managers can benefit from big data.” Emergency Management (2013).

[28] Stan Schroder, “Google Launches Crisis Map for Hurricane Sandy,” Mashable, October 29, 2012, accessed December 28, 2016, http://mashable.com/2012/10/29/google-crisis-map-hurricane-sandy/#TPC3q119Yaqb

[29] “Hurricane Sandy Relief & Recovery,” Direct Relief, accessed December 6, 2016, https://www.directrelief.org/emergency/hurricane-sandy-relief-and-recovery/

[30] Brian T. Horowitz, “Big Data Analytics, HIE Could Aid Hurricane Sandy Recovery Efforts,” eWeek, December 30, 2012, accessed December 31, 2016, http://www.eweek.com/enterprise-apps/big-data-analytics-hie-could-aid-hurricane-sandy-recovery-efforts

[31] “Hurricane Sandy: TheAfterMap,” Esri, accessed January 3, 2017, http://www.esri.com/services/disaster-response/hurricanes

[32] “Big Data, Technology, and Hurricane Sandy,” Team Praescient, November 7, 2012, accessed December 6, 2016, https://praescientanalytics.com/hurricane-sandy/

[33] Brian T. Horowitz, “Big Data Analytics, HIE Could Aid Hurricane Sandy Recovery Efforts,” eWeek, December 30, 2012, accessed December 31, 2016, http://www.eweek.com/enterprise-apps/big-data-analytics-hie-could-aid-hurricane-sandy-recovery-efforts

[34] Patrick Meier, “Digital Humanitarians, Big Data and Disaster Response,” February 19, 2015, accessed January 3, 2017, https://www.brookings.edu/blog/techtank/2015/02/19/digital-humanitarians-big-data-and-disaster-response/

[35] Chris Taylor, “Sandy Really Was Instagram’s Moment: 1.3 Million Pics Posted,” Mashable, November 5, 2012, accessed January 3, 2017, http://mashable.com/2012/11/05/sandy-instagram-record/#bNCMnSR_i8qw

[36] Brian Heaton, “How emergency managers can benefit from big data.” Emergency Management (2013).

[37] Huiji Gao, Geoffrey Barbier, Rebecca Goolsby, and Daniel Zeng, Harnessing the crowdsourcing power of social media for disaster relief. Arizona State Univ Tempe, 2011: 11.

[38] Emmeline Zhao, “Hurricane Sandy Gas Station Crisis sees solution from New Jersey High School Students,” The Huffington Post, November 2, 2012, accessed December 8, 2016, http://www.huffingtonpost.com/2012/11/01/hurricane-sandy-gas_n_2061305.html

[39] Geri Stengel, “Big Data makes it easy to think globally and act locally,” Forbes, May 25, 2016, accessed December 6, 2016, http://www.forbes.com/sites/geristengel/2016/05/25/big-data-makes-it-easy-to-think-globally-and-act-locally/

[40] ibid

[41] “Big Data, Technology, and Hurricane Sandy,” Team Praescient, November 7, 2012, accessed December 6, 2016, https://praescientanalytics.com/hurricane-sandy/

[42] Allison Kilkenny, “Occupy Sandy Efforts Highlight Need for Solidarity, Not Charity” The Nation, November 5, 2012, accessed December 6, 2016. https://www.thenation.com/article/occupy-sandy-efforts-highlight-need-solidarity-not-charity/

[43] Recovers.org, accessed 12/20/2016, https://recovers.org/benefits/residents

[44] Alan Feuer, “Occupy Sandy: A movement moves to relief.” The New York Times, November 11, 2012, accessed November 17, 2016, http://www.nytimes.com/2012/11/11/nyregion/where-fema-fell-short-occupy-sandy-was-there.html

[45] Allison Kilkenny, “Occupy Sandy Efforts Highlight Need for Solidarity, Not Charity” The Nation, November 5, 2012, accessed December 6, 2016. https://www.thenation.com/article/occupy-sandy-efforts-highlight-need-solidarity-not-charity/

[46] Max Liboiron, “Disaster Data, Data Activism,” in Extreme Weather and Global Media, ed. Julia Leyda and Diane Negra (Routledge), 2015: 155. https://maxliboiron.files.wordpress.com/2013/08/liboiron-disaster-data-data-activism_2015.pdf

[47] Max Liboiron, “Data activism: Occupy Sandy’s canvassing practices after Hurricane Sandy,” Superstorm Research Lab, August 11, 2014, accessed December 6, 2016, https://superstormresearchlab.org/2014/08/11/data-activism-occupy-sandys-canvassing-practices-after-hurricane-sandy/

[48] #OCCUPYDATA NYC, accessed December 19, 2016, http://occupydatanyc.org/

[49] Derrick Harris, “As Sandy strikes, another big data opportunity emerges,” Gigaom, October 30, 2012, accessed December 31, 2016, https://gigaom.com/2012/10/30/as-sandy-strikes-another-big-data-opportunity-emerges/

[50] ibid

[51] Taylor Shelton, Ate Poorthuis, Mark Graham, and Matthew Zook, “Mapping the data shadows of Hurricane Sandy: Uncovering the sociospatial dimensions of ‘big data’,” Geoforum 52 (2014): 169.

[52] Cade Metz, “How Facebook is Transforming Disaster Response,” November 10, 2016, accessed December 17, 2016, https://www.wired.com/2016/11/facebook-disaster-response/

[53] Max Liboiron, “Disaster Data, Data Activism,” in Extreme Weather and Global Media, ed. Julia Leyda and Diane Negra (Routledge), 2015: 158. https://maxliboiron.files.wordpress.com/2013/08/liboiron-disaster-data-data-activism_2015.pdf

[54] Huiji Gao, Geoffrey Barbier, Rebecca Goolsby, and Daniel Zeng, Harnessing the crowdsourcing power of social media for disaster relief. Arizona State Univ Tempe, 2011: 11.

[55] Jonathan Buckley, “Big Data’s Role in Crisis Response and Recovery,” Dataflog, October 14, 2016, https://datafloq.com/read/Big-Data-Role-Crisis-Response-Recovery/1581.

[56] Ibid.

[57] Ibid.

[58] David Bollier and Charles M. Firestone, The Promise and Peril of Big Data (Aspen Institute, Communications and Society Program Washington, DC, 2010), http://23.66.85.199/collateral/analyst-reports/10334-ar-promise-peril-of-big-data.pdf.

[59] Jennifer Valentino-DeVries and Jeremy Singer-Vine, “Websites Vary Prices, Deals Based on Users’ Information,” Wall Street Journal, December 24, 2012, http://www.wsj.com/news/articles/SB10001424127887323777204578189391813881534.

[60] Executive Office of the President, “Big Data: Seizing Opportunities; Preserving Our Values,” May 2014, p. 22. https://www.whitehouse.gov/sites/default/files/docs/big_data_privacy_report_5.1.14_final_print.pdf.

[61] Jacqueline Klosek, “Regulation of Big Data in the United States,” Taylor Wessing, Global Data Hub, (July 2014), https://united-kingdom.taylorwessing.com/globaldatahub/article_big_data_us_regs.html.

[62] Executive Office of the President, “Big Data: Seizing Opportunities; Preserving Our Values,” p. 60.

[63] Ibid.

[64] Ibid.

[65] Ibid.

[66] Ibid.

[67] John Podesta, “Findings of the Big Data and Privacy Working Group Review | Whitehouse.gov” (White House), accessed December 27, 2016, https://www.whitehouse.gov/blog/2014/05/01/findings-big-data-and-privacy-working-group-review.

[68] Natasha Singer, “White House Proposes Broad Consumer Data Privacy Bill,” The New York Times, February 27, 2015, http://www.nytimes.com/2015/02/28/business/white-house-proposes-broad-consumer-data-privacy-bill.html?_r=0.

[69] Kevin Taglang, “What Do the Consumer Privacy Bill of Rights and Net Neutrality Have in Common?,” Benton Foundation, March 6, 2015, https://www.benton.org/blog/privacy-and-title-ii.

[70] Brendan Sasso, “Obama’s ‘Privacy Bill of Rights’ Gets Bashed from All Sides,” The Atlantic, February 27, 2015, http://www.theatlantic.com/politics/archive/2015/02/obamas-privacy-bill-of-rights-gets-bashed-from-all-sides/456576/.

[71] Bob Sullivan, “Will the New Consumer Privacy Bill Protect You?,” Money, May 1, 2015, http://time.com/money/3843230/consumer-privacy-protection-act-bill-identity-credit/.

[72] Craig Mundie, “Privacy Pragmatism,” Foreign Affairs, accessed December 29, 2016, https://www.foreignaffairs.com/articles/2014-02-12/privacy-pragmatism.

[73] Bollier and Firestone, The Promise and Peril of Big Data.

[74] “FEMA Recovery Policy 9420.1, Secure Data Sharing — Recovery Policy Sharing Survivor Data with Trusted Partners,” September 9, 2013, pp. 5–6, https://www.fema.gov/media-library-data/1407352097991-13785338d2ae0606987b2259cab33fa0/Recovery%20Policy%20Sharing%20Survivor%20Data%20with%20Trusted%20Partners%20090913.pdf.

[75] Ibid, A-1.

[76] Ibid, B-1.

[77] Ibid, C-1.

[78] As defined by the University of California Santa Barbara Social Computing Group, see http://socialcomputing.ucsb.edu/index8067.html?page_id=14.

[79]NITRDP, Federal Big Data, 9.

[80] President’s Council of Advisors on Science and Technology, NIT for Resilient Physical Systems, 2007, President’s Council of Advisors on Science and Technology, Executive Office of the President.

[81] National Science Foundation and Japan Science and Technology Agency, “Big Data and Disaster Management: A Report from the JST/NSF Joint Workshop,” Arlington, VA, May 2013, i.

[82] National Science Foundation and Japan Science and Technology Agency, “Big Data and Disaster Management,” 2.

[83] Huiji Gao, Geoffrey Barbier and Rebecca Goolsby. “Harnessing the Crowdsourcing Power of Social Media for Disaster Relief.” IEEE Intelligent Systems, May/June 2011. 11.

[84] https://www.dhs.gov/our-mission

[85] https://www.dhs.gov/topic/resilience

[86] American Red Cross. Community Resilience Strategy. http://www.redcross.org/images/MEDIA_CustomProductCatalog/m33840096_1-CommunityResilienceStrategy_Jacquie_Yannacci.pdf

[87] Huiji Gao et al. Harnessing Crowdsourcing Power, 11.

[88] Ibid.

[89] Ibid.

[90] Ibid.

[91] Greenleaf, Graham. “Global Data Privacy Laws: 89 Countries, and Accelerating”. Social Science Electronic Publishing, Inc. Retrieved 16 February 2014.

[93] Brendan I. Koerner, “Inside the Cyberattack That Shocked the US Government,” Wired, October 23, 2016, accessed January 02, 2017, https://www.wired.com/2016/10/inside-cyberattack-shocked-us-government/.

[94] Ellen Nakashima and Karoun Demirjian, “Top U.S. intelligence official: Russia meddled in 2016 election through hacking and spreading of propaganda,” The Washington Post, January 5, 2017, accessed January 05, 2017, https://www.washingtonpost.com/world/national-security/top-us-cyber-officials-russia-poses-a-major-threat-to-the-countrys-infrastructure-and-networks/2017/01/05/36a60b42-d34c-11e6-9cb0-54ab630851e8_story.html.

[95] US CERT, “Ransomware,” Ransomware, July 11, 2016, accessed January 05, 2017, https://www.us-cert.gov/security-publications/Ransomware.
