Toward a New Approach to Data Protection in the Big Data Era

This essay first appeared in the Internet Monitor project’s second annual report, Internet Monitor 2014: Reflections on the Digital World. The report, published by the Berkman Center for Internet & Society, is a collection of roughly three dozen short contributions that highlight and discuss some of the most compelling events and trends in the digitally networked environment over the past year.

The complexity of data processing, the power of modern analytics, and the transformative use of personal information drastically limit consumers' awareness of how their data is collected and used, diminish their capacity to evaluate the consequences of their choices, and preclude their ability to give free and informed consent. Moreover, market concentration and the related social and technological lock-ins often foreclose any effective negotiation over personal information.

These elements lead us to reconsider the role of users' self-determination in data processing and the "notice and consent" model.

My suggestion is not to change the entire traditional model of data protection, but to reshape it with regard to the big data context, where asymmetries in data negotiation drastically reduce users’ self-determination.

From this perspective, in the following paragraphs I propose a new model for big data uses, based on two fundamental pillars: a rigorous multiple-impact assessment of data processing, widely adopted and publicly available, and an "opt-out" scheme.

In the presence of complex data collection and processing systems influenced by lock-in effects, such an impact assessment should be conducted neither by consumers nor by companies. It should instead be conducted by third parties, under the supervision of national data protection authorities (hereafter DPAs), which define the professional requirements these third parties must meet.

DPAs, rather than users, have the technological knowledge to evaluate the risks associated with data processing and are in the best position to balance the interests of different stakeholders.

In the suggested model, companies that intend to use big data analytics should undergo an assessment prior to collecting and processing data. The assessment would not only focus on data security, but also consider the social impact and ethical use of data in a given project.

The entire system would work only if the political and financial autonomy of DPAs from both governments and corporations is guaranteed. Moreover, DPAs would need new competencies and resources in order to bear the burden of supervising and approving these multiple-impact assessments.

In light of the above, a model based on mandatory fees, paid by companies when they submit their authorization requests to DPAs, would be preferable. This solution provides authorities with proportionate resources while insulating them from the influence of the companies under their supervision.

It should also be noted that, in cases of large-scale and multinational data collection, forms of mutual assistance and cooperation may facilitate the role played by DPAs in addressing problems related to the scale of both data collection and data gatherers.

However, wider cooperation at the global level is difficult to achieve, despite the existence of international fora in which data protection issues are discussed (e.g., APEC, the Council of Europe). This is due to the absence of an effective common legal framework and to cultural and legal differences that often affect social and ethical assessments.

Finally, with regard to the decision-making process, a common general model of multiple-risk assessment, articulated in different stages, can be adopted. It is not possible, however, to apply a single set of criteria to all cases of data processing.

Nevertheless, general standards and criteria can be adopted with regard to different data processing endeavours relating to the same areas (e.g., healthcare, geolocation, direct marketing). This is consistent with a context-based approach (e.g. concepts of necessity and proportionality) and with models based on co-regulation, which have been adopted in the EU and other countries.

Once this multiple-impact assessment is approved by DPAs, the related data processing is considered secure in terms of both the protection of personal information and social impact. As a consequence, companies can enlist users in the data processing without any prior consent, provided they give notice of the assessment's results and offer an opt-out option.

This assessment represents an economic burden for companies, but it allows those that pass to use data for complex and multiple purposes without requiring users to opt in.

From the users' side, the assessment supervised by DPAs provides an effective evaluation of risks, while the opt-out option allows users to choose not to take part in the data collection.

The suggested model represents a significant change in the traditional approach to data protection.

For this reason, it is necessary to provide a subset of rules for big data analytics, focused on a multiple risk assessment, a deeper level of control by DPAs, and the opt-out model.

From a behavioral and cultural perspective, this new approach would have a greater impact on companies and DPAs than on consumers.

Consumers will benefit from a more secure environment, while corporations and DPAs will have to invest more resources in risk assessment and acquire specific competencies.

Such an environment will become all the more important as society moves toward a future shaped by big data, expert systems, and artificial intelligence.

This perspective makes it easier to envision a future scenario in which privacy-oriented and trustworthy services increase a user’s propensity to share data and stimulate the digital economy and fair competition.


Alessandro Mantelero
Internet Monitor 2014: Data and Privacy

Alessandro Mantelero is Associate Professor of Private Law at the Polytechnic University of Turin and Council of Europe Rapporteur on AI and data protection.