Insurance, Big Data and The Law — An Interview with Philip Bitter

I recently had the pleasure of sitting down with Philip Bitter to get a legal expert's perspective on big data and insurance. Philip is a research associate at the Institute for Information, Telecommunication, and Media Law at the University of Münster, and he has contributed extensively to the interdisciplinary ABIDA (Assessing Big Data) project, which is funded by the German Federal Ministry of Education and Research.

We met after his presentation at the German Insurance Forum in Leipzig to talk about data governance, data allocation and quality, the GDPR, the latest Facebook hack and more.

The interview has been lightly edited for clarity.


“Navigating the road between technical possibilities and legal risks therefore still primarily means safety first.”

Albert: Companies are now required to put data governance at the core of their data analysis strategy. What is your view of how insurers are currently navigating the road between technical possibilities and legal risks?

Philip: Most insurers, as the expert discussions within our ABIDA project have shown, already seem to be well aware of the importance of a reliable data governance strategy. They know very well that they must not jeopardise the trust of customers, as insurance services are largely based on the processing of personal data. Navigating the road between technical possibilities and legal risks therefore still primarily means "safety first". There is good reason why German insurers have once again imposed on themselves a Code of Conduct for dealing with personal data under the GDPR. In this respect, the situation may be different for companies in other industries. For many companies, however, it is in their own best interest to establish a data governance strategy. From a business point of view, this may become even more necessary as business models are increasingly data-driven and data as information is a key economic factor.

“So, what remains is to lay down the rights and obligations in contractual detail until the courts or the legislator take appropriate action.”

Albert: Your talk at the German Insurance Forum on Big Data & Analytics centred around legal aspects of big data. What are the key legal challenges for insurers as they specifically relate to data allocation and quality?

Philip: Since information is the key economic factor for businesses in a data-driven economy, and especially for data-intensive services such as insurance, access to data is one of the key issues related to insurance and big data. This concerns access rights to data, which currently barely exist for insurers due to a lack of specific regulation, as in the case of the connected car, for example. At present, contractual agreements can be concluded with the de facto "owners" of the data, but considerable legal uncertainties remain because such agreements only apply between the parties. A good example of this is car insurance and data from a connected car, as mentioned above.

There is intensive discussion as to whether vehicle data should continue to be managed and distributed via the OEM or whether a neutral data trustee should be established. Approaches for such a trustee are discussed in German law primarily in the context of Sections 63a and 63b of the StVG, the Road Traffic Act. The main arguments put forward against this are data protection and antitrust concerns, which must, however, be critically examined.

With regard to data quality, the key challenge is "Garbage In, Garbage Out". Data quality is a decisive factor in the use of big data analyses, as otherwise there is a risk that "wrong" conclusions will be drawn on the basis of supposedly "correctly" working algorithms. In case of doubt, this has consequences not only for the insured person, in terms of the individual tariff, for example, but also for the insurer in economic terms. It is all the more striking, then, that hardly any reliable data quality standards exist. Exceptions apply within the framework of Solvency II, which sets requirements, for example, for the appropriateness, completeness and accuracy of data in the context of technical provisions under insurance supervisory law.

Since, from a civil law perspective, the applicable law can otherwise respond to ubiquitous data traffic only to a limited extent, contractual agreements in individual cases are all the more important. So, what remains is to lay down the rights and obligations in contractual detail until the courts or the legislator take appropriate action.

“Big data and data minimization do not really seem to fit together…”

Albert: Some of the new compliance requirements seem to be very difficult to implement in practice. Where do you see the biggest gap between legal requirements and technical feasibility?

Philip: As far as data protection and big data are concerned, the biggest gaps can probably be seen in the principles of data minimization, purpose limitation, and data protection by design and by default. Many people in the industry lament a lack of guidance. Big data and data minimization do not really seem to fit together (depending on one's interpretation), so the question must also be allowed whether the principle of data minimization is a relic of data protection law. At the same time, however, the principle remains a decisive element of European data protection law. Data protection by design and by default are definitely promising approaches, but they need further elaboration in order to set reliable standards.

Albert: The GDPR introduces the concept of data minimization. Could you explain what this concept aims at exactly and how it is currently defined?

Philip: According to the GDPR, personal data shall be adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed. It should also be borne in mind that the Data Protection Directive 95/46/EC already stated that data must not be excessive in relation to the purposes for which they are collected and/or further processed. Thus, it is not really an introduction, although the wording of the GDPR and its interaction with data protection by design and by default are apparently accompanied by a tightening of requirements.

“…the GDPR should not create too much confusion.”

Albert: Many insurers I have spoken to about the GDPR echo the same response, namely uncertainty and confusion. Do you think this is justified? And, doesn’t the GDPR also bring about positive impulses?

Philip: Of course the GDPR brings several novelties, but many of the topics it addresses were already regulated in national data protection laws following the implementation of the Data Protection Directive. That is why the GDPR should not create too much confusion. However, now that the GDPR is directly applicable in the Member States, it creates the legal data protection framework for a digital single market and can thus also have a positive impact on further markets. This may also be evidenced by recent reports that the US government is considering a response to the GDPR.

“Speaking in general terms, the GDPR, despite all the criticism, has increased the awareness of data protection and security aspects.”

Albert: The latest Facebook hack could become the EU’s first big online privacy legal battle under the GDPR. Should we be watching closely to get an idea of how legal precedent is established going forward? What’s your view?

Philip: Though the details remain to be seen, yes, the recent incidents and how they are handled could indeed have a signal effect for several businesses. Speaking in general terms, the GDPR, despite all the criticism, has increased awareness of data protection and security aspects. This applies both to data subjects in general and to data-processing companies in particular. Further developments therefore need to be closely monitored in order to get a first impression of how regulators react to data breaches, for example.


I hope you have enjoyed this article. For questions or comments, get in touch via email at info@connectingthedots.cx, Twitter @cngthedots, or LinkedIn.