Axel Voss, a German Member of the European Parliament focused on data protection legislation and other digital files, recently shared a position paper arguing that the GDPR urgently needs revision.
On February 16, 2021, he decided to try something new and started a field study to hear what people and organizations think about the GDPR. He asked them what problems they faced with the regulation.
Corporations, citizens, researchers, scientists, nurses, data protection officers, lawyers, nonprofit associations, sports clubs and many others submitted more than 180 responses, a result that paints a very clear picture of the GDPR in practice. The respondents agreed on the following points:
“Too complex, increasing compliance costs, shortening other fundamental rights and severely hampering Europe’s digital transformation.”
Axel Voss published the findings of this study in a position paper on May 25, 2021, presenting the practical difficulties from an outside perspective and expressing them systematically, understandably, and in a simplified way.
You can find the comprehensive position paper with detailed recommendations at the end of the article.
Below, I will comment on the findings of this study and add some thoughts of my own.
As a result of this study, the problems encountered in practice with the GDPR are grouped under the following main headings:
1-‘One-size-fits-all’ approach does not fit all!
The law does not differentiate between different types of companies, from global digital gatekeepers to local SMEs, startups, and independent bakery shops. Nor does it distinguish between different sectors (e.g. healthcare and finance) or different technologies (e.g. AI or blockchain), and it does not clearly define either. Instead of focusing on basic, well-designed rules with clear definitions, it applies the same regime to everyone.
“Furthermore, it does not distinguish between the processing of personal data by private individuals and by state authorities.”
2-Risk-Based Approach is Missing.
At the end of the day, SMEs and start-ups in particular find themselves choosing between high compliance costs and doing nothing; high costs and legal uncertainty push them toward the latter. At the same time, they feel insecure because it is unclear what evaluation criteria they will face if sanctions are imposed.
“The GDPR does not differentiate enough between low-risk and high-risk applications, determining — with a few exceptions such as prior consultation of the DPA for high-risk applications — largely the same obligations for each type of data processing.”
Risk assessment is not something done only after a data leak has occurred. Companies also need a forward-looking perspective on what to do in the event of a leak, so it is essential to plan ahead.
The sanctions are so high that they suggest the goal is intimidation rather than the importance of the value being protected. Yes, heavy deterrent penalties may be justified for big companies such as Facebook or Google, whose data practices could be used to manipulate people. Still, companies are legal persons distinct from real individuals, and the principle of treating everyone equally when imposing sanctions need not apply in the same way here.
The range of sanctions could be kept wide to address this issue. But if the distrust continues and deters new SMEs and start-ups, especially those working on topics such as big data and artificial intelligence, then measures should be taken to prevent these companies from withdrawing their business from Europe.
Perhaps a distinction could be made on a sectoral basis, or the supervisory authority could perform the risk assessment, with different sanctions applied accordingly. Even then, it does not seem logical to me to place the entire burden of risk assessment and precaution on the data controller.
Instead, I think it might make sense for the supervisory authority to take on some of this responsibility, establish control and risk assessment teams, and, if necessary, offer this service to SMEs and start-ups for a small fee.
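To see why the same sanction regime weighs so differently on different companies, the fine ceilings of Art 83 GDPR can be sketched in a few lines. The tiers and figures below come from the regulation (up to EUR 10M or 2% of worldwide annual turnover for lower-tier infringements, EUR 20M or 4% for upper-tier ones); the function itself is only a toy model, not legal advice.

```python
def fine_ceiling_eur(annual_turnover_eur: float, upper_tier: bool) -> float:
    """Maximum possible fine under Art 83(4)/(5) GDPR (illustrative only)."""
    fixed_cap = 20_000_000 if upper_tier else 10_000_000
    turnover_share = 0.04 if upper_tier else 0.02
    # The ceiling is the fixed cap or the turnover share, whichever is higher.
    return max(fixed_cap, annual_turnover_eur * turnover_share)

# An independent bakery with EUR 500k turnover and a platform with
# EUR 100bn turnover face the same EUR 20M floor on the ceiling:
print(fine_ceiling_eur(500_000, upper_tier=True))          # 20000000
print(fine_ceiling_eur(100_000_000_000, upper_tier=True))  # 4000000000.0
```

For the small shop, the EUR 20M ceiling is wildly disproportionate to its turnover, which is exactly the chilling effect described above.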
3-Complexity makes compliance difficult.
According to the general opinion of the study's participants,
“The provisions are too numerous, complex and difficult, allowing only a few distinguished experts to really keep track and understand all of the legal consequences.”
Perhaps the GDPR should define obligations by business category, and the Commission should provide detailed, public roadmaps so that start-ups do not walk a dark path and need constant GDPR advice. In the current situation everything is uncertain, and we are simply trying to adapt. New problems constantly emerge, and a start-up needs legal services to carry out its activities and consultancy services to manage its compliance processes. These are issues that require expertise and must be handled very carefully, which means start-ups spend a great deal of money on compliance and legal services rather than on product development.
4-Outdated concepts are not compatible with 21st-century technologies.
“To start with, the law is based on the processing of individual data (thus ignoring Big Data) as well as on the processing by a single controller (thus ignoring cloud computing, the Internet of Things, platforms or other complex actors). The GDPR also assumes that data is processed at a specific location on a fixed hard drive (thus not taking into account that data is no longer stored at a physical resource but instead is globally moving from server to server in global networks, interconnected clouds and blockchains).”
The concepts of data controller and data processor become very confusing when it comes to blockchain and cloud computing. This semester, for example, I am taking a cloud computing course in my master's degree and writing an article on the concept of accountability in cloud computing, under the heading of security and privacy. Making an assessment is difficult because it requires knowing the cloud architecture, the responsible actors, and the relevant laws. At the same time, I think there is a huge gap in the literature on this subject.
5-Data institutions cannot carry out research activities that require data processing.
“The law does not provide the opportunity for trustworthy third-party agents such as data trusts or a new European agency for data to benefit from more flexibility for an agreed purpose. Those institutions could help opening up data silos to SMEs and researchers, facilitating the sharing of confidential and personal data and increasing access to data. The donation of data is also too complicated, if not impossible, under the provisions of the GDPR.”
A professor wrote to me the other day and said that he wanted to conduct a research activity by collecting publicly available data from social media. He asked whether this activity complied with the Turkish Privacy Act (KVKK). I answered from the GDPR perspective because, looking at the law and the decisions, the KVKK contains the GDPR's ground rules and principles, and the Turkish supervisory authority shares the European data protection approach. The KVKK is actually more flexible about processing data that the data subject has made public.
However, in any case, the purpose for which the information was made public also matters. For example, biometric data, even if made public, cannot be processed for a different purpose.
So if these researchers are trying to develop machine learning algorithms or AI systems, where should they find research data?
6-Disproportionality with other fundamental rights.
“GDPR does not take into account that the processing of personal data by the controller is, in itself, also protected by fundamental rights (e.g. the freedom of science or the freedom to conduct a business).”
The GDPR does not treat data protection as an absolute fundamental right, yet it gives no clear answer on what happens when data protection conflicts with other fundamental rights such as the right to life, liberty and security, the freedom to conduct a business, or the freedom of the press. In addition, data transfer is allowed only where there is a legal basis in the law. This provision shapes the perspective on data processing and data transfer in Europe.
So, is it possible to ensure the free movement of data, which Article 1(3) declares to be one of the main aims of the GDPR in a data-driven economy?
7-Scope of protection is too wide.
“GDPR postulates in Art 1(2) that it protects “fundamental rights and freedoms” of natural persons. However, if the law wants to protect all rights and freedoms, it leads to excessive demand on controllers, as they would theoretically have to take all fundamental rights and freedoms into account, in all 68 obligations and in all 82 balancing tests of the GDPR. This can never be fulfilled in practice.”
8- Need for justification for every type of personal data
“The GDPR, however, does not contain a coherent concept of how and when the data protection right is lawfully limited. The rights and interests that conflict with the data protection right are listed in a rather fragmented and erratic manner. The difficulties during the COVID19 pandemic have put a spotlight on this issue.”
1-Artificial Intelligence
“The GDPR provisions on purpose limitation and data minimisation as well as the restrictions on secondary use can be seen as the major obstacles for AI.”
Because of the very narrow interpretation of purpose limitation, researchers and companies often need to obtain each data subject's consent before doing anything new with their data. However, because “explicit consent” is fragile by nature, this makes consent difficult to maintain and prevents researchers and companies from experimenting with their own algorithms, even for purposes that would not affect consumer well-being or privacy.
2-Internet of Things
“The GDPR principles of storage and purpose limitation and especially data minimisation are also difficult to implement. On the contrary, the Internet of Things is based on ‘data maximalism’, meaning the collection of vast amounts of personal data, the creation of unique user profiles and the scanning of devices”
As such, this situation is contradictory in itself. In interconnected systems it may be impossible to obtain valid consent at all times, since the people in these systems cannot always be active users able to accept consent forms; otherwise the product becomes impossible to use.
3-Cloud Computing
“The GDPR links the processing of data either to a single person in charge (Art 4 Nr 7 GDPR) or determines special provisions for situations with multiple persons (Art 26 or 28 GDPR). Both approaches are not sufficient for cloud computing. This problem is aggravated by the fact that multiple parties are involved without clearly assigned qualifications and that data is constantly moving within interconnected clouds while temporarily stored on different physical locations in different countries.”
4-Blockchain
“One key property of Blockchain technology is that old data can be secured against modification, making it an append-only structure where new data can be added but never removed. Thus, blockchain runs counter to the GDPR’s ‘right to be forgotten’. Once personal data is recorded in a decentralised block, it is no longer possible to delete that information. This historical data can then be analysed to reveal identities.”
Blockchain technology is also notable in that its very structure can override all of these regulations at once.
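The conflict with the “right to be forgotten” can be made concrete with a minimal hash-chain sketch (an assumed, highly simplified model of a blockchain, not any real implementation): each block stores the hash of its predecessor, so erasing or editing personal data in an old block invalidates every block that follows it.

```python
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    """Hash of a block: depends on its data AND on the previous block's hash."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

# Build a small chain of three blocks holding (hypothetical) personal data.
chain = []
prev = "0" * 64  # genesis
for data in ["alice: consent given", "bob: home address", "carol: opt-out"]:
    h = block_hash(prev, data)
    chain.append({"data": data, "prev": prev, "hash": h})
    prev = h

def verify(chain) -> bool:
    """Recompute every hash; any edited or erased block breaks the chain."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(prev, block["data"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

print(verify(chain))            # True
chain[1]["data"] = "[erased]"   # try to "forget" bob's personal data
print(verify(chain))            # False: the chain itself rejects the deletion
```

Deletion is not merely discouraged; in a decentralised network the other nodes would simply refuse the tampered chain, which is exactly why an erasure request cannot be honoured on-chain.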
The more I research, the more I admire blockchain technology and what can be done with it.
I don't know if you've read George Orwell's dystopia 1984, but making people forget their history is a very effective way of manipulating societies, and with censorship you prevent them from learning about the present.
In my opinion, these, together with the media, are the biggest tools for manipulating communities. But as long as the accumulation of knowledge cannot be stopped and its integrity is not broken, that data can be processed and meaningful conclusions drawn, with no data subsequently corrupted or manipulated.
On the other hand, in this scenario we are talking about decisions made by the majority: whatever 51 percent of the nodes say is considered correct.
⚡️This points to a frightening future: a “super-intelligence” that can control these nodes could use the blockchain to accredit its own version of the facts in the public eye, backed by a seemingly reliable source.
Consider NFT technology. Every NFT is claimed to be the one and only original, but is it? Or does the authenticity of the product simply derive from whoever signed it first? And what if the first signed product is itself a fake?
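A small, hypothetical sketch makes the point: an NFT-style “mint” records who signed a content hash first (here simulated with an HMAC in place of a real blockchain signature). Verification can prove that the signature matches the minter's key, but it says nothing about whether the minted content was the genuine original.

```python
import hashlib
import hmac

def mint(content: bytes, signer_key: bytes) -> dict:
    """Record a token: hash of the content plus the minter's signature over it."""
    digest = hashlib.sha256(content).hexdigest()
    sig = hmac.new(signer_key, digest.encode(), hashlib.sha256).hexdigest()
    return {"content_hash": digest, "signature": sig}

def verify_mint(token: dict, content: bytes, signer_key: bytes) -> bool:
    """Check only that this content was signed with this key."""
    expected = mint(content, signer_key)
    return hmac.compare_digest(token["signature"], expected["signature"])

forgery = b"a pixel-perfect copy, minted before the original"
token = mint(forgery, signer_key=b"forgers-key")

# The token verifies perfectly: cryptography confirms the first signer,
# not the authenticity of what was signed.
print(verify_mint(token, forgery, b"forgers-key"))  # True
```

In other words, the chain guarantees provenance from the first signature onward; whether that first signature was attached to the real work is a question the technology cannot answer.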