Frameworks needed for secure data collaboration

Nick Halstead
InfoSum
Jun 23, 2017 · 2 min read

GDPR should be seen as a “call for changing how we view data” rather than a regulation, was one of the key assertions of the Data Privacy, Data Ethics and GDPR panel at CogX17 London.

Few companies actually know what happens to customer data when they share or sell it to third parties. Once the data leaves the organisation, they have no control over where it goes or where it might be passed on next.

However, under GDPR, if a customer withdraws consent, it is the company's responsibility to ensure that all third parties also stop processing that data. Failure to do so exposes the original company to the same non-compliance fine (up to €20 million or 4% of annual global turnover, whichever is higher).

A topical example is the controversial partnership between Google's DeepMind and the Royal Free London NHS Foundation Trust, which was criticised for sharing health records without explicit patient consent. Although sharing medical data vastly increases the possibilities for advancing medical research, this data is highly sensitive and it is vital that best practices are followed.

To tackle this, as I said during the panel, we need new frameworks for secure data collaboration.

I'm a big believer in aggregated data: it enables companies to analyse and segment customers without focusing on individuals, keeping privacy intact.
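As a rough illustration of the idea (a minimal sketch with hypothetical data and a hypothetical suppression threshold, not any particular product's implementation), the Python snippet below segments customers on aggregated attributes only and drops any segment too small to be safely reported, so no individual-level record is ever exposed in the output:

```python
# Minimal sketch: segment customers on aggregated attributes and suppress
# segments below a privacy threshold, so results never describe individuals.
from collections import defaultdict

MIN_SEGMENT_SIZE = 5  # assumed privacy threshold; choose per policy

# Hypothetical customer records (in practice this would come from your own data store).
customers = [
    {"age_band": "25-34", "region": "London", "spend": 120.0},
    {"age_band": "25-34", "region": "London", "spend": 95.0},
    {"age_band": "35-44", "region": "Leeds", "spend": 60.0},
]

# Group spend values by coarse segment rather than by individual.
segments = defaultdict(list)
for row in customers:
    segments[(row["age_band"], row["region"])].append(row["spend"])

# Report only segments large enough that no single customer can be singled out.
report = {
    key: {"count": len(spends), "avg_spend": sum(spends) / len(spends)}
    for key, spends in segments.items()
    if len(spends) >= MIN_SEGMENT_SIZE
}
print(report)
```

The design choice is the point: analysis happens over coarse segments with a minimum group size, so the insight survives while the individual disappears from the output.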

Decentralised data models are set to disrupt how data is handled and will be a game-changer for data privacy.

In addition, the panel's chair, Ian West, asked whether GDPR would stifle innovation; the consensus was that it would not. I believe that whilst GDPR will bring short-term pain, it will also bring long-term competitive advantage.

The legislative changes will build trust between customers and businesses, resulting in closer relationships and increased loyalty. In addition, GDPR has spawned a whole host of innovative startups in Europe looking to help companies tackle this very problem, putting them ahead of their counterparts in the US and Asia.

Lastly, the world of AI needs more data to train models and grow, but we need to ensure robust frameworks are in place as more and more data is collected and shared. I firmly believe that the best way to obtain and analyse higher-quality datasets is through decentralised and aggregated data models.

Enjoy reading this? Join my weekly data digest. 💌
