AI Privacy is a Safe Place — That Requires Validation

Paul Houghton
Published in n-of-1
Dec 10, 2019 · 2 min read

“Privacy is the right to a free mind” -Snowden

Privacy in AI-based digital services is all about creating a place where your mind has no worries. Modern, increasingly invasive services need to earn that trust. This translates to the strongest possible guarantees that the information you are receiving is unbiased and lacks a manipulative agenda.

A great service lets you play with and try out new ideas. This is done in your own way and time, before optionally sharing the result with others. It requires absolute privacy from outside inspection, and no surprises about how your information is used. In this safe place, the AI assistant serving you can tailor its advice based on private information. Trust flows to the corporate service because it has been earned.

“Everybody lies.” -Dr House

Trust is not a security policy. It is not a privacy policy. Customers trust no one without reason. As the Russian proverb goes: Доверя́й, но проверя́й — “trust but verify”.

An increasing number of services have enough private data to infer your personal thoughts, interests and emotions. Sunshine, in the form of third-party validation, is the best disinfectant to remove privacy surprises and ensure that the service is trustworthy. Security consultants and organisations can validate the policies behind a given service and provide their stamp of approval.

But remember that modern AI services are trained, not programmed. Certification of trust requires a deep mix of skills in both classical and trained business logic. So the ultimate guarantee is openness of both code and training data to public and/or third party inspection.

Without openness, individual liberty of thought in an AI world will always be at risk.
