A ‘shrink’ AI and the future of privacy in the world of IoT

Vertical AI development promises great benefits, and poses a threat if personal data is not managed and governed properly

The BBC published a story about researchers refining an algorithm that detects signs of marital infidelity, and of broader marital problems that couples try to mask or have not yet discerned in their stressful lives.

In a world of automatons, such an algorithm could one day be licensed or bought by a corporation and woven into the daily routine: couples receiving marital counselling or sex-education classes from a humanised machine.

In a world of software updates, where our gadgets receive new skills that put their arrays of sensors to better use, imagination becomes the only limit on how human-machine interaction can evolve.

Download a (paid) software skin to drive like Senna. Adjusting wheel grip, suspension, cornering and braking, even the shape and settings of the car seats, is no longer a deranged fantasy.

Receive counselling from a famous psychiatrist aided by AI, or from an AI speaking with the voice of a famous psychiatrist. The divide is narrowing.

If only the future were so promising. Gadgets and software platforms launch viral effects that leave people with few second thoughts about sharing their personal data, pictures, hidden gestures and the like, and they amass a trove of data used primarily to polish ad targeting and to develop ever-present tracking tools.

It’s one thing to believe that the latest advances in legal data governance will help wrestle these practices into submission. It is another to watch the alarming trend of state authorities demanding backdoors, failing to report the vulnerabilities they find, and misusing them for state surveillance; worse, these intrusions are carried out secretly, as part of a clandestine apparatus.

For example, a recent book on cyber-intrusions and cyber-weaponry (read more here) documents a number of cases that reveal a grave new threat to the individual:

To obtain a security clearance from the US government, prospective federal employees and contractors have to fill out an exhaustive 127-page form — Standard Form 86 — in which they list every personal detail about their lives. Every bank account, every medical condition, every illegal drug they used in college.

At some point during the summer of 2014, the SF-86 forms for 21.5 million people were copied from OPM’s network. By December, 4.2 million personnel files — covering 4 million current and former federal employees, with their Social Security numbers, their medical histories, and their marital status — had been stolen. And by March 2015, 5.6 million fingerprints had been copied and spirited away.

Our daily interactions with our devices carry even more intimate details, the kind one would prefer not to share with the open world, or with a malign actor.

Self-sovereign identity, private enclaves and API governance:

  • As we develop vertical AIs and place them on platforms that validate, improve and host these services, identifiable data should be encrypted using the latest standards, with the ability for the data subject to seek recourse;
  • The hub that manages the use cases, a bridge between the backend code and the user, can take on more of the work of processing and securing personally identifiable information, passing the online provider only a one-off handshake with a token (see the sketch after this list);
  • The overall increase in locally available processing power paves the way for a secure enclave that controls not just your passwords and credit cards, but your preferences and identifiable crumbs.
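As a minimal sketch in Python of what such a hub-mediated handshake could look like: the hub keeps personally identifiable information encrypted locally and hands the online provider only a short-lived, single-use token. Everything here is a hypothetical illustration rather than an existing API; the `PrivacyHub` class, its method names, and the use of the Fernet cipher from the `cryptography` package are assumptions made for the sake of the example.

```python
import secrets
import time

from cryptography.fernet import Fernet  # symmetric cipher from the 'cryptography' package


class PrivacyHub:
    """Hypothetical local hub: stores PII encrypted, exposes only one-off tokens."""

    def __init__(self) -> None:
        self._key = Fernet.generate_key()  # in practice, this key would live in the secure enclave
        self._cipher = Fernet(self._key)
        self._vault: dict[str, bytes] = {}               # attribute name -> encrypted value
        self._tokens: dict[str, tuple[str, float]] = {}  # token -> (attribute, expiry time)

    def store(self, attribute: str, value: str) -> None:
        """Encrypt a piece of PII before it ever touches disk or network."""
        self._vault[attribute] = self._cipher.encrypt(value.encode())

    def issue_token(self, attribute: str, ttl_seconds: int = 60) -> str:
        """Mint a single-use, short-lived token the user hands to an online provider."""
        token = secrets.token_urlsafe(32)
        self._tokens[token] = (attribute, time.time() + ttl_seconds)
        return token

    def redeem(self, token: str) -> str:
        """The provider redeems the token exactly once; then it is gone."""
        attribute, expiry = self._tokens.pop(token)  # pop, not get: single use
        if time.time() > expiry:
            raise PermissionError("token expired")
        return self._cipher.decrypt(self._vault[attribute]).decode()


# The provider never sees the vault or the key, only one redemption.
hub = PrivacyHub()
hub.store("marital_status", "married")
token = hub.issue_token("marital_status", ttl_seconds=30)
print(hub.redeem(token))  # works exactly once; a second redeem raises KeyError
```

The single-use pop is the point of the design: even if a token leaks in transit it cannot be replayed, and the provider never holds the raw attribute for longer than the request that needed it.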

The takeaway:

We live in a world of information flows, where information is a force for good and a lever in personal, intra-state and inter-state relations. As the economic modus operandi is primed on gathering more information to lower infrastructure costs, improve the algorithms that run on it and, overall, improve sales, we are happy to give the gatherers the benefit of the doubt.

Provided we feel it is used justly (and today's world shows it often isn't), we are happy to overshare and forget that everything we say is then extremely hard to purge from the system.

The system wants more from us, of us, about us. It uses tricks to get us to behave as it wants. The researchers behind a 2017 NBER paper studied “distortions in consumer behavior when faced with notice and choice which may limit the ability of consumers to safeguard their privacy using field experiment data from the MIT digital currency experiment”.

There are three findings. First, the effect small incentives have on disclosure may explain the privacy paradox: Whereas people say they care about privacy, they are willing to relinquish private data quite easily when incentivized to do so. Second, small navigation costs have a tangible effect on how privacy-protective consumers’ choices are, often in sharp contrast with individual stated preferences about privacy. Third, the introduction of irrelevant, but reassuring information about privacy protection makes consumers less likely to avoid surveillance, regardless of their stated preferences towards privacy.

The world of symbols and emotions creates a fallacy: we feed more of ourselves into the system to contextualise the automaton's rules, while often receiving fake signals in return. It creates anxiety, distorts life online and offline, and contributes to the very personal problems IoT now strives to solve.

Only with proper standards-based governance will we be able to avoid losing ourselves.