🗣️ Facebook Will Hack Your Brain Waves

Cause why wouldn’t they? Algorology Newsletter #4

Facebook already listens to your conversations through its mobile app (yes, they hold a patent on that; google it), so why wouldn't they listen to your brain waves?

Apparently, they have some big plans for the future of social networking. A series of job postings discovered Thursday suggests the company wants to use non-invasive techniques to measure users' brain waves, with artificial intelligence decoding the data.

The listings are all for the company's Building 8 lab, a secretive organization led by the former director of the Defense Advanced Research Projects Agency (DARPA).

I do find this fascinating, since it's the recent advances in pattern recognition and unsupervised learning that are driving the whole machine learning boom. These advances make it genuinely plausible to create a novel non-invasive method, one that doesn't rely on outdated fMRI scans and hours spent training a brain-computer interface to recognize basic patterns, which is the norm right now.

The argument is: do we want such tech in the hands of a company that engineers us into clicking ads?

As humans, we are already easily manipulated by Facebook's current algorithms, and I can only imagine how much worse it will get once they can read your current mood and decide you need an extra dose of kitten pictures in your feed… definitely thought-provoking.

$27M fund to protect humanity from harmful AI 💸

LinkedIn founder Reid Hoffman and eBay founder Pierre Omidyar are backing a new $27M academic research fund to protect society from destructive artificial intelligence and advance the technology in the public interest.

The initiative is called the Ethics and Governance of Artificial Intelligence Fund and will support a variety of AI ethics and governance projects in the US and internationally.

“There’s an urgency to ensure that AI benefits society and minimizes harm,” Hoffman said. “AI decision-making can influence many aspects of our world — education, transportation, health care, criminal justice, and the economy — yet data and code behind those decisions can be largely invisible.”

😶 How Much Money Will Your Robot Make?

And American tech billionaires are not the only ones concerned about the actions and responsibilities of artificial intelligence.

The European parliament has urged the drafting of a set of regulations to govern the use and creation of robots and artificial intelligence, including a form of “electronic personhood” to ensure rights and responsibilities for the most capable AI.

The EU is famous for its obsession with regulation, and I don't think that's a bad thing, especially when it concerns something we need more public discussion about.

The proposed legal status for robots would be analogous to corporate personhood, which allows firms to take part in legal cases both as the plaintiff and respondent. "It is similar to what we now have for companies, but it is not for tomorrow," said Mady Delvaux, the MEP who drafted the report. "What we need now is to create a legal framework for the robots that are currently on the market or will become available over the next 10 to 15 years."

I'm especially amazed by the number of areas the EU is aiming to address. As unnecessary and complicated as it may sound, it actually raises a lot of questions.

Here are some of the main points:

  • The creation of a European agency for robotics and AI;
  • A legal definition of “smart autonomous robots”, with a system of registration of the most advanced of them;
  • An advisory code of conduct for robotics engineers aimed at guiding the ethical design, production and use of robots;
  • A new reporting structure for companies, requiring them to report the contribution of robotics and AI to their economic results for the purposes of taxation and social security contributions;
  • A new mandatory insurance scheme for companies to cover damage caused by their robots.

More analysis here.

🌎 Will Machines Outsmart all of Us?

What if they do? That's actually a very thought-provoking question coming out of the World Economic Forum Annual Meeting 2017 in Davos.

Do we actually have a framework in place for that? I don't think so. The real question lies in our ability to adapt and in the economic benefits these technologies bring.

The article makes a good point: right now, the discussion of AI is reduced to a binary choice between technological progress and negative outcomes, such as the elimination of jobs, when the trade-offs are actually far more complex.

Throughout human history, whenever progress has been made, there has been disruption. Overall, technological advance — and the change it ignites — has driven the world forward.
As AI comes to play a bigger role in our future, we are facing profound questions about disruption and economic dislocation.

Originally published through the Algorology email newsletter. If you would like to receive it before anyone else, sign up below.