Conscientious AI: Machines in Aid of — not in place of — Humanity

Consensus AI
Sep 4, 2018


Much like any science, machine learning raises ethical concerns. And much like any promising technology, these risks should be objectively assessed, weighed against the benefits, and managed; if humanity had dropped every convenient advancement as soon as risks and ethical issues presented themselves, we never would have made it past fire.

The public’s concerns regarding artificial intelligence are understandable, although, quite honestly, overblown.

The balance: benefits outweigh risks

Blockchain technology and artificial intelligence are two of today’s most disruptive developments, and their combination is more disruptive still.

Simulations have long been used for scientific research, but now we can bring superior intelligence to bear on real-life, real-time human conditions. Machine learning has been making strides across industries, and paired with the decentralization offered by blockchain technology, the possibilities are limitless. We now have the entire world as our computer, one that can handle the vast, complex computations needed to help solve the world’s biggest ills.

While we do not advocate the complete surrender of decisions to machines, we recognize that history has shown there are several things machines do better. Just as calculators and other technological advancements freed human time and energy for farther-reaching, monumental feats, machine learning can take the guesswork and speculative debate out of the equation. It allows organizations to focus on making better decisions based on highly accurate simulations, and it is one more testament to the fact that automation is not an enemy but an ally of modern society.

In governance, the consequences of an oversight are far-reaching, because the gears that hold societies and governments together are interconnected. Much like a natural ecosystem, introducing a solution in one part of a territory can have negative consequences elsewhere in the society. Such sensitive decisions arise in public health, traffic congestion, and urban planning; in these cases, what helps one group may harm another.

For example, implementing a no-single-driver rule on a major highway would push vehicle owners to carpool, which could reduce the number of cars and therefore the traffic volume on that road; this is the ideal outcome. But would that be the real-world consequence, or would drivers simply shift the heavy traffic to alternate routes where they are allowed to drive alone?

Questions like these should no longer be left to trial and error, especially now that we can run simulations in a virtual lab rather than in the real world. AI’s role in governance is primarily to aid: AI is not a stand-alone solution, but an “assistant,” a smart advisor to human custodians in government.
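
To make the idea concrete, here is a toy sketch of the kind of simulation such a virtual lab might run for the highway question above. Every parameter in it (the number of drivers, the willingness-to-carpool rate, the two-route model) is a hypothetical assumption chosen for illustration, not a description of any actual Consensus AI model:

```python
import random

# Hypothetical scenario: drivers choose between a main highway and an
# alternate route. A no-single-driver rule on the highway forces solo
# drivers either to carpool or to divert. All parameters are assumptions.

NUM_DRIVERS = 10_000
CARPOOL_WILLINGNESS = 0.4  # assumed share of drivers willing to carpool
AVG_CARPOOL_SIZE = 2.5     # assumed average occupancy of a formed carpool

def simulate(hov_rule: bool, seed: int = 0) -> tuple[int, int]:
    """Return (cars on the highway, cars on the alternate route)."""
    rng = random.Random(seed)
    highway, alternate, poolers = 0, 0, 0
    for _ in range(NUM_DRIVERS):
        if not hov_rule:
            highway += 1   # no rule: everyone drives solo on the highway
        elif rng.random() < CARPOOL_WILLINGNESS:
            poolers += 1   # agrees to share a car and stay on the highway
        else:
            alternate += 1 # diverts to the alternate route instead
    highway += round(poolers / AVG_CARPOOL_SIZE)  # carpools merge into fewer cars
    return highway, alternate

for rule in (False, True):
    hw, alt = simulate(hov_rule=rule)
    print(f"no-single-driver rule {'on' if rule else 'off'}: "
          f"highway={hw} cars, alternate={alt} cars")
```

Even a crude model like this makes the trade-off visible: the rule thins out the highway, but a sizable share of the displaced drivers reappears on the alternate route. A real policy simulation would, of course, draw on far richer behavioral and network data.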

In an interview, computer scientist and University of London senior lecturer Dr. Andrea Calí pointed out: “Governance needs to start from a higher ground, which involves principles and ultimately, moral values.”

Machines going rogue?

Apart from the fear of being rendered jobless, one of society’s biggest AI anxieties stems from the idea of machines going rogue and driving humanity to extinction. But industry experts disagree with that pop-culture representation of AI’s possibilities; AI cannot become sentient to the point of murdering human beings for self-preservation. The TV series Westworld comes to mind, and even though it’s supremely captivating, it is what it is — fiction.

Dr. Calí expounded on the difference between real-world AI and the stuff seen in science fiction.

“While AI can provide relevant information that would be difficult (or impossible) to discover manually, AI is not real intelligence – the issue is an important one, and I agree with John Searle (see his Chinese Room argument), Robert Koons, David Bentley Hart, Edward Feser, etc., that no machinery can have any proper intelligence, let alone consciousness.”

We do have to acknowledge the public’s concerns, of course, and ensure that we address them appropriately. As things stand today, the biggest risk is “flawed AI,” not machines superseding humans.

Kay Firth-Butterfield, head of the AI program at the World Economic Forum, is working with governments and other groups on the challenges of AI — the flaws in design or data that could create biases with potentially fatal consequences.

Other ethical challenges for AI range from prejudices baked into algorithms and standards for privacy to the decisions an AI should make in critical moments, such as imminent collisions in self-driving cars.

“It’s really important that we know that there are all these different tensions, because without addressing them, we are really left with, I suspect, a failing trust in the technology,” Firth-Butterfield said in an interview with Business Insider. “What I certainly don’t want to see are all the benefits of AI somehow being lost because we haven’t put in the ethical underpinnings to help the public know that we’re doing something safe.”

Consensus AI aims to deliver this superior intelligence for humanity’s maximum benefit. And as human custodians of an AI system ourselves, we see it as our job to ensure that the industry moves forward in a safe, ethical, transparent manner. We intend to take part in building and upholding the standards to which the AI industry will be held.
