News — At The Edge — 7/22

Doc Huston
A Passion to Evolve
Jul 22, 2017 · 7 min read

--

If you do nothing else, watch the Trump video. After watching it, you will know why Trump is afraid of the exploding Russia scandal. My most recent article, Trump — Conspiracy & Obstruction to Precipice of Treason, addresses the issues raised in the video.

Cumulatively, the artificial intelligence video and articles point to why you should be really concerned about what is happening. Beyond the obvious issues of coming massive job loss and economic dislocation, the really big story is that China just announced it intends to dominate AI development.

This may be the biggest event in history, because it means the global arms race to develop artificial general intelligence (AGI) — machines smarter than any human — has just gone into overdrive. Odds are this will not end well!

Three of my articles address these AI issues: Our Twilight Zone & What Comes Next, Artificial Intelligence (AI) Community Is Playing A Risky Game With Us, and Why You Should Fear Artificial Intelligence-AI.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Thought-Provoking Issue –

Trump’s Russian Laundromat — Charlie Rose (15 min. video). This is a MUST-see video.

07/14/2017: Craig Unger discusses his new piece in The New Republic, “Trump’s Russian Laundromat.” He lays out a solid case that Trump’s success, both as a businessman and in winning the presidency, is due to the Russian mafia laundering money through his real estate.

Artificial Intelligence Issues –

Will we be wiped out by machine overlords? Maybe we need a game plan now — PBS NewsHour (6.5 min. video)

Tech luminaries and scientists have worried for years about the existential consequences of artificial intelligence. Oxford University’s Future of Humanity Institute thinks money ought to be invested in how to manage machine superintelligence that could one day surpass us — or even wipe us out.

Beijing Wants A.I. to Be Made in China by 2030 -

“[China’s] development plan…[is] to become the world leader in A.I. by 2030, aiming to surpass its rivals technologically…[and] ensure its companies, government and military leap to the front of the pack…[with] a multibillion-dollar national investment initiative to support ‘moonshot’ projects, start-ups and academic research in A.I….

[The] Trump administration has suggested slashing resources for…agencies that have traditionally backed research in A.I….[and] areas like high-performance computing…[affecting] tools that make A.I. work….

[When] Google brought AlphaGo to China…[and] it defeated the world’s top-ranked player, Ke Jie of China…[it was] a sort of Sputnik moment for China….China’s ambitions with A.I. range from the anodyne to the dystopian…[and] calls for the technology to work in concert with the country’s homeland security and surveillance efforts.

China wants to integrate A.I. into guided missiles, use it to track people on closed-circuit cameras, censor the internet and even predict crimes…[and] set off alarms within…[U.S.] defense establishment….

[China] expects its companies and research facilities to be at the same level as leading countries like the United States by 2020. Five years later, it calls for breakthroughs in select disciplines within A.I…[and] by 2030, China will ‘become the world’s premier artificial intelligence innovation center.’” https://www.nytimes.com/2017/07/20/business/china-artificial-intelligence.html

What an Artificial Intelligence Researcher Fears about AI -

“[The] HAL 9000 computer…[is an] example of a system that fails because of unintended consequences…[like] many complex systems…Titanic, NASA’s space shuttle, the Chernobyl nuclear power plant…[where] designers may have known well how each element worked individually, but didn’t know enough about how they all worked together…and could fail in unpredictable ways.

In each disaster…a set of relatively small failures combined together to create a catastrophe…[and] how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm and add it to an existing system…without understanding intelligence or cognition first….

[A]s AI designs get even more complex and computer processors even faster…[this] will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that ‘to err is human,’ so it is likely impossible for us to create a truly safe system…[and nothing] prevents misuse…a moral question, not a scientific one….

As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part…because we don’t yet know what it’s capable of. But we do need to decide what the desired outcome of advanced AI is….

[Perhaps] all human jobs will be done by machines…[including] machines tirelessly researching how to make even smarter machines….In our current society, automation pushes people out of jobs, making the people who own the machines richer and everyone else poorer….[Again] not a scientific issue…[but] a political and socioeconomic problem…we as a society must solve….

There is one last fear…[that] AI keeps improving until it surpasses human intelligence, will a superintelligence system (or more than one of them) find it no longer needs humans? How will we justify our existence…. I find it hard to make a compelling argument for all of us.

When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist….

We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist….But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don’t find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only very few.” https://www.scientificamerican.com/article/what-an-artificial-intelligence-researcher-fears-about-ai/

Elon Musk just told a group of America’s governors that we need to regulate AI before it’s too late -

“Musk [fears]…[AI] machines taking over their human creators…[and] the ‘biggest risk we face as a civilization’ when he spoke at the National Governors Association [meeting]…[wanting] government to proactively regulate artificial intelligence before things advance too far….

AI is a rare case where I think we need to be proactive in regulation…[because] by the time we are reactive in AI regulation, it’s too late….[Normally] regulations are set up…while [a] bunch of bad things happen….It takes forever…[and] in the past…[was] not something which represented a fundamental risk to the existence of civilization.

AI is a fundamental risk to the existence of human civilization….[A first step] would be to try to learn as much as possible, to understand the nature of the issues….[The] biggest risk to autonomous cars is a ‘fleet-wide hack’ of the software controlling them.” https://www.recode.net/2017/7/15/15976744/elon-musk-artificial-intelligence-regulations-ai

Political Issues –

Please Prove You’re Not a Robot -

“Robots are getting better, every day, at impersonating humans…[with] opportunists, malefactors and…nation-states [posing]… threat to democratic societies….

Twitter is particularly distorted by its millions of robot accounts…[and] Facebook has admitted it was…hacked during the American election…[with] ‘junk news was shared [widely]…in the days leading up to the election’….

[Bots] seem human enough to ‘pass’: a name, perhaps a virtual appearance, a credit-card number and, if necessary, a profession, birthday and home address….[But] problem is almost certain to get worse…[and] better at mimicking humans….[Since] product reviews have been swamped by robots…commercial sabotage in the form of negative bot reviews is not hard to predict….

[The] defenses, usually in the form of Captchas…perversely reward whoever can beat them….[T]he greatest problem for a democracy is that companies…lack a serious financial incentive to do anything about matters of public concern….

[These] impersonation robots should be considered…enemies of mankind, like pirates and other outlaws…[so] ideal anti-robot campaign would employ a mixed technological and legal approach. Improved robot detection might help us find the robot masters…[and] deputizing private parties to hunt down bad robots…[with] law that makes it illegal to deploy any program that hides its real identity to pose as a human.

Automated processes should be required to state, ‘I am a robot’…. Using robots to fake support, steal tickets or crash democracy really is the kind of evil that science fiction writers were warning about….[W]hen support and opinion can be manufactured, bad or unpopular arguments can win not by logic but by a novel, dangerous form of force — the ultimate threat to every democracy.” https://www.nytimes.com/2017/07/15/opinion/sunday/please-prove-youre-not-a-robot.html?ref=opinion

Russell Conjugation -

“We are told that we are entitled to our own opinions but not our own facts…[but] the war for our minds and attention is now increasingly being waged over [feelings]….

[T]he quest to control information has largely been lost by institutions, with a race on to weaponize empathy by understanding…and tweaking the social media algorithms….

[I]t is not that we don’t have our own opinions so much as that we have too many contradictory ones, and it is generally our emotional state alone which determines on which ones [win]….

Russell Conjugation…[shows] how our rational minds are shielded from understanding the junior role factual information generally plays relative to empathy in our formation of opinions….

[It seems] most words and phrases are actually defined not by a single dictionary description, but rather by two distinct attributes: I) The factual content of the word or phrase. II) The emotional content of the construction…[which means] synonyms for a positive word like ‘whistle-blower’ cannot be used in its place, as they are almost universally negative (with ‘snitch,’ ‘fink,’ ‘tattletale’ being representative examples)….

[T]he human mind is constantly looking ahead well beyond what is true or false to ask, ‘What is the social consequence of accepting the facts?’”

Find more of my ideas on Medium at A Passion to Evolve.

Or click the Follow button below to add me to your feed.

Prefer a weekly email newsletter? No ads, no spam, and I never sell the list. Email me at dochuston1@gmail.com with “add me” in the subject line.

May you live long and prosper!
Doc Huston


Doc Huston
A Passion to Evolve

Consultant & Speaker on future nexus of technology-economics-politics, PhD Nested System Evolution, MA Alternative Futures, Patent Holder — dochuston1@gmail.com