News — At The Edge — 9/23

Doc Huston
A Passion to Evolve
8 min read · Sep 23, 2017


This week there are three sets of articles and a video.

From the past — the death of the Russian soldier who prevented nuclear war by ignoring a computer telling him his country was under attack. An extraordinary feat — ignoring the machine, military orders and training, and the risk of being “dead” wrong — that is unlikely to occur again, and could not have occurred at all had artificial intelligence been in control.

From the future — in the BabyX simulation of artificial life and the coming robot arms race, we see technology advancing in ways that are fast, wild and increasingly risky.

For the present — cheap sensor-data transmission, why we cannot control AI, AI’s economic dangers and social media’s problem of selling ads to racists — it is important to stop being reactive when it comes to our digital future.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Issue from the Past –

Stanislav Petrov: The man who may have saved the world -

“Thirty years ago, on 26 September 1983…[in] early hours of the morning, the Soviet Union’s early-warning systems detected an incoming missile strike from the United States…[and] protocol for the Soviet military would have been to retaliate with a nuclear attack of its own…[but Petrov] decided not to report them to his superiors, and instead dismissed them as a false alarm…[and] a dereliction of duty….

In the political climate of 1983, a retaliatory strike would have been almost certain….The system was telling him that the…reliability of that alert was ‘highest’…[with] no doubt. America had launched a missile….[Instead] Petrov called the…army’s headquarters and reported a system malfunction.

If he was wrong, the first nuclear explosions would have happened minutes later…[and he] admits he was never absolutely sure that the alert was a false one….[He] was the only officer in his team who had received a civilian education…[so] if somebody else had been on shift, the alarm would have been raised.” http://www.bbc.com/news/world-europe-24280831

Issues from the Future –

BabyX v3.0 Interactive Simulation (1.75 min. video)

Is BabyX the Future of Silicon-Based Life Forms? -

“[In] simulated artificial life form, or animat…BabyX is [project]…simulating the neural machinery of an infant human…brings a host of moral and philosophical questions regarding artificial life: What are our duties and obligations…[and] what if any legal status will they possess?….

[BabyX] is a computer generated psychobiological simulation…[with] some form of reinforcement learning being used for acquiring skills like playing the piano….[T]he graphics used for modeling…BabyX can be so spellbinding, that the difficult mathematics behind their brain circuitry gets brushed aside….

[From] video on BabyX, it appears the animat possesses many of the neural correlates of a human, including an artificial dopamine system and other pleasure-releasing brain structures.” https://medium.com/extremetech-access/is-babyx-the-future-of-silicon-based-life-forms-d6e077d7fb0c

Commentary: The coming robot arms race -

“[Russia’s] military exercise is underway…[but] in 2021, those troops might be sharing their battle space with…self-driving drones, tanks, ships and submersibles…without a guiding human hand…a truly revolutionary shift…[every] nation wants….

Critics have long feared countries might be more willing to go to war with unmanned systems…control might pass beyond human beings altogether….Musk has long warned…of some cataclysmic errors when it comes to artificial intelligence…[and] the development of autonomous weapons…[creates] devastating arms race….

’[The] leader in this will become ruler of the world,’ Putin [said]….[China] believed by some [to]…be the global leader [in]…developing autonomous swarms of drones…to fly themselves independently…[and] may be able to make their own tactical decisions…fight their own aerial dogfights….[without] direct supervision at all….

’Radical technological change begets radical government policy ideas’…[and] AI arms race could prove as revolutionary as the invention of nuclear weapons….[AI] could dramatically increase the efficiency of surveillance technology…[that’s] terrifying, particularly in the hands of a state with little…democratic oversight.

[B]y 2030, technological breakthroughs — not just AI, but quantum computing and beyond — would produce entirely unpredictable changes. Special force teams…[might] have a robotic and artificial intelligence component deployed alongside them….

Most countries deliberately keep their defense AI secret, ultimately fueling the arms race….Russia has long [trusted]…machines more than people…[but] Facebook shut down an AI experiment after programs involved began communicating with each other in a language the humans monitoring them could not understand….

Such technology is coming…[and] even relatively old military equipment increasingly can be retrofitted….Even if mankind can avoid a nuclear apocalypse…coming AI and robotic revolution may prove an equal existential challenge.”

https://uk.reuters.com/article/us-apps-robots-commentary/commentary-the-coming-robot-arms-race-idUKKCN1BT1XN

Issues for the Present –

A clever way to transmit data on the cheap -

“[A] technology called ‘LoRa’ (from ‘long range’)…allows computers to talk to each other with radio waves…[but] not easily blocked by walls, furniture and other obstacles…because LoRa uses lower-frequency radio waves than Wi-Fi…[and] make use of a technique called ‘chirp spread modulation’…[making] even faint LoRa signals easy to distinguish from background noise…[by] modulating it….

[Chips] consume almost no power at all…by choosing to earth its tiny aerial, or not, millions of times every second. When the aerial is earthed, part of the carrier wave will be absorbed. When it is not, it will be reflected…with the whole process controlled by three tiny, and thus very frugal, electronic switches made for less than 20 cents apiece.

The signals they generate can be detected at ranges of hundreds of metres. Yet with a power consumption of just 20 millionths of a watt, a standard watch battery should keep them going a decade or more. In fact, it might be possible to power them from ambient energy….[T]he chips are slow, transmitting data….

[Already] incorporated the chips into contact lenses and a skin patch. In hospitals, the chips could help track everything…[and] making their way into disposable drug-delivery devices that notify patients via their phones when their medication is running low.”

https://www.economist.com/news/science-and-technology/21728866-long-range-frugal-new-chip-could-be-just-what-smart-city-needs-clever-way
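The excerpt’s two key claims — that chirp modulation makes faint signals easy to pull out of noise, and that a 20-microwatt draw gives years of battery life — can both be sketched numerically. This is a minimal illustration with assumed parameters (symbol length, noise level, CR2032 coin-cell specs), not the actual LoRa specification:

```python
import numpy as np

# A LoRa-style "chirp" symbol sweeps frequency across the band; the
# receiver multiplies by the conjugate of a reference chirp
# ("de-chirping"), which collapses the sweep into a single tone. An
# FFT then shows the symbol as a sharp peak, even when the signal is
# buried below the per-sample noise floor.

N = 1024                                       # samples per symbol (assumed)
i = np.arange(N)
base = np.exp(2j * np.pi * i * i / (2 * N))    # reference up-chirp

def modulate(m):
    """Chirp symbol carrying value m (0 <= m < N)."""
    return base * np.exp(2j * np.pi * m * i / N)

def demodulate(signal):
    """De-chirp, then pick the FFT bin with the most energy."""
    return int(np.argmax(np.abs(np.fft.fft(signal * np.conj(base)))))

rng = np.random.default_rng(0)
noisy = modulate(300) + rng.normal(0, 2, N) + 1j * rng.normal(0, 2, N)
print(demodulate(noisy))          # recovers 300 at roughly -9 dB per-sample SNR

# Back-of-the-envelope check of the power figure, assuming a CR2032
# coin cell (~225 mAh at 3 V) and the article's 20-microwatt draw:
energy_j = 0.225 * 3600 * 3.0                     # ~2430 joules stored
years = energy_j / 20e-6 / (365.25 * 24 * 3600)
print(round(years, 1))            # ~3.9 years of *continuous* operation
```

The processing gain comes from the FFT coherently summing all N samples of the de-chirped tone (about 30 dB at N = 1024), which is why the peak stands out even when individual samples are mostly noise. The battery arithmetic gives roughly four years at continuous draw; duty-cycled transmission, which such chips rely on, is presumably how the article reaches “a decade or more.”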

Who is to blame for algorithmic outrage?

“[T]he ability to target advertising to unsavory groups generated or suggested by major internet companies. ‘Jew haters’? There aren’t many…Facebook’s ad backend said. Try adding ‘jews in an oven’ to broaden your reach, suggested Google. ‘Nazi’ could engage 18.6 million users, says Twitter….

[Alerted] the companies’ responses all struck the same notes: ‘this is against our rules, we have no idea how it happened’…[yet] perfectly happy to make money from advertising targeted at groups like ‘Hitler did nothing wrong’….How was it none of them, with their thousands of employees…saw this coming?

[Seems] these companies were unwilling to institute restrictions on the parts that make them money….And let’s not pretend this is the only such abuse…where they stand to gain — Facebook sold $100K (or 5 million rubles) worth of political ads to a Russian bot net….If they’re going to trumpet their leadership in and dedication to principles of openness and inclusivity, it is incumbent on them to carry it out with maximum transparency.”

https://techcrunch.com/2017/09/17/who-is-to-blame-for-algorithmic-outrage/

AI: Scary for the Right Reasons -

“[Any] technology can be used for good and bad…[and] much of the public discourse [reflect]…AI’s gone wrong (a scenario certainly worth discussing)…[but] before AI goes uncontrollable or takes over jobs, there [is]…larger danger: AI in the hands of governments and/or bad actors used to push self-interested agendas against the greater good….

With AI, the vast majority of current jobs may be dislocated regardless of skill or education level…[so] it is possible that emotional labor will remain the last bastion of skills that machines cannot replicate….

[There’s] an economic war going on between nations that is more threatening…[and] likely to get exponentially worse when AI is a factor in….[that] will further concentrate global wealth…and ‘cause’ the need for very different international approaches to development, wealth and disparity….

Capitalism is by permission of democracy, and democracy should have the tools to correct for disparity. Watch out Tea Party, you haven’t seen the developing hurricane heading your way….

[Also, we’ve] seen the integrity of our political system threatened by Russian interference and our global financial system threatened…[and] AI will dramatically escalate these incidents…as rogue nations and criminal organizations…press their agendas….

Imagine an AI agent…could unleash a locust of intelligent bot trolls [that]…destroy the very notion of public opinion….[This] has a strong chance of becoming a reality in the next decade….

[AI] is already on the radar of the authoritarian countries….Putin has talked about how AI leaders will rule the world….[China’s] accumulation of expertise [and]…very large funding…could create large power inequality….This disregard for data privacy and one way transfer of technology will lend…countries like China and Russia a huge advantage….

[We] need to rethink capitalism because efficiency will matter less, or…disparity will matter more….The biggest concern [is]…AI will dramatically worsen today’s cyber security issues and be less verifiable than nuclear technology…[so] dictators like Putin will have massively amplified clandestine power….

Not taking risks here might inadvertently be the largest risk we take.” https://hackernoon.com/ai-scary-for-the-right-reasons-185bee8c6daa

To control AI, we need to understand more about humans -

“[O]n the cusp of…ever-more-powerful artificial intelligence, the urgency of developing ways to ensure our creations always do what we want them to do is growing…not just how artificial intelligence works, but how humans work.

Humans are the most elaborately cooperative species…[and] outflank every other animal in cognition and communication — tools that have enabled a division of labor and shared living…[with] our market economies and systems of government….Humans are also the only species to have developed ‘group normativity’…[as] system of rules and norms [for]…what is collectively…not acceptable…[punished] with prisons and courts…[or] criticism and exclusion….

[With] AIs exercising free will…[what we’re] worried about is whether or not they will continue to play by and help enforce our rules….

But our complex normative social orders are less about ethical choices than they are about the coordination of billions of people making millions of choices on a daily basis….How that coordination is accomplished is something we don’t really understand.

Culture is a set of rules, but what makes it change…is something we have yet to fully understand. Law is another set of rules that we can change simply in theory but less so in reality….

AIs will only be able to integrate into our elaborate normative systems if they are built to read, and participate in, that system…[focusing] on the features of a system…[as] normative systems.

Are we prepared for AIs that start building their own normative systems — their own rules about what is acceptable and unacceptable for a machine to do…[as] basis for predicting what other machines will do. We have already seen AIs that surprise their developers by creating their own language to improve their performance on cooperative tasks….

[For] machines that follow the rules that multiple, conflicting, and sometimes inchoate human groups help to shape, we will need to understand a lot more about what makes each of us willing to do that, every day.” https://techcrunch.com/2017/09/13/to-control-ai-we-need-to-understand-more-about-humans/

Find more of my ideas on Medium at A Passion to Evolve.

Or click the Follow button below to add me to your feed.

Prefer a weekly email newsletter? No ads, no spam, and I never sell the list — email me at dochuston1@gmail.com with “add me” in the subject line.

May you live long and prosper!
Doc Huston


Doc Huston
A Passion to Evolve

Consultant & Speaker on future nexus of technology-economics-politics, PhD Nested System Evolution, MA Alternative Futures, Patent Holder — dochuston1@gmail.com