News — At The Edge — 6/3

Each week I’m amazed at the issues that coalesce below the surface of mainstream news. This past week, beyond one article related to nuclear war, the growing daily impact of artificial intelligence dominated.

As much as AI affords us great benefits, the fact is we’re in the process of undertaking a social experiment of unparalleled scope and scale within an incredibly brief timespan.

Setting aside concerns about artificial general intelligence (AGI) and the so-called intelligence explosion, the diversity of applications is immense, as is the range of errors, missteps and dangerous consequences that can result, and none of it receives sufficient public attention.

Two of my Medium articles speak to risks associated with AI: Artificial Intelligence (AI) Community Is Playing A Risky Game With Us and Why You Should Fear Artificial Intelligence-AI.

Also, I have a new article you might find interesting, What Happens After Trump & Pence?

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Artificial Intelligence Issues —

Artificial Intelligence Owes You an Explanation —

“[Most] autonomous devices and A.I.-powered programs — including personal A.I. assistants…self-driving cars…and smart appliances…provide little to no transparency for their decisions….

Whether through user error, poor design [etc.]…technology can make suspect decisions….[like] Google Home conspiracy theories, or…Tay, Microsoft’s A.I. [racism]….[A]s A.I. programs and autonomous devices…expand into decisions that have more serious consequences…affecting justice, health, well-being, and…life and death, the stakes become much higher….

Enter the right to an explanation [that would]…require that autonomous devices and programs tell consumers how the A.I. reached a decision…to ensure [they]…do not enable discrimination, exert hidden political pressures, and engage in…unfair or illegal business practices…[and] providing consumers with personalized, easy to understand algorithmic transparency….

In 2016, the European Parliament…[adopted] major changes to how companies handle the personal data they gather about EU-based consumers…they are required to inform individuals whether ‘automated decision-making, including profiling’ is involved in processing that data and provide them with ‘meaningful information about the logic involved’ with that processing….

[I]n the EU, A.I. owes you an explanation every time it uses your personal data to choose a particular recommendation or action…to evaluate and potentially correct these kinds of decisions and…to see how their personal data is used to generate results….[So] ‘explanation would, at a minimum, provide an account of how input features relate to…[and] play the largest role in prediction?’….

[Users] should be able to review their personal data and how A.I. relies on it…[and] be able to provide guidance to A.I., correcting mistaken data and telling it which personal data is more important than others…and adjust the preset priorities….

[So] the right to an explanation also functions as a data protection tool much more powerful than…in the United States, such as laws that require individuals to consent before a third party can use or disclose their data and that require businesses to notify users if their data may have been breached by hackers….

[T]he right to an explanation lets you see how the information you’re handing over is used in context and can be used to grant you greater control of what you want to input and how it’s processed….[Many] ‘decisions are hard to explain, even with full access to the algorithm. Trade secrets and intellectual property could be at stake’…[so] potentially onerous requirements….

[The] recent lack of concern for constituents’ personal data — make it unlikely that Congress or state legislatures would adopt a similar right to an explanation anytime soon.” http://www.slate.com/articles/technology/future_tense/2017/05/why_artificial_intelligences_should_have_to_explain_their_actions.html?wpsrc=sh_all_dt_tw_top
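The article’s minimum standard for an explanation, an account of how input features relate to a prediction and which played the largest role, is easy to make concrete for simple models. Here is a minimal sketch in Python, assuming a hypothetical loan-scoring model built with scikit-learn; the feature names, training data, and applicant are all invented for illustration, not drawn from any real system.

```python
# Minimal sketch of a per-decision explanation for a hypothetical
# loan-approval model. All names and numbers are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["income", "debt_ratio", "years_employed", "late_payments"]

# Toy training data: six past applicants, label 1 = approved.
X = np.array([
    [65, 0.20, 8, 0],
    [40, 0.55, 2, 3],
    [90, 0.10, 12, 0],
    [30, 0.60, 1, 4],
    [55, 0.35, 5, 1],
    [75, 0.25, 10, 0],
])
y = np.array([1, 0, 1, 0, 1, 1])

model = LogisticRegression().fit(X, y)

def explain(applicant):
    """Print each feature's contribution to the log-odds of approval,
    largest first -- the 'which inputs mattered most' accounting the
    right to an explanation calls for."""
    contributions = model.coef_[0] * applicant
    for name, c in sorted(zip(FEATURES, contributions),
                          key=lambda t: -abs(t[1])):
        print(f"{name:>15}: {c:+.3f}")

explain(np.array([45, 0.50, 3, 2]))
```

For a linear model these contributions, plus the intercept, sum exactly to the decision’s log-odds, so the account is faithful by construction; more opaque models need approximation techniques on top, which is part of why the requirement is contested as onerous.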

There are bots. Look around. —

“The marketplace of ideas…[is] struggling with the increasing incidence of algorithmic manipulation and disinformation campaigns…[and something] very similar happened in finance with the advent of high-frequency trading…used to distort information flows…[with] lessons we can take from that….

Efficiency, Technology and Manipulation in Finance….Algorithmic trading began [in the 1980s]…[and] eliminated the gatekeepers: human order-routers (brokers) and human matching engines [‘specialists’]…[so] now computers monitor the market and decide what to trade…[narrowing] spreads [so]…prices are consistent even across exchanges and geographical boundaries….

[This] increasing efficiency [continued until] unanticipated feedback loops, bad code, and strange algorithmic interactions [led] to steep dives or spikes in stock prices…flash crashes [send]…shockwaves through the market globally, impacting all asset types across all exchanges; the entire system is thrown into chaos while people try to sort out what’s going on.

So, while automation has been a net positive for the market, that side effect — fragility — negatively impacts and erodes trust in the health of the entire system…[because] ‘high-frequency trading’ is about as precise as ‘fake news’….

Some HFT strategies…increase liquidity and improve price discovery. But others are very harmful…[and] involve intentional, deliberate, and brazen market manipulation…[enticing] other market participants [and]…algorithms — to respond in a way that benefits…the manipulation strategy….And in the early days of HFT, slimy people could do bad things with relative ease.

Efficiency, Technology and Manipulation in Ideas….[W]e eliminated gatekeepers in [media]…[so] anybody could play…[and] social platforms became idea exchanges….

If a crowd [tries]…the systems are phenomenally easy to game…[because the] crowd doesn’t have to be real people, and the content need not be true or accurate…[giving] us manipulative bots and an epidemic of ‘fake news’…an imprecise term [for]…disingenuous content: clickbait, propaganda, misinformation, disinformation, hoaxes, and conspiracy theories….

Disinformation campaigns are [worst]…because the goal [is]…to introduce fear, uncertainty, and doubt…[and] disseminated strategically…outside of the mainstream press…[which] resonates better with the media-distrusting folks…[then] spread via mass coordinated action, supplemented by bot networks and sockpuppets (‘fake people’).

Once it trends, it’s nearly impossible to debunk. Social networks enable malicious actors to operate at platform scale….Bots and sockpuppets can be used to manipulate conversations, or to create the illusion of a mass groundswell of grassroots activity, with minimal effort…to manipulate ratings or recommendation engines, to create networks of sockpuppets with the goal of subtly shaping opinions, preying on proximity bias and confirmation bias….

[T]he social web is phenomenally easy to game because the…platforms all have the same business model: eyes and ads…[and] no cross-platform policing of malicious actors happening….

[Before] November 2016, there was no public acknowledgement by Twitter, Facebook, or Google that there even was a problem…[saying] it was too difficult to rein in [and]…chose to pretend that algorithmic manipulation was a nonissue, so…bore no responsibility for the downstream effects…[despite] a profound erosion of trust in social networks.

Markets can’t function without trust….But unlike in finance, it’s no one’s job to be looking at [or]…regulate this. So now what?….[First] governments are often ill-suited to regulate new technologies…[Second] tech industry leadership…has not gone out of its way to prevent [bot abuse]….

Europe is already talking about regulation, tech employees are staging internal mutinies, and one of Twitter’s founders is acknowledging a ‘broken’ internet. Things are going to change….[Perhaps] SRO (self-regulatory organization)…[might] avoid systemic problems that…wipe out trust…[that] negatively impacts the three things businesses care most about: top line revenue, downstream profit, and mitigating risk….

Here’s a new corollary [to the eyes-and-ads model]: they have to be real users.

But the moral case [is]…[this] increasingly impacts our democracy and shapes our lives….We’re heading down the path of an arms race in algorithmic manipulation, in which every company, political party, activist group, and candidate is going to feel compelled to leverage these strategies.

We’re at an inflection point, and only Big Tech has the power to reset the playing field. And in the meantime, the marketplace of ideas is growing increasingly inefficient as unchecked manipulation influences our most important conversations.” https://www.ribbonfarm.com/2017/05/23/there-are-bots-look-around/
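The piece’s claim that bots create the illusion of a groundswell “with minimal effort” is easy to quantify in a toy model. In the sketch below, a trending threshold is (hypothetically) triggered by a burst of posts in a fixed window; every number is invented for illustration, and real trending algorithms are far more elaborate, but the asymmetry between coordinated and organic activity survives any reasonable choice of parameters.

```python
# Toy model: a small coordinated botnet versus organic posting.
import random

POSTS_TO_TREND = 200  # hypothetical burst threshold for one time window

def organic_posts(users=10_000, p_post=0.01):
    """Uncoordinated users each mention a topic with small probability."""
    return sum(random.random() < p_post for _ in range(users))

def bot_posts(bots=50, posts_per_bot=4):
    """A coordinated network posts on schedule -- no randomness needed."""
    return bots * posts_per_bot

random.seed(1)
organic = organic_posts()      # typically ~100 posts: under the threshold
total = organic + bot_posts()  # 50 bots contribute 200 more on cue
print(f"organic={organic}, with bots={total}, "
      f"trends={total >= POSTS_TO_TREND}")
```

Fifty accounts, each posting four times, outweigh the expected output of ten thousand real users, which is the scale mismatch that makes the absence of cross-platform policing so consequential.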

Pushy AI Bots Nudge Humans to Change Behavior —

“When people work together on a project, they often come to think they’ve figured out the problems in their own respective spheres…[but] what if adding artificial intelligence…in the form of [a] bot…could actually make people in groups more productive?….

[Bots] making random…choices about 10 percent of the time — made all the difference…[but if they were] much noisier than that, the benefit soon vanished…[and] varied depending on whether [a bot] was…[at the] center of a network with lots of neighbors or on the periphery….

[A bot’s] mistake…improved a group’s performance…[because] conflict served ‘to nudge…humans to change their behavior in ways that appear to have further facilitated a global solution…[as] humans began to play the game differently’….[Just as] random mutations can help a species sidestep extinction….[The] ability to minutely track ‘how humans and algorithms collectively make decisions [is]…the future of quantitative social science’….

Many people are already accustomed to talking with a computer…[and such bots are] likely to turn up in crowdsourcing applications…[and] could also be useful in social media — to discourage racist remarks, for example….

[T]he opposite concern is that mixing humans and machines [in]…group decision-making could enable businesses — or bots — to manipulate people.” https://www.scientificamerican.com/article/pushy-ai-bots-nudge-humans-to-change-behavior/?wt.mc=SA_Twitter-Share
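The mechanism the researchers describe, a bot that mostly behaves sensibly but chooses at random roughly 10 percent of the time, can be sketched in a few lines. The consensus game below is a generic stand-in of my own, not a reproduction of the study’s network-coordination experiment; only the noise level is taken from the article.

```python
# Agents on a line copy their neighbors' majority color. A half-and-half
# split is a stable deadlock (ties keep the current color), and a little
# random noise is what lets the group escape it.
import random

def run(noise, n=10, steps=50_000, seed=0):
    rng = random.Random(seed)
    colors = [0] * (n // 2) + [1] * (n - n // 2)  # deadlocked split
    for _ in range(steps):
        i = rng.randrange(n)
        if rng.random() < noise:
            colors[i] = rng.choice([0, 1])  # the occasional 'mistake'
        else:
            nbrs = [colors[j] for j in (i - 1, i + 1) if 0 <= j < n]
            if nbrs.count(0) != nbrs.count(1):
                colors[i] = max(set(nbrs), key=nbrs.count)
        if len(set(colors)) == 1:
            return True  # global consensus reached
    return False

print("no noise :", run(noise=0.0))   # stuck forever: False
print("10% noise:", run(noise=0.10))  # typically escapes: True
```

With zero noise the split is provably stable, while occasional random choices let the boundary between the two blocs wander until one color takes over, the same logic as the random-mutation analogy in the quote.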

To err is algorithm: Algorithmic fallibility and economic organization —

“[Below the] surface of some of today’s biggest tech controversies…[is an] algorithm misfiring…[like] hate speech…violent videos…[directing] people looking for information about the Holocaust to neo-Nazi websites…[and] their failures have costs….

For an economist…[the] question is how much value will the algorithm create with its decisions…[which] depends on two factors: its accuracy (the probability [of]…a correct decision), and the balance between the reward from a correct decision and the penalty from an erroneous one.

Riskier decisions (where penalties are big compared to rewards) should be made by highly accurate algorithms…[and/or use] human supervisors to check the decisions made by the algorithm and fix any errors…[from] wrong decisions….

[When we] scale up the number of [decisions, do] algorithms gain or lose accuracy?…[There are] two interesting races going on….

  1. a race between an algorithm’s ability to learn from the decisions it makes, and the amount of information that it obtains from new decisions. To make things worse, when an algorithm becomes very popular (makes more decisions), people have more reasons to game it…[and] no matter how much more data you collect…[it’s] impossible to make perfect predictions about a complex, dynamic reality….
  2. a race between the data scientists creating the algorithms and the supervisors checking these algorithms’ decisions….[Thus] accuracy and the increase in labor costs…limit the number of algorithmic decisions an organization can make economically….

[So] many interesting organizational and policy implications….

  1. Finding the right algorithm-domain fit….[So] organizations in ‘low-stakes’ environments can experiment with new and unproven algorithms….
  2. There are limits to algorithmic decision-making in high stakes domains…where the penalties from errors are high, such as health or the criminal justice system, and when dealing with groups who are more vulnerable to algorithmic errors…unless they are complemented with expensive human supervisors who can find and fix errors. This will create natural limits to algorithmic decision-making….
  3. The pros and cons of crowdsourced supervision….YouTube, Facebook and Google have all done this in response to their algorithmic controversies….
  4. Why you should always keep a human in the loop…[as] a buffer against sudden declines in performance if (as) the accuracy of algorithms decreases…[and] letting everyone know that there is a problem…[especially] where errors create penalties with a delay, or penalties that are hard to measure or hidden…[like the] inability to discriminate between real news and hoaxes, [which] creates costs for society, potentially justifying stronger regulations….
  5. From abstract models to real systems….Before we use economic models…[we] need to define and measure model accuracy, penalties and rewards, changes in algorithmic performance due to environmental volatility, levels of supervision and their costs….

[Coda] ‘with every passing year, economics must become more…about the design of the machines that mediate human social behavior [and]…guides people in a more direct, detailed and literal way than does policy. Another way to put it is that economics must turn into a large-scale, systemic version of user interface design.’” http://www.nesta.org.uk/blog/err-algorithm-algorithmic-fallibility-and-economic-organisation
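The economic framing at the top of the piece reduces to one line of arithmetic: if an algorithm decides correctly with probability p, earning reward R, and incorrectly with probability 1 − p, costing penalty P, its expected value per decision is pR − (1 − p)P, which is positive only when p exceeds P / (R + P). The sketch below works that through with invented numbers.

```python
# Expected value per algorithmic decision, per the accuracy/reward/penalty
# framing above. All example numbers are illustrative.
def expected_value(p, reward, penalty):
    """EV per decision = p * reward - (1 - p) * penalty."""
    return p * reward - (1 - p) * penalty

def break_even_accuracy(reward, penalty):
    """Accuracy where EV = 0: p* = penalty / (reward + penalty)."""
    return penalty / (reward + penalty)

# Low-stakes domain (say, a music recommendation): errors are cheap.
print(break_even_accuracy(reward=1, penalty=1))       # 0.5
# High-stakes domain (say, a triage decision): errors are 99x costlier.
print(break_even_accuracy(reward=1, penalty=99))      # 0.99
print(expected_value(p=0.95, reward=1, penalty=99))   # -4.0: still negative
```

A 95-percent-accurate algorithm is comfortably profitable in the first domain and value-destroying in the second, which is the article’s point: high-penalty domains demand either near-perfect accuracy or human supervisors, whose labor cost in turn caps how many decisions can be economically automated.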

Nuclear War Issue —

Why 3,000 Scientists Think Nuclear Arsenals Make Us Less Safe —

“[UN] member states are gathering…to negotiate a nuclear weapons ban, and…[3,000] scientists from 84 countries have signed an open letter in support…[though] Iran has no nukes and North Korea lacks missiles capable of reliably delivering their dozen or so Hiroshima-scale bombs….

[A] nuclear war between the superpowers…[is the] scenario most likely to kill you…[with] 14,000 nuclear weapons…[some] hundreds of times more powerful than North Korea’s and those dropped on Japan…[all ready for] launch on minutes’ notice…[and the] risk of nuclear winter….

Years of near-freezing summer temperatures would eliminate most of our food production….[If] thousands of Earth’s largest cities were reduced to rubble and global infrastructure collapsed, [then]…whatever small fraction of all humans didn’t succumb to starvation, hypothermia or epidemics would probably need to cope with roving, armed gangs desperate for food….

[I]f one of the two superpowers were able to launch its full nuclear arsenal against the other without any retaliation…nuclear winter might still assure the attacking country’s self-destruction….

[A] limited nuclear exchange between India and Pakistan could cause enough cooling and agricultural disruption to endanger up to 2 billion people, mostly outside the warring countries….

[Thus] nuclear powers…endanger everyone else without asking their permission…[a danger] exacerbated by a seemingly endless series of near-misses in which nuclear war has come close to starting by accident…[or by a] malfunctioning early-warning system in a nation that they are not threatening….

Rather than disarming, the U.S. and Russia have recently announced massive investments in novel nuclear weapons….U.S. plans to spend a trillion dollars replacing most of its nuclear weapons with new ones that are more effective for a first strike….[So] ‘probability of a nuclear calamity is higher today [than]…it was during the cold war’….

[Globally] most governments are frustrated that a small group of countries with a minority of the world’s population insist on retaining the right to ruin life on Earth for everyone…[yet] won’t give them up [unless]…pressured into doing so by the majority of the world’s nations and citizens.” https://blogs.scientificamerican.com/observations/why-3-000-scientists-think-nuclear-arsenals-make-us-less-safe/

Find more of my ideas on Medium at A Passion to Evolve.

Or click the Follow button below to add me to your feed.

Prefer a weekly email newsletter (no ads, no spam, and I never sell the list)? Email me at dochuston1@gmail.com with “add me” in the subject line.

May you live long and prosper!
Doc Huston
