History and Futurism, Machine Learning, and Death
Thoughtful Net #49. Interesting links from the past few weeks.
I’m going on holiday this week: Portugal, for eight days. Many people say they like to take a break from technology on their holidays, but I like to enhance my holidays with technology. Among other things, I’ve got a great list of food and drink spots in Lisbon saved in Google Maps, will be using a local transport app to find my way around the public transport of Setúbal province, have some new books on my eReader, and will of course be using my phone’s camera and Instagram to try to capture some memories for the future.
It won’t be business as usual; I’ll mostly not be using Twitter (or, at least, not engaging much), and won’t be on email except for emergencies. But I don’t see technology as a negative force in my day-to-day life, and won’t treat it differently when I’m away.
Estou de férias (“I’m on holiday”). Look after the place while I’m gone.
What We Get Wrong About Technology. Tim Harford, ‘the undercover economist’, on how technology prediction often misses the small but fundamental changes — what he terms ‘the toilet paper principle’. Historically informed futurology is my sweet spot.
Forecasting the future of technology has always been an entertaining but fruitless game. But history can teach us something useful: not to fixate on the idea of the next big thing, the isolated technological miracle that utterly transforms some part of economic life with barely a ripple elsewhere.
Google, Mozilla, And The Race To Make Voice Data For Everyone. Steven Melendez on Mozilla’s Common Voice, a project aimed at providing a free, open dataset of human voice samples, required as we move into an era of voice assistants and natural interfaces.
One of the key reasons to have a large and wide variety of voice samples is so that the algorithms that are trained on it avoid having an unintended bias. As anyone with a heavy accent who has tried to use a voice assistant can attest, these systems are still better at understanding plain English than anything else.
Machines Taught by Photos Learn a Sexist View of Women. Tom Simonite looks at gender bias in the freely available datasets used to train computer vision algorithms. Biased data is a more pernicious and immediate threat than super-intelligent killer AI.
Machine-learning software trained on the datasets didn’t just mirror those biases, it amplified them. If a photo set generally associated women with cooking, software trained by studying those photos and their labels created an even stronger association.
How YouTube Perfected the Feed. Casey Newton on YouTube’s transformation by a Facebook-style feed, and the ‘Google Brain’.
Integrating Brain has had an immense impact: more than 70 percent of the time people spend watching videos on the site is now driven by YouTube’s algorithmic recommendations. Each day, YouTube recommends 200 million different videos to users, in 76 languages. And the aggregate time people spend watching videos on YouTube’s home page has grown 20 times larger than what it was three years ago.
How Hate Groups Forced Online Platforms to Reveal Their True Nature. The consistently excellent John Herrman on how the big platforms’ stated commitment to neutrality and free speech has been tested and exposed by nationalist hate groups.
These companies promised something that no previous vision of the public sphere could offer: real, billion-strong mass participation; a means for affinity groups to find one another and mobilize, gain visibility and influence. This felt and functioned like freedom, but it was always a commercial simulation.
Fighting Neo-Nazis and the Future of Free Expression. The Electronic Frontier Foundation on the dangers posed by platforms that have the power to censor online conversation without accountability. By Jeremy Malcolm, Cindy Cohn, and Danny O’Brien.
Internet intermediaries… control so much online speech, the consequences of their decisions have far-reaching impacts on speech around the world. Every time a company throws a vile neo-Nazi site off the Net, thousands of less visible decisions are made by companies with little oversight or transparency.
The First Social Media Suicide. Beautiful and heartbreaking piece by Rana Dasgupta describing the suicide of an isolated young woman in France, in the age of live-streamed video. Caution: as you might expect, it contains hard-hitting descriptions of suicide.
More than a thousand were now watching, and most had missed her message. The mood was raucous: people joked about her appearance and expressed lewd anticipation about what she might do, even as others begged them to stop the stream of comments in the hope she might finish what she was trying to say.
Hard Questions: What Should Happen to People’s Online Identity When They Die? Explaining the moral quandaries of handling death on an online platform; more important than ever as we move to a fully globally-connected world. Slightly dry but very interesting piece from Monika Bickert, Facebook’s Director of Global Policy Management.
If a bereaved spouse asks us to add her as a friend to her late husband’s profile so she can see his photos and posts, how do we know if that’s what her husband would have wanted? Is there a reason they were not previously Facebook friends? Does it mean something if she had sent him a friend request when he was alive and he had rejected it?
16 Ways QR Codes are Being Used in China. I love Connie Chan’s articles on China; they offer great insights into very different ways of living in an online culture.
We’ve talked a lot about the rise of QR codes in Asia, but they may now finally be moving from being a “joke” to being more widely adopted in other places as well. Simply put, QR codes let you hyperlink and bookmark the physical world.
Facebook Figured Out My Family Secrets, And It Won’t Tell Me How. Kashmir Hill looks into Facebook’s non-public social graph and the uncanny way it recommends new connections to you.
People are generally aware that Facebook is keeping tabs on who they are and how they use the network, but the depth and persistence of that monitoring is hard to grasp. And People You May Know, or “PYMK” in the company’s internal shorthand, is a black box.
The Thoughtful Net is an occasional (less than weekly, more than monthly) publication collecting great writing about the internet and technology, culture, information, society, science, and philosophy. If you prefer to receive it in your inbox you can follow this publication or subscribe to the email newsletter.