By Neel V. Patel
On the outside, Mars is a cold, barren hellscape. But beneath the surface, it is teeming with quakes and other geological activity.
That’s the story emerging from the first results of NASA’s InSight mission, published across several papers in Nature Geoscience (and one in Nature Communications). InSight is a lander that’s been perched on the surface of Mars since November 2018 at a location known as Elysium Planitia.
“We finally have established that Mars is a seismically active planet,” says Bruce Banerdt, the principal investigator for InSight. Its seismic activity is greater than the moon’s, but less than Earth’s. …
By Oren Etzioni
Could we wake up one morning dumbstruck that a super-powerful AI has emerged, with disastrous consequences? Books like Superintelligence by Nick Bostrom and Life 3.0 by Max Tegmark, as well as more recent articles, argue that malevolent superintelligence is an existential risk for humanity.
But one can speculate endlessly. It’s better to ask a more concrete, empirical question: What would alert us that superintelligence is indeed around the corner?
We might call such harbingers canaries in the coal mines of AI. If an artificial-intelligence program develops a fundamental new capability, that’s the equivalent of a canary collapsing: an early warning of AI breakthroughs on the horizon. …
By Gideon Lichfield
Google’s most advanced computer isn’t at the company’s headquarters in Mountain View, California, nor anywhere in the febrile sprawl of Silicon Valley. It’s a few hours’ drive south in Santa Barbara, in a flat, soulless office park inhabited mostly by technology firms you’ve never heard of.
An open-plan office holds several dozen desks. There’s an indoor bicycle rack and designated “surfboard parking,” with boards resting on brackets that jut out from the wall. Wide double doors lead into a lab the size of a large classroom. …
By David Rotman
Gordon Moore’s 1965 forecast that the number of components on an integrated circuit would double every year until it reached an astonishing 65,000 by 1975 is the greatest technological prediction of the last half-century. When it proved correct in 1975, he revised what has become known as Moore’s Law to a doubling of transistors on a chip every two years.
Since then, his prediction has defined the trajectory of technology and, in many ways, of progress itself.
Moore’s argument was an economic one. Integrated circuits, with multiple transistors and other electronic devices interconnected with aluminum metal lines on a tiny square of silicon wafer, had been invented a few years earlier by Robert Noyce at Fairchild Semiconductor. Moore, the company’s R&D director, realized, as he wrote in 1965, that with these new integrated circuits, “the cost per component is nearly inversely proportional to the number of components.” It was a beautiful bargain — in theory, the more transistors you added, the cheaper each one got. Moore also saw that there was plenty of room for engineering advances to increase the number of transistors you could affordably and reliably put on a chip. …
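To make the arithmetic behind that forecast concrete: ten annual doublings multiply the component count by 2^10 = 1,024, which is how a mid-1960s chip reaches Moore's figure of roughly 65,000 by 1975. A minimal sketch of that arithmetic in Python, where the 64-component starting point and the fixed per-chip cost are illustrative assumptions rather than figures from the article:

```python
# Illustrative arithmetic for Moore's 1965 forecast.
# Assumptions (not from the article): a chip with 64 components in 1965,
# and a roughly fixed fabrication cost per chip, so that the cost per
# component falls as 1/N -- Moore's "nearly inversely proportional" claim.

CHIP_COST_DOLLARS = 100.0  # hypothetical fixed cost to fabricate one chip

components = 64  # illustrative 1965 starting point
for year in range(1965, 1976):
    cost_per_component = CHIP_COST_DOLLARS / components
    print(f"{year}: {components:>6} components, "
          f"${cost_per_component:.4f} per component")
    components *= 2  # Moore's original forecast: a doubling every year

# By 1975 the count reaches 65,536 -- Moore's "astonishing 65,000".
```

The printout shows the "beautiful bargain" in miniature: with the chip's cost held roughly constant, each doubling of the component count halves the cost per component.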
By Tim Maughan
Bruce Sterling wasn’t originally meant to be part of the discussion. It was March 13, 2010, in Austin, Texas, and a small group of designers were on stage at the South by Southwest Interactive festival, talking about an emerging discipline they called “design fiction.”
“They asked me to join the panel at the last minute,” Sterling tells me, laughing. “They knew that I’d been [involved with] South by Southwest for a long time and this would give them some cred.”
A science fiction novelist who’d helped launch the cyberpunk movement in the 1980s, Sterling had actually coined the term design fiction in a 2005 book, but he hadn’t exactly taken ownership of the still-nebulous concept. What happened that day made it much clearer, though, and set off an explosion of ideas for everyone in attendance. …
By Tanya Basu
For six hours, a circular robot flits up and down a wall, sketching out a lotus with myriad intricate designs embedded in each petal. Four marker pens color in the designs. It looks beautiful. But as soon as it’s complete, the robot reverses course, erasing the image and leaving the wall bare, as if the drawing had never been there.
This is a mandala, reimagined. These complex patterns are meant to reflect the visions that monks see while meditating on virtues such as compassion and wisdom, says Tenzin Priyadarshi, a Buddhist monk and the CEO of the Dalai Lama Center for Ethics and Transformative Values at MIT. To automate the elaborate process of creating and destroying them, an important tradition in Buddhism, Priyadarshi teamed up with Carlo Ratti, an MIT architect and the designer of Scribit, a $500 “write and erase robot” that uses special markers to draw and erase art on a wall. …
By Antonio Regalado
The world is watching with alarm as China struggles to contain a dangerous new virus, now being called SARS-CoV-2. It has quarantined entire cities, and the US has put a blanket ban on travelers who’ve been there. Health officials are scrambling to understand how the virus is transmitted and how to treat patients.
But in one University of North Carolina lab, there’s a different race. Researchers are trying to create a copy of the virus. From scratch.
Led by Ralph Baric, an expert in coronaviruses — which get their name from the crown-shaped spike they use to enter human cells — the North Carolina team expects to recreate the virus starting only from computer readouts of its genetic sequence posted online by Chinese labs last month. …
By Brian Bergstein
In less than a decade, computers have become extremely good at diagnosing diseases, translating languages, and transcribing speech. They can outplay humans at complicated strategy games, create photorealistic images, and suggest useful replies to your emails.
Yet despite these impressive achievements, artificial intelligence has glaring weaknesses.
Machine-learning systems can be duped or confounded by situations they haven’t seen before. A self-driving car gets flummoxed by a scenario that a human driver could handle easily. An AI system laboriously trained to carry out one task (identifying cats, say) has to be taught all over again to do something else (identifying dogs). In the process, it’s liable to lose some of the expertise it had in the original task. …
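That last failure mode has a name in the research literature: catastrophic forgetting. Here is a minimal sketch of the effect, using scikit-learn's bundled digits dataset as a stand-in for the cats-and-dogs example; the dataset, network size, and training schedule are illustrative assumptions, not details from the article:

```python
# Toy demonstration of catastrophic forgetting: a small network trained on
# one task loses accuracy on it after being trained on a second task.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
task_a = y < 5   # "task A": digits 0-4 (stand-in for identifying cats)
task_b = ~task_a # "task B": digits 5-9 (stand-in for identifying dogs)

clf = MLPClassifier(hidden_layer_sizes=(32,), random_state=0)
classes = np.arange(10)

# Phase 1: train on task A only, for several passes over the data.
for _ in range(30):
    clf.partial_fit(X[task_a], y[task_a], classes=classes)
print("Task A accuracy after training on A:",
      clf.score(X[task_a], y[task_a]))

# Phase 2: train on task B only, with no rehearsal of task A.
for _ in range(30):
    clf.partial_fit(X[task_b], y[task_b])
print("Task A accuracy after training on B:",
      clf.score(X[task_a], y[task_a]))  # typically collapses toward zero
```

After the second phase the network's predictions shift almost entirely to the new classes, so its accuracy on the first task collapses: exactly the "liable to lose some of the expertise" problem described above.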
By Karen Hao
Every year, OpenAI’s employees vote on when they believe artificial general intelligence, or AGI, will finally arrive. It’s mostly seen as a fun way to bond, and their estimates differ widely. But in a field that still debates whether human-like autonomous systems are even possible, half the lab bets it is likely to happen within 15 years.
In the four short years of its existence, OpenAI has become one of the leading AI research labs in the world. It has made a name for itself producing consistently headline-grabbing research, alongside other AI heavyweights like Alphabet’s DeepMind. …
By Angela Chen and Karen Hao
Perhaps you’ve heard of AI conducting interviews. Or maybe you’ve been interviewed by one yourself. Companies like HireVue claim their software can analyze video interviews to figure out a candidate’s “employability score.” The algorithms don’t just evaluate a candidate’s facial expressions and body posture; they also claim to tell employers whether the interviewee is tenacious or good at working on a team. These assessments could have a big effect on a candidate’s future. In the US and South Korea, where AI-assisted hiring has grown increasingly popular, career consultants now train new grads and job seekers on how to interview with an algorithm. …