PODCAST

Rosie Campbell on responsible research and publication norms in AI

APPLE | GOOGLE | SPOTIFY | OTHERS

Editor’s note: This episode is part of our podcast series on emerging problems in data science and machine learning, hosted by Jeremie Harris. Apart from hosting the podcast, Jeremie helps run a data science mentorship startup called SharpestMinds.

When OpenAI developed its GPT-2 language model in early 2019, it initially chose not to release the full model, citing concerns over its potential for malicious use, as well as the need for the AI industry to experiment with new, more responsible publication practices that reflect the increasing power of modern AI systems.

This decision…


PODCAST

Jakob Foerster on AI-powered killer drones

Automated weapons mean fewer casualties, faster reaction times, and more precise strikes. They’re a clear win for any country that deploys them. You can see the appeal.

But they’re also a classic prisoner’s dilemma. …


PODCAST

Nicolas Miailhe on the case for global coordination on AI

In December 1938, a frustrated nuclear physicist named Leo Szilard wrote a letter to the British Admiralty telling them that he had given up on his greatest invention — the nuclear chain reaction.

The idea of a nuclear chain reaction won’t work. There’s no need to keep this patent secret, and indeed there’s no need…


PODCAST

Yan Li shares lessons learned from using technology for good around the world

We’ve recorded quite a few podcasts recently about the problems AI does and may create, now and in the future. We’ve talked about AI safety, alignment, bias and fairness.

These are important topics, and we’ll continue to discuss them, but I also think it’s important not to lose sight of the value that AI and…


PODCAST

Ryan Carey on the quest to understand the incentives of AI systems

AI safety researchers are increasingly focused on understanding what AI systems want. That may sound like an odd thing to care about: after all, aren’t we just programming AIs to want certain things by providing them with a loss function, or a number to optimize?

Well, not necessarily. It turns out that AI systems can…


PODCAST

Melanie Mitchell on the reasons why superhuman AI might not be around the corner

As AI systems have become more powerful, an increasing number of people have been raising the alarm about their potential long-term risks. As we’ve covered on the podcast before, many now argue that those risks could even extend to the annihilation of our species by superhuman AI systems that are slightly misaligned with human values.


PODCAST

Josh Fairfield on regulating intelligence and emerging technologies

Powered by Moore’s law and a cluster of related trends, technology has been improving at an exponential pace across many sectors. AI capabilities in particular have been growing at a dizzying pace, and it seems like every year brings us new breakthroughs that would have been unimaginable just a decade ago.


PODCAST

Stuart Armstrong on humanity’s far future and how things might go amazingly well (or terribly badly)

Paradoxically, it may be easier to predict the far future of humanity than to predict our near future.

The next fad, the next Netflix special, the next President — all are nearly impossible to anticipate. That’s because they depend on so many trivial factors: the next fad could be triggered by a viral video someone filmed on a whim, and well, the same could be true of the next Netflix special or President for that matter.

But when it comes to predicting the far future of humanity, we might oddly be on more…


PODCAST

Georg Northoff explains how a good theory of consciousness could lead to better AI

For the past decade, progress in AI has mostly been driven by deep learning — a field of research that draws inspiration directly from the structure and function of the human brain. By drawing an analogy between brains and computers, we’ve been able to build computer vision, natural language and other predictive systems that would have been inconceivable just ten years ago.

But analogies work two ways. Now that we have self-driving cars and AI systems that regularly outperform humans at increasingly complex tasks, some are wondering whether reversing the usual approach —…


PODCAST

Ethan Perez explains how AI debate could get us to superintelligence — safely

To select chapters, visit the YouTube video.

Most AI researchers are confident that we will one day create superintelligent systems — machines that can significantly outperform humans across a wide variety of tasks.

If this ends up happening, it will pose some potentially serious problems. Specifically: if…

Jeremie Harris

Co-founder of SharpestMinds, host of the Towards Data Science podcast. ⚛︎ Physics | 🤖 Machine learning | 🤔 Philosophy | 🚀 Startups.
