Will MacAskill: EA Global San Francisco 2018.09
My notes and questions on Will MacAskill’s 2018 EAG keynote: “How can effective altruism stay curious?”
A reminder of what effective altruism is all about: according to Wikipedia, “Effective altruism is a philosophy and social movement that uses evidence and reasoning to determine the most effective ways to benefit others.”
Will MacAskill — possibly my favorite ethicist for having written this Atlantic piece expounding a view which has always struck me as completely natural, yet seems largely eschewed by (at least) Western society — opened the 2018 San Francisco Effective Altruism Global (EAG)…
My father loved listening to the golden boys of the Rat Pack, especially Dean Martin. King of the Road is the song I recall best from an album that was played repeatedly in my childhood home. My father liked to have fun and openly shared his happiness (and sometimes sly self-satisfaction) when immersed in activities which pleased him. Playing or watching a sport he loved, conquering a challenge of some sort, basking in the fun of a classic comedy or musical, listening to his preferred music, sharing a glass of cognac with friends, appreciating the stars from a sailboat anchored…
Notes & questions on James Mickens’ talk at USENIX Security ‘18
If you haven’t seen James’ keynote and have an hour to spare, please do yourself a favor and watch the full talk. It’s entertaining, educational, and thought-provoking. Absolutely not to be missed — the most fun you’ll have while thinking about cybersecurity, machine learning, and computing more generally. Case in point, a fun still from his talk…
While James reads a book outside a cafe in SF, a magician tries to entice James to watch him perform a magic trick – only to be…
My notes on Epicenter podcast: Andrew Trask: OpenMined — A Decentralised Artificial Intelligence Platform
TL;DR: Federated, private machine learning: train ML models on users’ data without exposing, uploading, or aggregating that data. How? A user downloads a model and trains it on their own data; the training process yields an update to the model, and only that update is uploaded as a proposed improvement. The raw user data is never exposed, aggregated with other data, or transferred beyond the user’s control.
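To make the flow concrete, here is a minimal sketch of that idea in plain Python/NumPy. It is not OpenMined’s actual API (PySyft); the names `local_update` and `aggregate`, the linear model, and the learning rate are all illustrative assumptions. The point is only that each user computes and shares a weight update, while the raw data stays local.

```python
# Minimal sketch (NOT OpenMined/PySyft code): users train a downloaded model
# locally and upload only the resulting weight update; raw data never leaves
# the user's machine. Names and model choice are illustrative assumptions.
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One local gradient step on a linear model; returns only the weight delta."""
    preds = X @ weights
    grad = X.T @ (preds - y) / len(y)   # mean-squared-error gradient on local data
    return -lr * grad                    # the update, not the data, is shared

def aggregate(weights, updates):
    """Server averages the proposed updates (federated-averaging style)."""
    return weights + np.mean(updates, axis=0)

# Toy run: three simulated users, each with private data the server never sees.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
weights = np.zeros(2)                    # the shared model users download
for _ in range(50):
    updates = []
    for _ in range(3):                   # each simulated user
        X = rng.normal(size=(20, 2))     # private local data
        y = X @ true_w
        updates.append(local_update(weights, X, y))
    weights = aggregate(weights, updates)
print(weights)                           # approaches [2, -1] without pooling any data
```

Averaging the updates rather than the data is the key design choice: the server improves the shared model while each user’s dataset remains under their own control.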
My notes and questions on Will MacAskill’s 2018 TED talk: “What are the most important moral problems of our time?”
TED bio: “Will MacAskill is a cofounder of the effective altruism movement, a philosophy that tries to answer the question: How can we do as much good as possible?”
Plotted as GDP over time, all of human history is, on the scale below, essentially flat until the hockey-stick inflection point corresponding to the scientific and industrial revolution.
(I’m curious: Did GDP relevance change at that time also? I.e., prior to the S&I revolution, did GDP accurately reflect prosperity? What…
In studying AI, ML, and the general advance of technology — and observing the response to the hype that surrounds it all — I’m compelled to ask whether we’re fully considering our own power to shape how technology influences us. Not from the perspective of regulating the technologies themselves, nor how they evolve (both important considerations), but from the perspective of how we, as humans, will evolve to mitigate the inherent risks.
We’re complicit in the development of AI — artificial intelligence — that will not only work for us, but also watch us and…
— Caveat emptor: Written in November 2017. Progress in AI is astonishingly rapid & some of this may be out of date. —
Artificial intelligence (AI) in medical imaging is a controversial topic — up for debate are questions regarding its implementation (from both technical and regulatory standpoints), ultimate potential (presuming it has the theoretical potential to do better, will it do better in practice, and how might it alter clinical processes?), and impact on the future role of clinicians [1].
Google’s Geoffrey Hinton — an AI pioneer, now heading up the Vector Institute for Artificial Intelligence — once stated…
Privilege buys margin. Because it’s margin, the privileged aren’t aware of it and don’t feel it. It isn’t requested, nor penalized. It’s just there — a cushion to absorb missteps, errors in judgment, occasional (and, sometimes, frequent) foolishness. It allows for assuming the risks necessary to achieve the extraordinary.
The privileged can’t feel the effects of margin’s absence — but everyone else is painfully aware of the steep penalties for even the tiniest of mistakes. Or for toeing the line. Or, sometimes, just approaching it.
Access to capital (margin) can be the difference between wealth and poverty…
Ultra-high performers. As the living embodiment of a (local) performance maximum, they’re generally pretty easy to spot when working their magic.
If you happen to stumble across one, you know you want them on your team. You have the confidence to surround yourself with collaborators smarter than you, from whom you’ll learn and grow.
But is that how it actually goes?
When there’s a fire to put out, the 10xer might have an edge over your average solid performer. And the edge might be so significant as to render their contribution indispensable.
But putting out fires is a sign…