Watch Axios’ Ina Fried on Infinia ML’s Machine Meets World.

Axios Chief Technology Correspondent Ina Fried on Insider AI Ethics

Join Machine Meets World, Infinia ML’s ongoing conversation about AI

James Kotecki
Aug 25, 2020

Episode Highlights

This week’s guest is Axios Chief Technology Correspondent Ina Fried.

“. . . some of the push for more fairness, more diversity, more explainability is coming from within these companies — so within Google, within Amazon, within Microsoft — and sometimes at the highest levels.”

“Are [algorithms] having the results we intended, or are they actually making an unequal system perpetually unequal? And in some cases, even worse. And again, it’s not casting aspersions on the motives of people except to say, hey, just adding an algorithm doesn’t make anything inherently better.”

“I think critics of AI and critics of machine learning underestimate just how useful a good algorithm can be. A good algorithm with good training data can make society fairer, more just, can spot biases that humans didn’t even notice.”

“One of the pushes is you actually want to design an AI system to be explainable, which takes objective work. . . . By default, it would just be a black box and you’d get an answer. If you want an AI system that can explain why it came to the conclusion it did, you have to build that in from the start.”

Watch the show above. You can also hear Machine Meets World as a podcast, join the email list, and contact the show.

Audio + Transcript

James Kotecki:
And we are live from Infinia ML. I am James Kotecki, this is Machine Meets World. My guest today, very excited about this guest, the Chief Technology Correspondent from Axios and the author of the free daily newsletter Login, all about technology. Please welcome Ina Fried. Thank you so much for being here.

Ina Fried:
Thanks James. It’s good to be here.

James Kotecki:
I said please welcome as if there was going to be like an applause sound effect or something. I’m going to have to figure out how to add that in later.

Ina Fried:
Well, that’s like late night TV these days. They’re doing the same thing.

James Kotecki:
Exactly. We’re all trying to figure this out. I see you’ve got some Legos behind you. So obviously we’re both at home, and we talked before this interview about how we both have our kids in the next room, I think doing some schoolwork. So we’ll see how this plays out. Thanks for doing this. You are the Chief Technology Correspondent at Axios. Let’s start broad with technology. Why are you interested in technology? Why do you like it?

Ina Fried:
Well, early on in my career, 20+ years ago, I felt like I could have written about anything. I enjoyed courts and the legal process, anything that people are passionate about. I think what makes technology extra interesting to cover is all the ways that it changes everyone’s life. And really for a long time now, it’s been sort of trendy, if you will, to say, I don’t want to just cover the technology, but also the impact it’s having on people’s lives, how it’s changing how we live. That’s sort of the status quo right now. That’s what everyone who covers tech is doing. That’s always been where my passion lies: there’s so much change happening, and it can be good. A lot of it is good. We’ve made incredible progress in my lifetime, but it isn’t inherently good. We have to ask the question of, is this something we want more of or less of, because the tech industry is just going to do it because it can.

James Kotecki:
And how does that fit into the overall scope of what Axios is trying to do? Axios is kind of a unique, singular voice in the world of media. And it has some very strong philosophical ideas about how things should be covered. And obviously it doesn’t just cover tech, right? There’s politics. There’s other kinds of news that you guys cover. There’s science. I don’t know if you guys do sports yet, but it seems like you might get there.

Ina Fried:
We do. We have a great newsletter on that.

James Kotecki:
Oh, you do. You do. Obviously I’m not as much of a sports fan, so I apologize. But in terms of tech, how does that fit into the overall Axios philosophy?

Ina Fried:
I think we approach tech the way we approach everything else, which is we want to have a subject matter expert and a team that really knows what they’re talking about, but really telling the rest of our audience, here’s what really matters, here’s what you need to know. Our mantra, our tagline, is Smart Brevity, the idea that we can get people smarter, faster. And we apply that, as you say, to politics, which is probably what we’re best known for, but also science, health, sports, space, cities, all with subject matter experts. The idea is to really take that broad knowledge, but not use it to do long-winded essays; really to be a filter to help people understand, here’s what you really need to know. Here’s the thing that we’re going to be talking about a week or a month or a year from now. And I joined really early, not quite before we launched, but right after, within a couple of weeks of our launch in 2017, and it’s been an incredibly fun ride.

James Kotecki:
And how much is your personal viewpoint allowed to influence your coverage? Is that even the right way to ask that question, so to speak? You mentioned some concerns about technology and what it might do if left unchecked. Obviously that’s showing a little bit of, I don’t want to say bias, but certainly a lot of opinion in terms of how these things are covered.

Ina Fried:
Yeah. I mean, I would say that’s probably the line right there. Certainly as one of the subject matter experts, we probably get more leeway, but again, our role isn’t necessarily to take sides in the debate; it is to be an informed presenter of it, if you will. So it’s not to simply present all ideas as if they have equal merit. If I’m covering the lawsuit between Apple and Epic, the Fortnite creator, I’m not saying Apple’s great or Epic’s great, but I am bringing all my knowledge of both companies, the broader ecosystem, the long debate over app stores.

Ina Fried:
So we try and apply that to all of our coverage. Our founders, who I know you know, Jim VandeHei and Mike Allen, are very big on: we don’t have an editorial page. We’re not a left-leaning blog or a right-leaning blog. We try and present things, I don’t want to say down the middle, because not every issue is equal. Again, not every issue is objectively open for debate. And to me, and we may get into this, that’s really what’s changed a lot in recent years: it seems like basic science is up for debate, and it’s really hard to present things neutrally when things that are matters of scientific consensus are presented as if there are multiple beliefs, multiple truths. And it’s everything from climate change to some of the things that are closer to home in technology.

James Kotecki:
Let’s talk about some of those maybe divisive issues around AI, if we could.

Ina Fried:
Sure.

James Kotecki:
And these are some issues where I think it’s just fascinating how fundamentally certain very smart, very wealthy, very powerful people in technology disagree. Elon Musk says we’re summoning the demon when we talk about getting too advanced in terms of AI. Other people like Zuckerberg are saying that’s not necessarily a concern. And obviously these people have a hugely vested interest in making those points because of what they want their companies to do and how they want them to grow. So how do you see some of the AI issues fitting into these broader tech issues you’re talking about?

Ina Fried:
Well, let’s start with Elon Musk and then we can get broader from there. I mean, I find that hysterical, and I generally bring this up and would push my colleagues to do the same, and push myself to do a better job. But look, the Tesla is filled with algorithms; it’s filled with AI. So the idea that Elon Musk is anti-AI... he’s against certain things. He’s warning us about some things that we should be paying attention to. I don’t disagree that we should definitely be paying attention to what we’re doing in AI and what we’re teaching our algorithms today and what decisions we’re giving them control over. At the same time, I think it’s incredibly remiss not to point out that a self-driving car is machine learning on wheels. I mean, that is all algorithms. It’s not sitting there waiting for a human to tell it what to do. That would be a car. A self-driving car is by its nature filled with AI.

Ina Fried:
So, I think there are important points. I generally agree with the idea that we need to be extremely skeptical of where we hand over decision-making power. That said, there’s huge potential in machine learning. Machines can simulate things way more than we can. As we went to look for potential compounds to treat the coronavirus, one of the things that machine learning did early on was try everything and the kitchen sink in simulation. That’s a really good thing. I wouldn’t want to trust AI to decide which vaccine candidate we go with; that I want human scientists doing. I wouldn’t want an algorithm deciding when to reopen schools, but I do want an algorithm, and I do want machine learning, looking at what the best scenarios are. How do we think the virus travels inside a confined space? So I think it’s often not a yes-or-no question but a how question, in terms of when and how we use machine learning and AI and algorithms.

James Kotecki:
To extend the self-driving car metaphor a bit: the companies that you often write about, some of these giants in tech, Amazon, Google, Facebook, Microsoft, et cetera, that is really where the rubber meets the road. That is where the applied science of data science and AI and machine learning actually gets implemented in a way that impacts everyone’s lives. Do you feel like the ethics conversations that we are alluding to are actually happening inside those companies in a meaningful way? Or is it really separate, with academics and journalists and some well-meaning data do-gooders over here, but the actual implementation happening over there, and not much overlap between the two?

Ina Fried:
I think it’s a mix. I definitely think there are a lot of concerned people within some of the companies, which is good, because they’re at the forefront. They’re seeing what’s actually happening. Whereas a lot of the academics and journalists and experts see broadly what’s happening, but not what’s happening day-to-day inside the company. So I’m glad there are folks pushing for that. And I know people that are passionate about AI ethics and how AI is used that are working within companies like Microsoft and Google. I’ve been to conferences at both of those companies and I know that that’s of keen interest and it doesn’t mean there aren’t a lot of people that disagree. Particularly if you look at a company like Google, there’s plenty of workers, former workers, outsiders that disagree with the way that Google is handling it. But I know at the same time that they are having a very robust dialogue inside of what should they be doing.

Ina Fried:
And there are a lot of genuine concerns inside and out. If you work with government entity X, what could that lead to? If you work with the military, what could that lead to? And I think Microsoft has this oversight group, Aether, advising it on AI ethics. A lot of companies are trying to figure out where they should have that discussion. I mean, again, there are lots of critics that say they aren’t doing enough, they’re moving too fast, they’re working with the wrong people, and I’m sure we’ll get into this. There’s also, and this is happening broadly in my coverage area of tech but explicitly around AI and machine learning, China as boogeyman. There’s always the answer of, well, if we don’t do it, China will, which, regardless of how true it is, is probably a terrible way to make ethical AI decisions.

James Kotecki:
So let’s get into that. How real is the threat of China? And is framing it as a threat the right way for business leaders and government policymakers to think about the future?

Ina Fried:
I mean, I certainly think an awareness of what’s happening is critical. It would be foolish not to be paying attention to how the other big, giant society is approaching it. At the same time, I don’t think we want to just wholesale adopt their techniques because that’s what’s going on. I mean, it is the case that China is going to have more broad AI data because they’re willing to, subject is a harsh word, but involuntarily include, aka subject, their population to a lot of data collection that the rest of the world is going to say no to: people have to consent to that, or we’re not going to allow that to be collected in the first place.

Ina Fried:
Obviously the US is kind of in the middle here, if you will, with Europe much more protective of individual data rights, individual data ownership, and strong consent. In the US, consent is kind of required, but it’s okay if it’s a checkbox that everyone has to check to go forward. And in China, no need for a checkbox. That’s a dramatic oversimplification, but useful for understanding the broad positions. So if I were in charge of AI policy or making recommendations, what I would say is, look, we don’t have to emulate China’s techniques. I don’t think we should. But we do have to be aware that that’s the context in which they are going to be developing AI, and think about how we develop our algorithms, our machine learning technologies, to be able to compete in a world where we’re probably not going to have the most data.

James Kotecki:
And is there some way that the US can use its liberal democratic nature to its advantage? If you think about China as an authoritarian, top-down system, they can just say, look, we’re going to do this stuff in AI. We’re going to collect your data. You can’t do anything about it. We have access to all this data and all these people that we can test this stuff out on. That’s their advantage. Is our advantage somehow that if we do this right, we’re going to have more people involved in the conversation? That means more people creating different applications for this stuff, but also more people having oversight of the machine to make sure that it does what it needs to do. Is there a liberal democratic advantage that the US could exploit in this battle, so to speak?

Ina Fried:
There can be, and that’s what I was sort of alluding to: how do you recognize the landscape and take advantage of it as best you can? I mean, I think China’s going to be really good at developing algorithms and machine learning that answer, here’s what’s best for the population, here’s what’s best for the whole. I think where the US stands a good chance of leading is around, here’s how you apply it, here’s how you can optimize for a different value other than the collective good. So maybe that’s, how do you have the most secure whatever? How do you have the most privacy-protecting whatever? I think that type of thing is very unlikely to come out of China.

Ina Fried:
I think, again, really asking ourselves what our goals are, what we want out of these algorithms, is critical. And where we want algorithms at all. I think we can also be a leader in terms of balancing human decision-making and algorithms. There are areas where I think algorithms can be very powerful. There are areas where they can be very dangerous. And a lot of it is actually not a black-and-white yes or no; it’s sort of how you do it. And I think we’ve seen this, and this is the area that I find most interesting and try to report on as much as I can, which is: where are we making mistakes? Where are we either applying algorithms in the wrong context, or, more often, just having our algorithms repeat our human biases?

Ina Fried:
So when you look at something like college admissions, well, college admissions have always been filled with bias. So the idea of having an algorithm making decisions could be a lot better. But not if the algorithm’s programmed to let Johnny in because his dad gave $12 million for a library; then it’s just automating the process that was taking place before the algorithms.

James Kotecki:
So what’s the gap between the reason why so many of these algorithms are being implemented in what most people would consider the wrong way and how to do it right? I mean, is it a matter of education? Is it a matter of more people? I know that journalism plays a role in this, because it’s about explaining to everybody what’s going on in this sector that’s going to affect all of us. But it seems like we are prone to making a lot of these mistakes. We are potentially at risk, in using all this data and creating algorithms that seem more official, of locking in biases that we already have across all kinds of lines. So what are some ways people can actually break through here and do the right thing?

Ina Fried:
I think there are a few different things tied in with that, which are all important. One, I’ll just step back and take the diversity piece of this, which is: conventional wisdom says your algorithm, your approach to things, is highly likely to be dictated by who’s in the room writing it. So the chances of missing bias, of creating a biased algorithm, of not noticing biases in the training data, which is equally likely to produce bad results, are going to vary based on who’s in the room. Obviously it’s not universally true, but if you have, and this tends to be the tech industry’s problem, a predominantly white, predominantly male group, or in some cases an all-white or all-male group, which is especially problematic, that is sort of the equivalent of going into the decision-making process with your blinders on.

Ina Fried:
And it’s not to castigate white people or castigate men or castigate white men. It’s just, we all have different experiences. So the more different experiences are reflected by the people making decisions, in theory, and again, it’s not always the case, but in theory, you’re going to have a better approach. The other thing I would say that we’ve learned is AI isn’t inherently explainable. A lot of the best state-of-the-art machine learning today can work in a black box, but that’s not good for fairness. It’s not good, actually, for learning in general. So one of the pushes is you actually want to design an AI system to be explainable, which takes objective work. It takes work. It’s not the default. By default, it would just be a black box and you’d get an answer. If you want an AI system that can explain why it came to the conclusion it did, you have to build that in from the start. So explainable AI is super important.

Ina Fried:
Again, one, it’s a matter of fairness; two, so we can see where we go wrong, because invariably we will go wrong; but also so we can learn. We will learn better if the system is explainable. So that’s one thing. So: who’s in the room, push for explainable AI, and really looking at the impacts and keeping a close eye on these things. So if we’re using an algorithm to determine who’s getting into college, who’s getting parole, who’s getting a loan, look at the results. Are they having the results we intended, or are they actually making an unequal system perpetually unequal? And in some cases, even worse. And again, it’s not casting aspersions on the motives of people except to say, hey, just adding an algorithm doesn’t make anything inherently better.

James Kotecki:
I’m totally with you there. And I wonder where you think the most effective pushes are coming from. You said there’s a push for some of this, but is that push coming from the ground up? Is it coming because companies are afraid of an algorithm going bad or doing something that’s perceived as biased down the road, so they have to take precautions now because they see other companies get into trouble in the headlines? Is it coming from regulators and policymakers, or are they even capable of understanding this? I mean, if you look at some congressional hearings, there’s not a lot of confidence that, at least at the highest level, politicians even understand the internet, let alone artificial intelligence and machine learning. So where is the pressure coming from, and how real is that pressure now and in the immediate future, do you think?

Ina Fried:
I think it’s coming from a few directions. I think the direction it’s coming from least is the one you alluded to. I don’t think it’s generally coming from US leaders, certainly at the national level, except in a few specific areas. And again, it’s not as much at the national level but often at the state level, around things like government use of facial recognition technology. So in some narrow instances you have seen regulators take a look. But in general, the pressure to do better and to examine best practices is coming from a few different areas. It’s certainly coming from the civil liberties communities, from activists in those areas, from academics. That’s definitely one area.

Ina Fried:
Again, you can criticize what the companies do, and plenty of people do, and there’s some legitimate room. I do think it’s important to recognize that some of the push for more fairness, more diversity, more explainability is coming from within these companies, so within Google, within Amazon, within Microsoft, and sometimes at the highest levels. I know this is something that Satya Nadella at Microsoft and Sundar Pichai at Google take personally. Again, not to say there isn’t room to criticize, and certainly Google has gotten a lot of criticism for some of these issues. So the pressure is coming from different places.

Ina Fried:
When we talk about governments, it’s also more likely to come out of a place like Europe, which has a very strong notion that an individual is the owner of their data. We don’t really have that sense; it’s very transactional in the US. Studies have shown people would give away their password for a candy bar. And certainly, I think people are plenty willing to quickly give away their data. So there aren’t as many rules in the US, but we’re starting to see some, we’re starting to see some models based on GDPR in Europe. California has been, again, one of those states pushing for more protections around consumer data. And all it takes in the US is one big state to do it, and suddenly the cost becomes prohibitive. So there are some different corners, I would say, pushing.

James Kotecki:
When you were mentioning the thing about trading your password for a candy bar, I was picturing exactly the candy bar that I would trade it for. And I was like, which of my passwords do I not really need that much? Almond Joy, in my case, for the record.

Ina Fried:
I support you trading your data for a candy bar, but not for coconut. Sorry.

James Kotecki:
Okay. What would your candy bar be of choice?

Ina Fried:
That’s a great question. I’m more of a chocolate chip cookie person, but probably a really nice dark chocolate, maybe with some chili inside.

James Kotecki:
That sounds delicious. We’re asking all the big questions here on-

Ina Fried:
Is that a good trade for a 1234 password?

James Kotecki:
… I think, oh, dark chocolate with chili? 100%. That’s Gmail-password level, in my opinion. All right. So you mentioned European regulators kind of driving a lot of this. There has been a narrative of a potential divide between the US and China, right? The idea that there might eventually be two separate internets; in some ways there kind of already is, between the US and China. And then if you look at Europe as the third leg of this global stool, the US and Europe are certainly much more linked in terms of the internet and in terms of culture and values about the internet and freedom and data privacy, probably more so than with China, right? Although I guess you could also say the US is in the middle of Europe and China.

James Kotecki:
So I’m just curious for your take on what I’m about to ask, which is: if Europe and the US are relatively linked, but Europe is driving AI regulation to a certain extent, is there any talk of a risk of a divide between Europe and the US? Is there the idea that there could be three internets, or three AI regulatory systems, governing the global stage?

Ina Fried:
I mean, it is a very active debate, and I do think that perhaps a risk is how you’d describe it from the US perspective. I think from the European perspective, sometimes that’s seen as a good thing. Again, just as we look to China and say, we don’t want those policies, Europe often looks at the US and says, we don’t want those policies. And there actually is a divide. There’s a divide right now: there are rules being contested in European courts over where data can be warehoused, what counts, and how the US has to behave to comport with European data regulations.

Ina Fried:
Now, I think the positive for the US is that just as the US has to compete in some instances against China knowing that we’re going to have less data access, so does Europe have to compete against the US, and really the entire world, knowing that it’s going to have the least data, because its privacy practices are even more restrictive than ours. So you could argue the US is kind of like Goldilocks, in the sense that China is too permissive and Europe is too restrictive. It certainly does position us as a more likely ally of Europe. Although we’ve certainly tested our allyship with Europe in the last few years on the political front, on the basic approach to data it’s hard to imagine Europe aligning itself with China.

Ina Fried:
And we are seeing a real split between the US and China. That’s something I’ve written a lot about. I led the newsletter last week with an article on the great decoupling in tech of the US and China. And to me, it’s still sad. You can have differences without saying that the best way to deal with those differences is to build an entirely separate ecosystem. We had this highly interdependent technology ecosystem where a lot of the chips and software were developed in the United States and most of the manufacturing was done in China. And that’s the way we’d operated for my whole career.

Ina Fried:
And in just a few years, we’ve moved very far down the path of the US and China not only isolating themselves further, but doing so sort of by necessity. I mean, if you are a startup in China or the US, why would you ever build a company that’s dependent on technology from the other, knowing how easily it can be cut off? That’s my big worry: even if calmer heads prevail and the US and China find a way to work together, will the companies themselves ever want to risk being dependent, knowing that the next leader in either country could push things in the wrong direction?

James Kotecki:
Speaking of massive global shifts, we haven’t talked about COVID-19 yet, but we’re both working at home for a reason. COVID-19 and its influence on the tech space has been covered extensively. What do you think is an under-reported tech or AI trend that COVID-19 is driving?

Ina Fried:
One of the things is a forced expansion of algorithmic use. One of the most public examples is content moderation. Facebook, Google with YouTube, and Twitter had always relied on algorithms to do the bulk, the masses, of their work, and used humans to handle the edge cases, the most sensitive things. But what COVID-19 brought about was a massive expansion of the reliance on algorithms to do it. And the reason was not just that you and I got sent to work from home. That’s a piece of it, but where it really came in is that a lot of the human content moderators were contractors. They didn’t actually work for Google and Twitter and such, and because they were handling sensitive data, personal data, they could only do their jobs within the four walls of their contractors’ offices, where there was heavy security protection, et cetera.

Ina Fried:
So, unlike a lot of tech jobs, it wasn’t a job that easily translated to, okay, now just do it from your home computer. Long-winded way of saying a lot more content moderation is being handled automatically, is being handled by algorithms. That’s actually a great test case. I think we can learn a lot, as long as we go back and look at how the decisions went, what the blind spots were. I think it’s a great way to learn how algorithms are doing. And let’s not forget, algorithms can be really good at this.

Ina Fried:
I think critics of AI and critics of machine learning underestimate just how useful a good algorithm can be. A good algorithm with good training data can make society fairer, more just; it can spot biases that humans didn’t even notice. But a bad algorithm really just codifies, and I’ll use an annoying big word, perseverates, repeats, this sort of bad behavior. It used to be that we had to train each biased judge, and there was an opportunity in each generation to have an unbiased judge. If we don’t do that with algorithms, the algorithms will perpetuate biases.

James Kotecki:
In the waning moments here, two questions for you. One: are you inspired by sci-fi at all? A lot of the people I talk to, especially tech entrepreneurs, are inspired by science fiction. Does that play into it at all, as you see some elements of sci-fi maybe coming true in different shapes in your work, or not?

Ina Fried:
I think it informs the way we look at things, for good and bad. I’m not a big sci-fi person, but that doesn’t mean it isn’t what creates our mind’s-eye picture of what algorithms are, of what robots are. And we haven’t really talked about the difference between machine learning, which is very specific things, and the sort of further-off notion of artificial general intelligence, which is probably good, because I don’t think any of us really knows what that world’s going to look like. We’re still pretty far from that, but-

James Kotecki:
Well, that’s where I’m going next actually. Yeah.

Ina Fried:
… Yeah. We should be setting rules now and we can use our sci-fi generated experience to at least know where we should look for the demons.

James Kotecki:
So that’s my final question to you. On a scale of one to never, how long until, just taking a guess, you think we reach some kind of artificial general intelligence?

Ina Fried:
I tend to be in the middle on a lot of these things. I think it won’t be as soon as some people worry, but it will be sooner than other people think. I do have questions, after 2020, of whether it will happen before we make the planet inhospitable to human life or find another way to wipe ourselves out. But I don’t think of it as so far off on the horizon, in part because technology, and I think our human knowledge, is accelerating. But I don’t think it’s imminent. I think we are teaching today’s systems how they’re going to learn. We’re setting the guardrails, we’re setting the rules. So there is an important role to be played now, even if we’re quite a ways off from computers that are just making decisions on their own.

James Kotecki:
You mentioned Elon Musk, we mentioned Elon Musk toward the beginning of this conversation. Maybe he will take you to Mars and you’ll be safe from the devastation of the planet in the next several decades. So who knows? Ina Fried, you are the Chief Technology Correspondent at Axios. Your newsletter is called Login. How can people sign up for Login, by the way?

Ina Fried:
Thank you. Yeah, it’s free. I do it Monday through Friday with a great team. Just go to getlogin.axios.com and you can subscribe to Login as well as all of our other newsletters around health, science, sports, politics, et cetera.

James Kotecki:
Well, Ina, thank you so much for joining us here on Machine Meets World today. I really loved this conversation. And thank you so much to the person who is watching and/or listening to this; I really appreciate it. Please like us on LinkedIn, connect, share, you know what to do. I am James Kotecki, and that has been what happens when Machine Meets World.


Originally published at https://infiniaml.com on August 25, 2020.

Machine Meets World from Infinia ML

Weekly Interviews with AI Leaders
