Watch the full interview with Tara Chklovski.

Technovation CEO Tara Chklovski on Unpacking AI’s Black Box

Join Machine Meets World, Infinia ML’s ongoing conversation about AI

James Kotecki
Jun 23 · 23 min read

Episode Highlights

This week’s guest is Tara Chklovski, CEO and founder of Technovation.

“Most adults don’t know how machine learning works, how the face recognition on the phone works, what’s behind all of this. And that’s the idea — kind of unpacking this black box of AI technologies because it’s everywhere.”

“And when you don’t know something, it can be very scary. You don’t know what the implications are and so you tend to sort of shut it down even more, and you’re resistant to any kind of re-skilling, up-skilling, which will have to happen no matter what, right? Because it is changing our world in so many different ways.”

“It’s less about creating a world-changing solution and more about creating a highly effective solution for a local population. And I think that in our mind is sort of AI at the grassroots level, which is giving a voice to everyone rather than from a top-down type of solution where you’re like, ‘Okay, we’re going to deploy this face recognition system,’ or whatever it is, ‘for everybody.’ And then later on realize, ‘Okay, well, it’s not working.’

“So this is a bottom up approach where you’re empowering people to create more rich data sets basically, and come up with their ideas of potential AI based solutions that are attacking very, very unique problems.”

“I think [social, emotional learning] needs to be blended into the hard skills that students are learning because it shouldn’t be just like, ‘Okay, that’s what I do in my SEL class.’ It needs to be blended into ‘how do I apply it when I’m actually solving a complex problem using technology.’”

Watch the show above. You can also hear Machine Meets World as a podcast, join the email list, and contact the show.

Audio + Transcript

James Kotecki:

Hey, and we’re live. There’s literally no un-awkward way to start a live broadcast where you’re clicking the live button yourself. But this is Machine Meets World from Infinia ML. I am James Kotecki, your host. We’re talking artificial intelligence today with my guest, the CEO and founder of Technovation, Tara Chklovski. Tara, thank you so much for being here.

Tara Chklovski:

Thanks, James. This will be fun.

James Kotecki:

I hope you have as much fun as I’m about to have. So the first question, the question I love to ask people to kind of get into things: Technovation is actually one of those great names that could literally be a name for anything in the tech space; it’s kind of this all-purpose name. How do you define Technovation? How do you explain it, especially to somebody who’s not even in the tech space?

Tara Chklovski:

So the short answer is yes. I started the nonprofit, and we were called Iridescent before. A friend of mine went to Startup Weekend and was very, very impressed by the experience and felt that young girls should have that experience. And so she created this sort of competition, an evolution of Startup Weekend for high school girls.

Tara Chklovski:

And we were looking for a name, and she was like, “It has to be grounded in technology because technology changes the world and it also gives an individual so much power.” And innovation is the spirit of humanity, right? You’re always looking for something new, something better. And so we blended the two words together, and that was Technovation. And then over time, that program has become very, very exciting and empowering. And the competition model, of course, is core to what drives and motivates people to finish. And so we changed the name from Iridescent to Technovation.

James Kotecki:

And so Startup Weekend, for people who aren’t familiar with the model that you’re basing this on: what you’re doing now is kind of a competition. You’re saying it’s focused on young girls creating technology businesses and apps over a set period of time, with some kind of mentoring framework around that. Is that right?

Tara Chklovski:

That’s right. So, actually, the girls part is one part of our program, but our main pillar is reaching families, so parents. And the idea is that, like Kofi Annan says, “You’re never too old to learn and you’re never too young to lead.” So that’s the spirit of the organization, and parental co-engagement, co-learning, lifelong learning is the bedrock of the program. And the competition framework is one that, time and time again, from the time of the Olympics, right, when they started, has proven motivating. And so that’s what we use as the framework.

Tara Chklovski:

So you find a problem in your community. You learn how to use very, very powerful technology to tackle that problem. 10 years ago, it was mobile. Now it’s AI. And so that’s sort of how we see our role. Maybe in the next 10 years, it’ll be synthetic biology that you’re using to solve a problem in your community. So those are sort of key elements of what makes this effective.

James Kotecki:

What your organization is doing, looked at from one lens, and kind of the lens that we see our show in, is all about AI. And so that’s what we really want to talk about here. Effectively what you’re doing is you’re teaching AI to kids, right? Kids come into the program, and I assume that many of them have never done anything related to AI before, right? So you’re working with a blank slate.

Tara Chklovski:

Totally. And not just kids, but also, like I said, the parents or the adults. Most adults don’t know how machine learning works, how the face recognition on the phone works, what’s behind all of this. And that’s the idea — kind of unpacking this black box of AI technologies because it’s everywhere. And when you don’t know something, it can be very scary. You don’t know what the implications are and so you tend to sort of shut it down even more, and you’re resistant to any kind of re-skilling, up-skilling, which will have to happen no matter what, right? Because it is changing our world in so many different ways. So it’s not just about the children, but it’s also about the adults who need to be more open to understanding how these technologies work.

Tara Chklovski:

The other element of that is privacy and all of that. Even if you don’t know exactly what these tools are doing and how you’re interacting with them, it’s just general consumer best practice to understand a little bit about how these technologies work, whether you’re reading the fine print or not, right? At least you have some sense of what this is, who’s using this data and how, and how can I protect myself? How can I be an informed user? You look at the nutrition labels of what you eat, but you’re not looking at the tech and the apps that you’re using with that same frame of mind.

James Kotecki:

So you’re taking an audience that has very little understanding of AI, and you’re immersing them into this world. They’re coming out of it with products, apps, businesses, I guess, that the families are creating?

Tara Chklovski:

Yeah. So the idea is that they come up with a unique prototype for an idea that can be used to tackle a problem that’s very specific to their community. So for instance, there was one family in Bolivia that was looking at an invasive weed in their largest lake, Lake Titicaca. And they trained a model to recognize that particular invasive weed.

Tara Chklovski:

Now a lot of the plant data sets that are in use have data that’s coming from North America and Western Europe. And a lot of the biodiversity in the world is beyond just these regions, but the data sets are not representative of that. And that’s true of a lot, right? All of the facial recognition issues are also around that. So a big part of this is creating data sets that are more diverse, richer, more applicable to the local communities around the world.
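(To make this concrete: below is a minimal, hypothetical sketch of what “training a model to recognize a local plant” can look like. The folder name, class labels, and training settings are illustrative assumptions, not anything from Technovation’s curriculum; the point is simply that the images come from the community itself rather than a North American or European benchmark dataset.)

```python
# Fine-tune a pretrained image classifier on a small, locally collected dataset,
# e.g. photos of an invasive weed vs. other lake plants.
# Assumed (hypothetical) folder layout: local_plants/train/<class_name>/*.jpg
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical community-gathered images, organized one folder per class.
train_data = datasets.ImageFolder("local_plants/train", transform=transform)
loader = DataLoader(train_data, batch_size=16, shuffle=True)

# Start from a model pretrained on a generic dataset, then replace and train
# only the final layer for the local classes (e.g. "invasive_weed" vs "other").
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_data.classes))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```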

Tara Chklovski:

And then another example was a family in Kuwait that actually created a recognition system that would look at birds, because a lot of the bird population is dying out. A technologist in Silicon Valley would not normally think about creating these types of prototypes.

Tara Chklovski:

And so it’s less about creating a world-changing solution and more about creating a highly effective solution for a local population. And I think that in our mind is sort of AI at the grassroots level, which is giving a voice to everyone rather than from a top-down type of solution where you’re like, “Okay, we’re going to deploy this face recognition system,” or whatever it is, “for everybody.” And then later on realize, “Okay, well, it’s not working.”

James Kotecki:

Wow.

Tara Chklovski:

So this is a bottom up approach where you’re empowering people to create more rich data sets basically, and come up with their ideas of potential AI based solutions that are attacking very, very unique problems.

James Kotecki:

I’d like to speak a little bit more about the international dimensions here, because your program is an international one; I think people should know that. We talk a lot about AI bias in terms of bias against certain genders or sexual orientations, or racism built into AI. But those stories are often seen through the lens of the dataset that, as you’re saying, comes largely from North America or Europe, right? So how big of a problem is AI bias at kind of the continental level, I suppose you could say? And is it a problem that we’re really doing a good job as a society, or as tech companies, of tackling? Because it seems like you could have a situation where certain parts of the world skyrocket ahead because they’ve got all the data and the AI talent, and the rest of the world gets increasingly left behind.

Tara Chklovski:

I mean, I don’t want to dip too much into that, because I also see the world through my particular lens, which is that of education. And I think, yes, there are many things wrong, but education is a huge, powerful lever that counters that. And that’s one of the reasons why I started the nonprofit in the first place. So yes, all of that exists. Yes, that is a problem. But instead of dwelling on it, what is a counter, what is a solution to that? And I think the solution is a slow one, but one that we have to undertake, which is educating the public. And over time, you see people are getting kinder, we have fewer wars; we are just improving as a society in general, and that’s because of education.

Tara Chklovski:

And so I think it’s a similar kind of upward trend. The issue is that the technologies are developing way faster than people are getting educated, but hopefully we’re trying to address that. So I think the core of the approach here is to empower people, and especially women, especially mothers, because they have such a huge role in shaping their children’s mindsets and behaviors and having them be open to learning how these technologies work.

Tara Chklovski:

So once you understand, “Well, if I click on this, now it’s going to recommend more like this to me; this is why I’m getting all of this crap in my feed and it changes my behavior online,” you’re just going to be more and more aware of your actions, right? And so we have so much around step counting and calorie counting, but not so much around our use of the internet and our online footprint, at least for the older generations.
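(As an illustration of the feedback loop described above, here is a toy, hypothetical sketch: each click nudges a user-interest vector toward the clicked item, so the recommendations increasingly mirror what was already clicked. The item names and vectors are made up for the example.)

```python
# Toy recommender feedback loop: clicks shift the user's interest vector,
# which in turn shifts what gets recommended next.
import numpy as np

items = {
    "cooking_video": np.array([1.0, 0.0]),  # hypothetical 2-d "topic" vectors
    "outrage_post":  np.array([0.0, 1.0]),
}
user_interest = np.array([0.5, 0.5])  # starts neutral between the two topics

def click(item_name: str) -> None:
    """Move the user's interest vector a little toward the clicked item."""
    global user_interest
    user_interest = 0.8 * user_interest + 0.2 * items[item_name]

def recommend() -> str:
    """Recommend whichever item is most similar to the current interest vector."""
    return max(items, key=lambda name: float(items[name] @ user_interest))

for _ in range(5):
    click("outrage_post")   # a few clicks on one kind of content...
print(recommend())          # ...and the feed now leans that way: "outrage_post"
```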

Tara Chklovski:

And that’s what we’re trying to tackle, but not from a negative point of view of “this is why you should be afraid of all these things, they’re taking your data, Big Brother is watching.” No, not from that point of view, but from a point of view of agency, where now you know how this works, you try to create something that’s better, right, something that meets your needs. And that’s going to fuel you to keep learning, rather than a point of view of fear or “look how bad the system is.”

James Kotecki:

It’s so interesting what you’re doing, because when we talk about how we’re going to build the future of AI, kind of on a macro-societal level, we talk about needing to bring as many people into the conversation as possible. It can’t just be the engineers, the data scientists, the technologists who are making these decisions, which are often not even technical decisions but core ethical decisions, core issues that touch on free speech and all these other things that we hold as human values.

James Kotecki:

What have you learned about bringing people into that conversation? Because saying, “Oh, we have to educate everybody about AI so they’ll better understand it,” seems kind of daunting, right? AI is, as you said, one of those issues that’s evolving so rapidly that it’s very difficult for people to get a handle on, and there’s an inherent fear when they come to it. So what lessons can you teach other people who want to get more folks involved in this conversation about how to expand that circle of who’s talking about AI?

Tara Chklovski:

Yeah. And I think, again, we’re not the experts on this, but we are making some steps. We’re not saying, “This is too big for us to tackle. We’re just going to bury our heads in the sand.” So last year we worked with a law firm, Hogan Lovells, to create sort of an AI inventor’s ethical guidebook for students and for children and families, just giving them some guidelines: “Well, think about this, think about this, think about the edge cases, the corner cases.” And that was one step in that direction.

Tara Chklovski:

This year, we are working with this amazing professor. Her name is Batya Friedman. She wrote the first paper in computer science around machine learning bias, bias in machines. I think she wrote it in the early ’80s or ’90s, actually. And this was before bias became a real term like it’s being used so commonly now. And she has just published a book called Value Sensitive Design.

Tara Chklovski:

And there’s a term that I’d never heard before: moral imagination, right? We’re always thinking about creativity and technical imagination, technical creativity, but never from a moral point of view. And I think it’s becoming more and more important, because you cannot create products that have such massive scale and impact without thinking about the values of your users.

Tara Chklovski:

So they actually have a whole series of methods that they use or recommend for inventors to think about, like the direct stakeholders; I mean, just a whole series of steps. And it can be done. It’s just that people don’t like to. Human instinct is, “I have an idea. I want to go build it right away and deploy it and get my quick positive feedback.” But you just have to have some discipline here, because you realize that you have a very sharp knife. You’re not just going to wield it around any which way; you’ve got to take the necessary courses to learn how to use it.

James Kotecki:

And perhaps one of the issues is that the people who are designing this technology today, in the companies that are creating it, never had any kind of training in this, because, of course, it wasn’t a thing when they were kids. They never had a chance to build up that internal ethical frame, because their parents didn’t teach them about it; it didn’t exist when they were kids, because the technology is moving so quickly. What have you seen about how quickly kids can parse and figure out the ethical and moral dimensions of this? When you bring this up with kids, what do they say? How do they act?

Tara Chklovski:

I mean, look, fairness is inbuilt in us, right? Like even a three year old, if you give one three-year-old something and another three-year-old something else, they’re like, “Oh, that’s not fair.”

James Kotecki:

I know that well. I knew that well. Yeah.

Tara Chklovski:

Right? So it doesn’t take time at all for kids to understand when something is not fair. And so I don’t know if you know about that study where I think Frans de Waal did this experiment with capuchin monkeys. Have you heard about that one?

James Kotecki:

No, no.

Tara Chklovski:

It’s absolutely awesome. You should look at the video. They put capuchin monkeys in two different cages and they’re both getting cucumbers. And then in return, they have to give a stone back to the researcher and then they can see what the other capuchin monkey’s getting. And then suddenly the researcher gives one monkey a grape, and then she gives the other capuchin monkey the cucumber. So the one who gets the cucumber throws the cucumber back at the researcher because, [inaudible 00:14:52] because the other one got a grape, right?

James Kotecki:

Mm-hmm (affirmative).

Tara Chklovski:

And I mean, that concept of fairness is just inbuilt in us, right? So it doesn’t take time at all for kids to understand that; it’s just that we’re not being taught this. And a key part of what we are trying to build out in our AI curriculum is that it’s not enough just to understand the core technical elements. If you’re creating a prototype or a product that’s going to be deployed into the world, you need to understand the behavioral barriers to adopting it. You have to understand the moral framework of who’s going to be using it, what the impacts are on the people who are going to use it and on the people who are going to be indirectly affected by it, and then also understand your own journey as an entrepreneur.

Tara Chklovski:

Because just because you deploy the product doesn’t mean your journey ends there, right? You’re going to have all sorts of elements to think through. And then finally, recognizing that you’re probably creating a solution for a complex system, you have to understand what the rules of that system are, what the feedback loops of that system are. You don’t learn this in school. And so what happens is that as an engineer, you develop a product, deploy it and realize, “Oh my gosh, that thing broke, and that whole community was affected,” and you get this negative feedback loop. Now you’re just playing whack-a-mole, right?

James Kotecki:

Right.

Tara Chklovski:

And so I think these are critical skills, and the World Economic Forum has outlined the three main skills students need to be learning now: cognitive abilities, systems thinking, and complex problem solving. None of these three are taught in school.

Tara Chklovski:

I think social, emotional learning is being taught more and more in school, which is awesome. But I think it needs to be blended into the hard skills that students are learning, because it shouldn’t be just, “Okay, that’s what I do in my SEL class.” It needs to be blended into “how do I apply it when I’m actually solving a complex problem using technology.”

James Kotecki:

And with AI and machine learning specifically, the problems are even compounded, right? Because you can have data drift over time. You’re deploying these things into complex situations, and the real-world data that’s coming in is not always going to match the data that you trained on. In fact, it definitely won’t. And so over time you need to be maintaining these systems, upgrading them, tweaking them, and at the very least keeping an eye on them to make sure that they’re not doing something that you didn’t intend for them to do. All of which, as you mentioned, is completely different from a typical academic environment, where you take a test and then you can forget about it because you move on to the next thing.
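(A minimal sketch of the kind of “keeping an eye on them” James describes: comparing the distribution of an incoming feature against the training distribution and flagging a shift. The feature values and threshold are synthetic, illustrative assumptions, not anything from the show.)

```python
# Simple data-drift check: does the "live" data still look like the training data?
import numpy as np
from scipy.stats import ks_2samp

def check_drift(train_col: np.ndarray, live_col: np.ndarray, alpha: float = 0.01) -> bool:
    """Return True if the live feature distribution differs significantly from training."""
    result = ks_2samp(train_col, live_col)  # two-sample Kolmogorov-Smirnov test
    return result.pvalue < alpha

# Synthetic example: the live data has shifted upward relative to training.
rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_feature = rng.normal(loc=0.4, scale=1.0, size=1_000)

if check_drift(train_feature, live_feature):
    print("Feature distribution has drifted; consider re-checking or retraining the model.")
```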

Tara Chklovski:

Yes. And I think that’s that piece of metacognition, right? Understanding yourself: when do you get bored? And then, “Okay, you’ve created your product, you want to move on to something else, but guess what? It’s live in the world, right? Now all these people are using it. It’s broken or it’s not working properly. And you have a moral responsibility, because it’s affecting somebody’s life,” right?

Tara Chklovski:

And I think Batya Friedman gives a very interesting example: think about somebody who created a cell phone. You first think that the main person who’s being affected is, of course, the cell phone owner, but she’s like, “Think about the situation when you’re in a crowded bus or a train or an airplane, and somebody is having a loud phone conversation. Guess what? You’re an indirect stakeholder, because you are being affected by it, too.”

Tara Chklovski:

So think about what kinds of features you would want in your product that affect those people, because your products are never going to be in an isolated bubble. And I think these are the kinds of things that we want to make sure that we are giving enough tools and steps for people to be able to practice thinking along those lines where it’s not just you. You’re living in a highly connected web of humans, right, and they’re all affecting each other.

James Kotecki:

Yeah. Which of course the pandemic has also shown us in other ways, that everybody is connected. By the way, just as an aside, is your curriculum different during the pandemic? Does it take on a different flavor, or does it look different in light of the major changes that are kind of whirling in our world right now?

Tara Chklovski:

I mean, interestingly, we are basically an online curriculum platform. So our lives haven’t changed, but of course the participants who are taking part in our program, they have not been able to physically meet with mentors. They’ve not been able to physically meet with each other. And so motivation drops, which is true across the board.

Tara Chklovski:

And for many, they don’t have access to internet or technology, so their ability to even access the curriculum has dropped. I think we saw something like a 15% decrease in students’ ability to finish the full program, but we saw almost a 50% increase in traffic to our resources, from different places, from children stuck at home wanting to do something meaningful. And we created a video series just for that, because the pandemic makes you feel hopeless. There’s so much suffering, but you don’t need to feel that way.

Tara Chklovski:

And I think there’s one very, very interesting book written together by the Dalai Lama and Desmond Tutu that says the best antidote for suffering is to think about who else is suffering. It shifts your attention. The moment you start thinking about somebody else, guess what? You stop thinking about your own problems. And that immediately makes you feel better. So that’s the very first step.

Tara Chklovski:

The second step is that if you actually start to create something that helps someone, now you’re spending even more time thinking about somebody else and less time thinking about your own problems. And so that is the key to getting out of this, and why we focus on building powerful skills: trying to figure out how you can help your community. And we’re seeing so many young people stand up and create apps that help people in the pandemic, help people connect with one another. And I think we’ve also seen a lot of students create apps that tackle systemic racism and police brutality. And this has not been just recent; it has been happening over the years, which is the sad part.

James Kotecki:

Can you give me some additional insight into how the global diversity, the gender diversity, the different family perspectives play into people’s ability to come up with these different solutions? You mentioned it before: it’s kind of a grassroots idea of people building things that are needed for their communities. We’re also at a time right now, especially in the United States but all around the world, where we’re having an increased focus on racial issues and diversity issues, and how a lack of voices at the table, especially in tech, frankly, if we’re just going to focus on the tech industry, may be hurting people’s ability to come up with the best solutions.

Tara Chklovski:

For sure. I mean, we have, I don’t know, maybe 6,000 or 7,000 apps that the students and families have created over the past decade or so. And hands down, most of these are apps that you will never see coming out of the usual tech sector, because they’re just so, so unusual and innovative.

Tara Chklovski:

So I mean, the list is endless, but I can start in 2010, I think even before Uber was started, when a team of girls in New York created a Hail New York taxi app. And they really did it on their own; they did not know that there was this ride-sharing type of thing. Then I think that same year a team of girls created a Pinterest-type app for sharing really beautiful photographs and things. And the list just keeps going on and on from there.

Tara Chklovski:

I think many years ago, one app that really moved me was an app coming out of Egypt, where these middle school girls created a special safe forum for victims of child marriage, a very, very safe space for them to be able to share and support each other. And I was like, “Wow, these young women, these young girls are rebels, because they’re going against their societal norms, their family norms, to really create this technology-based solution for problems that they’re experiencing.” You’ll never get that app here.

Tara Chklovski:

And then from a racial diversity point of view, there was one app created by a team of African American girls specifically for athletes who were African American girls, because they were like, “There are not enough role models. We only hear of Serena Williams and Venus Williams, but there are so many more. We don’t have people to look up to and to feel proud of, and we are never encouraged as much as other girls. And so we created a specific app for that.”

Tara Chklovski:

Then there are so many apps that African American girls have created to help others walk in their shoes. So empathy-type apps, case-story-type apps, where you get trained in situations. And these came even before all of this unconscious bias training started to become popular.

Tara Chklovski:

So, of course, young people are always ahead in terms of what they are thinking. And then when you empower them with very, very powerful tools and you say, “Okay, find a problem that is big for you, for your community,” it’s amazing what they come up with.

James Kotecki:

It sounds like a lot of these applications are highly practical and kind of grounded in people’s immediate lives and truths. And that’s really at odds with, I think, the way that some people perceive AI: as this magical force that you can kind of sprinkle on something, or this sentient robot that’s going to come in and either solve your problems or take over the world. Have you found that kids and students are perceiving AI in that magical way? Or is that kind of a relic that maybe adults are holding on to?

Tara Chklovski:

I think it’s across the board. I think it’s more because of our curriculum, right? And that’s the value of education: when you learn about something, the curtain sort of falls, right? And it’s the same thing here. They come in really not knowing what AI is, and then they’re like, “Okay, well, it’s a machine. You’re training it on this data set. You have to build this data set. These are all the problems with the data set,” and all of that. And it’s a tractable problem.

Tara Chklovski:

But yeah, you’re completely right. I think it’s a function of the curriculum, where we ground it in finding a problem that’s applicable to machine learning techniques, and that matching is not always easy. And I think that’s our responsibility as educators, to make that clearer so that the students and the families are able to come up with feasible solutions.

Tara Chklovski:

One of my favorite ones is a mother and daughter team from Palestine. They created an image recognition system to look at children’s drawings to see if they were suffering from depression or bullying, and they worked with a psychologist to train that model. Incredibly practical, valuable, but again, you would never have thought of it, right? But they experienced that problem, and so they used this technology to help them.

James Kotecki:

One of the other perceptions that people have about AI is being scared about their jobs. Do you get people who come into the program with that perception? What do you say to people who are concerned about jobs? Because it seems like what you’re doing is offering a real entrepreneurial angle, a totally different angle on this, which is: don’t be afraid of the technology. Use it, create your own job, create your own company, whatever that might be. But I assume that you run into this fear, this kind of job-loss fear, quite a bit.

Tara Chklovski:

For sure. And I think that’s also a lot of the reason why some of the parents come. They come because they want their children to get a better education, but they’re also curious about this, right? Because they’ve heard it from their company, maybe their factory line is shutting down and they’ve heard, “Okay, there’s this thing, we need to learn these new technologies,” or whatever, but whatever is offered online is so scary and dry and boring.

Tara Chklovski:

And then this is a fun way to spend time with your kids and basically bond with them. The kid learns, but then you may learn something, too. And that’s our trick, really: the family co-learning. So many of society’s core activities are done in multigenerational groups, right? You go to church together, you go to a party together. These are bonding activities, but learning typically is very segregated into different age groups. Why, right?

Tara Chklovski:

And so that’s one of the key reasons why we want to change that, to build those social human connections and to allow for intergenerational ideas. So one of the things that I’m just trying out now is a grandmas’ coding club. Because again, I think you’re never too old to learn. And these grandmoms have iPhones, and they use face recognition, they use voice recognition, but they don’t know how it works, and they’re curious. And they want to connect with their grandchildren, so I think there’s no reason why not. These are just mindset issues that we have to overcome, and you can overcome them by being inspired by people like you.

James Kotecki:

In the final 60 seconds that we have, I really like what you’re doing, so I want to give you a chance to plug it to the fullest extent possible. If people are interested, where should they go to find more, what should they be looking for? How can they sign up or help you?

Tara Chklovski:

Yeah. I mean, yes, any kind of support. We are always looking for mentors, for volunteers, for people to inspire our participants. Go to technovation.org to sign up to be a mentor or a volunteer. We’re actually going to run a pilot with UNESCO next month to specifically support girls in many countries and combat the pandemic learning loss. Girls are in the crosshairs of that. And we would be looking for awesome individuals to come learn with them.

James Kotecki:

Well, great. And mentors can be-

Tara Chklovski:

Anyone.

James Kotecki:

… people that, are they primarily technologists or really anybody?

Tara Chklovski:

No. Anybody who wants to learn because the whole point is that you want to model to a child, “I don’t know. Let’s go find out.” That’s all. So that’s the key to staying young.

James Kotecki:

Well, I really appreciate this interview. In a very crazy time for the world, you’ve delivered a very feel good interview here because I feel good about what you’re doing and how you’re helping. And so I’m sure that people watching would feel the same. So Tara Chklovski, thank you so much for being on Machine Meets World.

Tara Chklovski:

Thanks, James. Have a great day. Bye.

James Kotecki:

Thank you so much. And thank you for watching and/or listening, depending on which one you just did. If you’re watching this as a video, you can listen to it as a podcast, and if you’re listening to it as a podcast, guess what? You can watch it as a video. This has been Machine Meets World, a production of Infinia ML. Email us at mmw@infiniaml.com. I am James Kotecki, and that is what happens when Machine Meets World.


Originally published at https://infiniaml.com on June 23, 2020.

Machine Meets World from Infinia ML

Weekly Interviews with AI Leaders

Written by James Kotecki

VP of Marketing & Communications for Infinia ML, a machine learning company. Speaker from North Carolina to South Korea.
