Before her current role leading MIT AI startup Nara Logics, Jana Eggers drove innovation at a variety of companies, from nonprofit CRM Blackbaud to on-demand shirt printer Spreadshirt to products at Intuit. Before that, she worked at Los Alamos. A frequent speaker and practical thinker in the fields of AI and innovation, she urges audiences to listen better and think about culture, often humanizing an otherwise technical field.
She’ll be taking the stage in a couple of months at Pandemonio, so we sat down with her to get a bit more detail on artificial intelligence, whether it will be friend or foe, and who stands to benefit the most from an algorithm-driven world.
Pandemonio: Is Elon Musk right about the singularity?
Jana: Oh, gracious, we are starting there? Elon has said quite a bit about AI and has raised fear levels. I’ve wondered myself before whether we are in a simulation, i.e., whether the singularity has already come and gone and we are its artifacts. Déjà vu can send you that way sometimes.
Two important points to me on this:
- We need people to be more actively involved in AI and its creation, versus hand-wringing about a singularity event. We have finally reached a technical point where AI can start working in our daily lives. In order to make sure we understand how it is controlling us, versus us controlling it, we must take a hands-on approach. And yes, that means working together with nerds like me and my cohorts to make sure we, together, build the objectives for each step along the way. We need a diversity of hands guiding this big step. I see this as similar to the opportunity we likely missed with climate change. We could have changed course and gotten ourselves onto the right path. Now we will have to “science the shit” out of a solution. Let’s not do the same with AI.
- If we are in a simulation, let’s make it a kind and fun simulation. Obviously, the creators are giving us some control. Why not surprise them with some amazing outcomes, like being the first simulation to cure cancer? The point is, why does it matter whether we are or not? If, in the simulation afterlife, we get to step out and see how we’ve done—don’t you want to be proud?
P What’s the difference between statistics, programming, machine learning, and AI?
J First, to give you my perspective, I’m a mathematician, and my least-favorite math class was statistics. For me, statistics are too often used merely to prove a point, rather than to learn. And when we get things “wrong”—like, oh, say, an election prediction—lots of blame goes to the statistician, who, honestly, just produced the numbers. We read into them what we wanted to believe, like that 70% = 100%. So with that large grain of salt, here are my answers, given not as true definitions but as useful guidelines:
- Statistics. Analysis of data. You should understand (a) the techniques applied for the analysis, (b) the data and its provenance, and (c) the person’s perspective who did the analysis.
- Programming. Much like sewing is a needle pulling thread, I would say programming is a person writing directions.
- AI. A much-abused term for whenever we think computers are performing cleverly. Note, our definition of clever changes the more we see something exhibited.
- Machine learning. To me, this is the most interesting definition of the group, and I equate AI and ML now, as it is the state of the art, i.e., things like expert systems aren’t clever anymore. The most important point on ML is that you can show how the machine is actually learning. The key differentiator is not that the machine is programmed to make a different decision given a different context, but that it will actually give a new answer based on learning from data.
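That last distinction—directions written by a person versus an answer derived from data—can be sketched in a few lines. This is my own minimal illustration, not anything from the interview; the function names and the temperature example are invented, and the “model” is deliberately trivial (a single threshold fit from labeled examples):

```python
def classify_programmed(temp_c):
    # Programming: a person wrote the rule explicitly.
    # The answer changes only if a person edits the code.
    return "hot" if temp_c > 25 else "cold"

def fit_threshold(examples):
    # Machine learning (in miniature): the rule's parameter is
    # derived from labeled data, so different data yields a
    # different answer without anyone rewriting the code.
    hots = [t for t, label in examples if label == "hot"]
    colds = [t for t, label in examples if label == "cold"]
    # Threshold = midpoint between the two class means.
    return (sum(hots) / len(hots) + sum(colds) / len(colds)) / 2

examples = [(30, "hot"), (28, "hot"), (10, "cold"), (12, "cold")]
threshold = fit_threshold(examples)

def classify_learned(temp_c):
    return "hot" if temp_c > threshold else "cold"
```

With this data the learned threshold lands at 20, so a reading of 22 is classified “hot” by the learned rule but “cold” by the hand-written one—the machine gave a new answer because the data, not a programmer, set the rule.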
P When it comes to building smart software, how much can we learn from the human brain?
J We are not at the point where we can emulate the human brain, both because of how little we actually know about it and because of the current state of hardware and software capabilities. That said, we can be inspired by what we are learning about the human brain and how it works.
Our work at Nara Logics is inspired by how the neurons in our brain decide to connect, and specifically on how a connection is strengthened or weakened. We don’t know exactly how our brain stores information yet, but we do know how the neurons fire to retrieve information for a decision. To give you two biological examples, we are inspired by the connectome and inhibitory neurons for how to connect data for intelligence, learning and retrieval, i.e., recommendations and decision support.
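The idea of connections strengthening or weakening based on activity can be sketched with a Hebbian-flavored update rule. To be clear, this is my own generic illustration of that biological inspiration, not Nara Logics’ actual algorithm; the function name, learning rate, and decay constant are all invented for the example:

```python
def update_weight(w, pre_active, post_active, lr=0.1, decay=0.01):
    # Hebbian-style rule: a connection strengthens when the two
    # units it joins are active together, and slowly weakens
    # ("use it or lose it") when they are not.
    if pre_active and post_active:
        w += lr * (1.0 - w)   # strengthen, saturating toward 1
    else:
        w -= decay * w        # decay toward 0
    return w

# Repeated co-activation drives the connection weight upward.
w = 0.5
for _ in range(10):
    w = update_weight(w, True, True)
```

Inhibitory neurons would add negative-weight connections that suppress downstream activity; the same strengthen-or-decay logic applies, which is one way repeated experience carves stable retrieval paths out of noisy data.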
P Is AI going to save us or kill us? Is there a better way to think about it?
J Well, I referenced above how we didn’t respond to climate change soon enough, making our problem much harder—if not impossible—to solve for this planet. So, likely from that you won’t be surprised that my answer is “yes, it has the potential to do both.” And also, “yes, there are better ways to think about it.”
The best way to think about it is that we are in the driver’s seat. We need to take this seriously, but that doesn’t mean we can control it. I liken it to raising a child. We can’t lock it in a room (regulation) and we can’t send it to bars (Tay) as we raise it. We need to take it out into the world. Test it. Understand it. Not expect it to be like us.
I ask folks to think of AI like artificial light: It enables us to do some things better and some things we couldn’t do before, but it didn’t replace the sun. Your kids don’t replace you, but they do carry forth your values, including, sometimes, the ones you don’t want them to.
P Will access to machine learning create a new digital divide of haves and have-nots? If so, how can we avoid it?
J While it could happen, the likelihood of AI creating a new digital divide is low. Regulation could create one, but that would only hurt the group being regulated: the rest of the world would move forward.
I’m more concerned with it widening the economic divide through job loss. There will be job displacement, and as we have shown in the U.S., if we don’t focus on this, people are not able to find work. As Tim O’Reilly has said, if we let this hurt jobs, it will be due to our own failure to focus on augmentation and on transferring work to other problems, i.e., we aren’t running out of problems, so we shouldn’t run out of jobs.
I’ve said similar things: This is all about what we value. As soon as a job is automated by AI, the product of that job becomes a commodity, and we need to direct that value to the next space that needs higher-value effort.
We can do that. I know we can.