Enterprise AI, Diversity, Trust, Ethics, Humans in AI with Beena Ammanath | Stories in AI

Ganesh Padmanabhan
Published in StoriesinAI
25 min read · Aug 12, 2021

In the latest episode of @StoriesinAI, I spoke with Beena Ammanath. Beena is the executive director of the Deloitte AI Institute and also leads Trustworthy AI for Deloitte. Beena is an award-winning senior executive with extensive global experience in AI and digital transformation journeys spanning e-commerce, finance, marketing, telecom, retail, software, products, services, and industrial domains, with companies like HP Enterprise, GE, Thomson Reuters, British Telecom, Bank of America, E-Trade, and a number of Silicon Valley startups. She’s also the founder of a nonprofit called Humans for AI. A world-recognized thought leader in the space, she also serves on the board of the Cal Poly College of Engineering, and has been a board member and advisor for several startups. Beena thrives on envisioning and architecting how data, artificial intelligence, and technology in general can make our world a better and easier place for all humans.

I had a great conversation with Beena, and we touched upon many interesting topics: the state of the market within AI, the spread of value creation with AI across multiple industries, what organizations that are leading the charge with AI and technology do to actually further their AI journeys, and the best practices you can learn from them. We talked about ethics and trust with respect to AI. And we talked about expanding the human opportunity pool with technologies like AI. It was a fascinating conversation, and I hope you enjoy it too.

Ganesh: Beena, welcome to Stories in AI. How are you today?

Beena: I’m great. Thank you so much for having me Ganesh. How are you doing?

Ganesh: I’m doing fantastic. It’s sunny in Austin, although I must say it’s beginning to feel like Seattle this year with all the rain we’re getting. You guys are having a heatwave too, right?

Beena: Yes. It was hot, but this week is surprisingly cold again; I actually had to pull out a sweater. I was just thinking the other day about climate change, and how the seasons have evolved and are evolving right before our eyes.

Ganesh: Right in front of our eyes, much sooner than we all expected. Ironically, I was supposed to record with a climate technologist this morning. She lives in the Netherlands, and she texted me this morning saying, “Hey, there’s a lot of flooding; over 100 people are missing or dead because the Benelux region is undergoing some unprecedented flooding.” So we had to move that recording to another time. But this is unfolding much faster than we all expected. And I’m hoping that at some point, all of us technologists can get together and put AI to work harder to solve these problems. So thanks again for taking the time. Thank you for being here today. I know you have a very busy schedule. I’m going to get right to it, if you’re okay.

Beena: Great. Let’s do it.

Ganesh: Okay, so Beena, tell us your personal story; let’s start with that. Tell us how and why you got into AI.

Beena: My personal journey, I would say, is a little too common. I studied computer science for my bachelor’s and master’s, so both undergrad and grad school were in computer science. And this was way back in the early 90s. AI was one of the subjects we studied, but it was all in theory; we certainly didn’t have access to the compute or the data. I can’t believe that in my own lifetime I’m seeing this technology becoming real.

So when I started my career, it was more as a programmer, as a software developer, and a software engineer. But I always anchored myself on data. And I’ll tell you Ganesh, I certainly didn’t plan it out that way. But I have experience across different industries. I’ve worked in financial services with banks and trading companies. I’ve worked in industrial companies, from aviation to power and healthcare, and also in high tech companies. I kept myself anchored in data.

So I had the traditional career path, starting from database developer, to DBA, to managing data teams. And then came the phase of BI and data warehousing, so I ran the BI team at E-Trade. And then, as AI started becoming real, as machine learning started taking its own shape and driving real business value, it was a natural evolution into, “How do we use big data? How can we get insights from it and drive business value?” That’s how I got into AI. And I feel that I’ve been so fortunate that I saw this technology taking shape right in front of my eyes and have been part of that journey. I feel grateful that even though I didn’t plan my career this way, it’s a career mosaic where I have experience across all these different industries, which gives me a unique perspective on how AI is impacting different industries. And now, with my role at Deloitte, I get a front-row seat into how AI is shaping different industries, and I can really look ahead, not only at the value creation, but also at how this could go wrong. How do we actually make it successful for everybody and address some of the challenges that are core to humanity’s survival? Just like you were saying earlier, how do we get AI to work to solve for climate change?

Ganesh: It’s fascinating. I can’t wait to dig into all of that. But you still had the foresight to pretty much bet your career on data, in terms of saying, “Look, computation and data are so fundamental, and they’re going to evolve over decades, over centuries, if you will.” Having and building that foundation is pretty amazing. So you did have the foresight to do that.

Let’s start there. Tell us a little bit about your role at Deloitte. As the executive director of the AI Institute, what does the institute do? Give us a brief overview, and also a little color on what your typical day is like.

Beena: There is no typical day, Ganesh. I can tell you that the only typical part of it is sitting on Zoom calls and doing great, amazing, interesting work. So what do I do at the Deloitte AI Institute? The Deloitte AI Institute was set up with a vision around something I struggled with when I was leading AI at these large companies. The AI ecosystem is very robust, and it’s continuously evolving. The research phase is not yet over, while the technology is already being used in the real world. There are startups, there are mature players, there are regulations and policies coming into play; there’s so much happening in the AI space. And what we’re doing with the AI Institute is really connecting across all these different dimensions of AI, but from an applied AI lens. Meaning: how are financial services companies using AI? What are the top applications? What are the best practices? What are the risks they’re facing? Being able to look at AI from a very applied perspective is what we do at the Deloitte AI Institute. We also look at more of the fuzzy areas around ethics, and how to get more diversity and inclusion into AI. There are a lot of different factors that go into succeeding with scaling AI in enterprises, and the Deloitte AI Institute is positioned to become that one stop for all the information you need from an applied AI lens. I also focus on our AI ethics perspective, because, as I said, that has been at the forefront of what I’ve been thinking about in the past few years; it is really the risks involved with AI. And Ganesh, I’ll give you this perspective on how I think about where we are with AI today.

If you think about where we are with AI, there are three parallel streams, so to speak. One is core AI research. Whether it’s the next wave of deep learning, neural nets, or facial recognition, these technologies are still being researched and developed in academia and research groups. That’s one stream, accelerating at its own pace. Then there’s a second stream, where this technology, which is still being developed, is being used in the real world, because you can see the business value you can get even from this not-so-fully-mature technology. So there is value that companies are getting today by using a technology that’s still being developed. That’s the second stream, and it focuses on value. But what about the negative implications, where things like ethics and policy and regulations come into play? Whether it’s the impact on an individual’s health or challenges with manipulated data, there is a third stream which is really looking at the negative impacts of this technology, and it’s accelerating at its own pace. So if you think about it at this high level, the crude analogy I give is: we’re still developing the car engine; it’s still in development, we’re still fine-tuning it. But we’ve already put an auto body around it, and I’m driving that car, because it still gets us from point A to point B faster. But in the third stream, we don’t have the speed limits defined. We don’t have the roads structured to keep up with this technology. So each one of those is still evolving.

And so for me, it’s just fascinating to see how it’s shaping out and how we really need to not only talk about value creation with the technology, but the ways in which it could go wrong and put in the guardrails ahead of time. So that’s the focus around AI ethics and trustworthy AI, which is an area I focus on as well at Deloitte.

So a typical day goes from talking to AI startups, to looking at what’s going on in AI research and how it’s applicable in the real world, to looking at the ethical aspects: what kind of teams do we need, and what does an ethicist’s role look like? It is really an interesting time to be in a role like this, where I get this view across all the industries and get to be part of this rich AI ecosystem.

Ganesh: No, it’s fascinating. I mean, I’m jealous. And like you said, if you really think about it, it’s such a dynamic environment, with all these well-defined streams of work happening: research, where you’re trying to create new ways of doing things; the practical or applied AI space, where you’re trying to generate value out of it; and then you always have to start thinking about the risks and the impact of what you’re doing in the broader spectrum. In many ways, it’s like the whole notion of machine learning itself, where it’s never done. In traditional software, you can build a product and ship it, but machine learning is an iterative science. And it’s fascinating that you say the industry itself is, in effect, a reflection of the kind of science we’re actually dealing with here. So it is pretty fascinating, and thanks for laying out where AI is today. So let me go a little bit deeper on that and ask you: what is real in AI? What is hype, and what is science fiction today?

Beena: Yes. So AI is driving real value in businesses, but it is nowhere close to taking over an entire business and running it on its own. We do an annual survey (we’re actually in the midst of the fourth one right now) where we look at the state of AI, and we saw that the companies using AI fall into three different categories.

The first category is where they’re just beginning to think about AI and how it fits in, doing a few PoCs; just very early in their journey. And then the last one is where companies have AI embedded and have a number of AI solutions in production, and are more on the mature side. But the largest group is in the middle, where they are still doing PoCs and have a few AI solutions in production. Where we think the real world is, Ganesh, is in the narrow applications of AI, where you can do basic automation leveraging machine learning and AI, whether it is developing a chatbot to drive better customer engagement or just fitting into existing processes. What is not real is AGI, or the hype around robots taking over the world. I think we are still very far from it. And the reason I say this is, if you look at that first stream of research, I don’t think we have the models or the technology ready today to do any kind of artificial general intelligence. An example I give from the real world: with you and me talking here today, there may be some AI enabling our conversation that could potentially do real-time translation, but that’s about it. It does a very narrow, specific job. So anything within an organization that can be done with less than 30 seconds of your brain power will be automated, and is being automated today. And we are looking at how to expand from that 30 seconds of brain power to automating more of the five-minute work. So that’s the best way I would describe where we are with AI today.

Ganesh: You’re so right. I think it’s a chicken-and-egg problem. You had Hollywood pretty much defining the narrative for AI in the industry, which did two things. One, it spread fear, where people think Skynet and robots are taking over; this creates a doomsday scenario in people’s minds when they think of AI. But on the other hand, it also popularized AI and made it cool for scientists and researchers to get into it and say, “How do I really apply this technology to create a digital human?” The AGI conversation is the moonshot of AI, the true artificial general intelligence. And just like the moonshot, you can’t go to the moon to do grocery shopping. But the fact that we actually went to the moon created a wave of innovation that enabled so many other practical applications across multiple industries. That’s the power of AGI, or of the focus on AGI. But it’s exactly what I’m hearing and seeing too: the level of knowledge work that can be automated is still fairly limited, and it’s very narrow. The goal is to improve that and move it toward the notion of, “How do you augment the human being?” You make their life a lot richer, right? You make it more fulfilling for them to do the things they love, the things that are more innately human, and so forth. It is fascinating to get your perspective on this. So on industry impact as well: which industries are seeing more impact than others? For AI in general, we do see that the large, big tech organizations are obviously getting lopsided benefits. They started first, they could hire more people, they have more data. But the other industries are catching up. Give us some color on that. What do you see across industries? What’s working? Where is it adopted more versus not?

Beena: Yeah, I think where we’re seeing the most traction, and naturally so, is where a lot of data has already been captured. So banking and the financial industries, and also life sciences and healthcare, more on the pharma side, where there has been a need for data, and it was captured, it was structured, and it was ready for AI to become real. We do not see as much traction, obviously, in, say, manufacturing or industrial companies, where there was no need to capture that data; there was no incentive to capture it beyond just transactional purposes. I’ll give you an example from my own experience. In an industry like aviation, when we started looking at doing predictive maintenance on jet engines, we started looking at the engine data, and it was just what’s on that black box, which is a really small amount of data. At that time, the data for engines manufactured a few decades ago was something like 30 to 40 sensors captured at three points in any flight: once during takeoff, once mid-flight, and once during landing. That was it. Once AI started becoming real, we started seeing the need to go back to the design and embed additional sensors in the engine to capture the data that’s needed for AI. So where there hasn’t been a traditional data need, those industries are obviously lagging a little bit. But they can catch up; there are new engines coming out, in this specific case. There’s also the need to tie in existing legacy data sets. You know as well as I do that AI is not just about big data; for the predictive maintenance of an engine, you need those small data sets too. You need the service records. And when you start looking at the service records, they exist, but they’re handwritten on paper, which is how it’s always been done. So it’s not so much about AI at that point. It’s about data cleansing, getting all those service records digitized, which is a whole exercise on its own.
I think we sometimes get a little lost in just focusing on big data. When you look at these traditional industries, there’s a lot of opportunity, but there’s a lot of foundational work that needs to be done. And that’s what we see happening in the real world as well. When you look at something like service records, you realize it’s about changing the way the technician does the work or captures information. That means you need to digitize the entire process and operations. It’s not just about digitizing existing records, but about how you get future records and the additional data points you need to capture that weren’t in the earlier form. Ganesh, the best way to think about it is: industries that came into existence in the past 25 to 30 years, which is primarily where big tech falls, versus industries that have existed for decades and centuries before that, which need a lot of foundational data work to be done to make AI successful.
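The workflow Beena describes, a handful of per-flight sensor readings combined with digitized, previously handwritten service records, can be sketched in a few lines. The field names, thresholds, and keywords below are purely illustrative assumptions for the sake of the sketch, not anything from a real engine program:

```python
from dataclasses import dataclass

@dataclass
class SensorSnapshot:
    """One of the few readings captured per flight (takeoff, mid-flight, landing)."""
    phase: str
    egt_celsius: float  # exhaust gas temperature; illustrative field name

def needs_inspection(snapshots, service_notes, egt_limit=900.0):
    """Flag an engine for inspection if any sensor reading exceeds a
    (made-up) temperature limit, or if digitized service notes mention
    known wear keywords. A toy stand-in for a real predictive model."""
    if any(s.egt_celsius > egt_limit for s in snapshots):
        return True
    # The "small data": free-text service records, now digitized and searchable.
    wear_keywords = {"vibration", "oil leak", "blade wear"}
    notes = " ".join(service_notes).lower()
    return any(k in notes for k in wear_keywords)
```

The point of the sketch is the shape of the problem: the sensor data alone is tiny, so the digitized service records carry much of the signal, which is why the foundational digitization work matters before any modeling does.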

Ganesh: Got it. Thank you for that perspective; you summed it up very well. The newer industries, which started with a data-first approach, are obviously in a position to adopt AI and technology to power a lot of their transformation journeys. And the traditional, historic industries, which didn’t really have the incentive to collect or look at data before, are the laggards here. But there’s also a lot of latent potential energy in there that you can leverage. When you move from a purely digital landscape to bridging the digital and physical worlds together, you unlock a whole bunch of value. Thank you for that perspective. You know, one of the things I’ve been really inspired by in your work is how you highlight the efforts and contributions of women and underrepresented minorities in AI, including your Humans for AI work and the Deloitte AI Institute’s report on women in AI. Tell us two things. One, is there a diversity problem in AI? And two, what is the impact of not doing anything about it?

Beena: Yeah. There is a diversity problem in general, in tech and in STEM, and at every level. It’s not just there when you’re getting started in your career, but right up to the boardroom. It’s a well-established fact that there’s a diversity problem. And I’ve always been a supporter and have tried to get more women and underrepresented minorities to be part of technology and STEM. But once AI started becoming real, Ganesh, and I was setting up my first data science team, I distinctly remember being in that phase of hiring data scientists and data engineers. And I really struggled to find women, and especially people of color; it tended to be a very homogenous group. Sometimes, even though I was the leader, I was the only woman at the table. It got to a point where I realized that it’s a problem across broader tech, but with AI there is more urgency around getting diversity right. The reason is that we hear a lot about bias in AI, about fairness and ethics, and the best way to address those is to have a diverse AI team. The challenge is, we are all biased, but that’s part of what makes humans beautiful as well. If you and I were exactly the same and behaved exactly the same, that would make us really boring. Bias is a human characteristic; it’s based on our prior experiences, our educational background, and our cultural background. There is so much that makes us individuals. But we don’t want those biases to go into the AI systems we build. So no matter how great I think you are, I don’t think all your biases should be encoded and scaled out to all of humanity, and vice versa. I think the best way to solve for it is by bringing in diversity of thought. It just happens that gender diversity is the largest gap right now; women tend to be the largest underrepresented group. So let’s start with getting gender diversity right in AI.
But the reason I started Humans for AI was that I was really looking at whether organizations were focused on getting more diversity into AI. That was one data point. The other thing I realized is that AI actually provides us a huge opportunity to bring more diversity into the field. Because unlike traditional software, you don’t need to be a programmer. Not everybody on a data science team needs a PhD in machine learning or AI; you still need designers, you need testers, you need domain experts, and that’s where I think the biggest opportunity is. The data scientists themselves may be a homogenous group, but let’s surround them with diversity by making sure that the subject matter experts, product managers, project managers, and designers are as diverse as possible. That way, we’ll have a robust, diverse AI team. That’s what prompted me to start Humans for AI: I realized there was an opportunity we could address, and that not everybody needed deep technical knowledge to be part of an AI team. So what Humans for AI does is raise awareness, but we also provide basic AI literacy by partnering with other nonprofits who are focused on providing education to underrepresented minorities and women’s groups. That way, we can get AI in front of people who otherwise would not see it. I would love to share a story that happened recently, Ganesh, which actually gave me goosebumps. We started an alliance with UC Berkeley, where UC Berkeley provides basic AI literacy training to a cohort of women and underrepresented minorities. And we were able to partner with an organization that finds and supports human trafficking victims. Most human trafficking victims are targeted when they are in middle and high school. Once they’re rescued, this organization gives them the ability to complete high school and get to college.

We got this cohort to come in and learn about AI, because they definitely use AI. And they felt so empowered after that two-day session, because nobody looks out for these groups of people, who would otherwise get left behind. To have them sit through these sessions, understand how AI is being used in companies, and get that basic understanding of what machine learning means and what NLP is; being able to read a news article about AI and understand it was very empowering.

So I think what we want to do is really make sure that nobody gets left behind in the AI journey. Whether you’re part of the AI team or not, even as a consumer, you should be very well educated on AI capabilities, it should not be just people like you and me who have that privilege to be at the forefront with this. It should be everybody. So it kind of evolved into solving for the diversity problem, but really making sure nobody gets left behind.

Ganesh: That’s such a beautiful story. Thanks for sharing that, Beena. It’s fascinating. The true goal for all of us who share this common vision for AI is: how do you really use AI to augment humans rather than replace them? My favorite analogy is, I don’t want Terminators; I want Iron Man, where I am Tony Stark, and I put on armor powered by AI and robotics, and I just become superhuman. And to hear that story, where the groups that are potentially left behind are given a glimpse of this powerful technology, shown how it can empower everybody, and inspired to take action; I’m hoping a lot of them will get into data science or into working in AI in the process. That’s such a beautiful story you shared. On top of that, I think AI also provides you that opportunity. Because AI, unlike other technologies, is a team sport: you need data, algorithms, and design. There are so many elements that need to come together. So unlike, say, Java development, this now provides you with an opportunity to bring a larger group of people together.

Beena: The biggest opportunity is there, where you need subject matter experts, whether it’s the data tagging and labeling that needs to be done for AI, or the fact that there is only so much a purely technical data science team can do in using AI for healthcare products or for education. If you’re looking at building an AI product for education, as data scientists we can barely scratch the surface; we can automate the small things. But you need an educator involved to really grow that AI and to think about the next AI product; it cannot be done without the educator. So can we make those educators as diverse as possible? That’s an opportunity.

Ganesh: That’s a huge opportunity. Just a digression: I’ve been thinking about this exact problem. When we talk about human-centered AI, what is becoming increasingly clear is that across data, algorithms, software infrastructure, and subject matter expertise, the subject matter expertise is becoming the most critical piece. To truly scale the value of AI, that’s where you need to focus, and that’s available in abundance. That’s the amazing opportunity in front of us. Fascinating. You touched a little bit upon trust and ethics in AI. Explore that a little for me. What is ethics in AI? Is it my ethics, is it your ethics, is it one common code of ethics? What is ethics in AI?

Beena: Yeah, so I tend to put it in that third stream of the negative consequences of AI. Right now we call it ethics, and there are policies and regulations. But for me, ethics differs based on the industry, and even within an industry, based on the sector or the use case. Think about life sciences and healthcare as a whole. When we talk about ethics, a lot of the literature you see out there very quickly goes down the path of bias and fairness. But the reality is that if you are a manufacturing plant, or an industrial company without that direct contact with a consumer, fairness and bias may not be as important as something like the reliability of the algorithms or the security of the algorithms. So I think ethics is a very crucial discussion, but it also has to be more nuanced. I’ve certainly been part of a number of sessions talking about it, and we’ve developed a framework here at Deloitte that we are working on operationalizing. Ethics has to be more nuanced than just fairness and bias. There are different dimensions to it, right from who is accountable for an AI system if it goes wrong, to who is liable. Like you briefly touched on, the AI solution that you develop and deploy is still learning and evolving once it’s out in the world; that has implications. Is there an implication for human health that needs to be addressed? What about the privacy aspect?

So there are different dimensions to the discussion on ethics. And I think it needs to be solved not just at the technology level. At the highest level, any company needs to define what ethics means for the company, because it cannot be left at the individual level; we all have different definitions of what ethics means to us. As an organization, it’s important to define upfront what technology ethics and the AI ethics principles are for the organization. Then it needs to be looked at from a technology perspective: how to put in the guardrails and embed them within the technology, so that your chatbot doesn’t go racist, for example.

And there is a process aspect to it, where you’re actually putting in control mechanisms, whether into your existing processes or new processes. For example, we’ve all done project management and agile methodology where, say we’re building a new AI solution, we define the ROI upfront: what’s the return in one year, or in five years? What is the value you can get? The process change could be as simple as adding a checkpoint where the entire team thinks through the ways in which this could go wrong. Proactively add that simple step, have that discussion on the ways this could go wrong, and address it during your engineering process. So that’s the process and governance aspect of it.

And then there’s the third one: the people. We hear a lot about bringing in an ethicist or an AI ethics officer, but really, ethics should be everybody’s job. So how do you provide that? Through training, and through learning and development. How do you make sure that everybody in the company understands what AI ethics means for us? How do you detect an issue? How do you make somebody aware? And here’s why it should go beyond the engineering department, Ganesh. There might be somebody in a supporting function, in your CFO group, say an accountant, who is today evaluating an AI vendor with an accounting software solution. That person needs to understand what to look at from an ethics perspective in that software evaluation, not just the functionality of it. How has the software been developed from a privacy perspective? Having that basic understanding is important. So education should be a big part of the solution for AI ethics.

Ganesh: You’re laying out a very comprehensive and practical roadmap. Even in our earlier conversation, when we were talking about biases and underrepresented folks within AI development and operational teams, there’s a lot of finger-pointing: the data is not great, or the data has bias in it, and so on. But it’s more than just one thing. It’s the entire ecosystem of elements that needs to come together to deliver value. And that’s where I think this framework you laid out, looking at ethics across the spectrum, is very critical. Now, we’re running a little bit out of time, but bring it home for me. What is some practical guidance or advice you have for organizations who are looking to scale their AI journeys, and for those who haven’t started yet and are about to? Give us some practical recommendations.

Beena: I think for organizations that are starting out or very early in their journey, my advice is: don’t make your first hire a data scientist. Make it somebody within your organization, and train that person. Depending on where you want to bring in AI, make that person AI literate and AI fluent. Start with the business problem you want to solve, rather than bringing in the technology upfront. I think starting with business problems is the best way to succeed with AI.

For the companies that are further ahead in their journey, I would say: focus on that third stream I was talking about, the risk, governance, and ethical impacts. As you scale out, make sure ethics is embedded not only in your development process, but also in your MLOps process, because that software is continuously evolving and changing. It is time to think about the risks and address them proactively. You’ve gotten a lot of value from AI, but also think about how things could go wrong, and address it today, as opposed to waiting for when it actually goes wrong and then reacting.

Ganesh: What is one thing you would like everybody in the AI industry to be working on to solve right now?

Beena: I really hope we move beyond value creation just for organizations to value creation for humanity as well. I think all the triggers are there, and I’m very hopeful that we’re going to see more brainpower being put into solving some of the largest problems we face as humanity, whether it’s climate change and sustainability, human trafficking, cancer, poverty, or world hunger. There are such big problems in front of us. Now that we’ve seen AI succeed in companies, let’s come together to tackle some of humanity’s largest problems and figure it out. We can do it.

Ganesh: Very well said, Beena. I also think it’s an incentive problem. We have to structure things so that the right incentives exist for more innovators, entrepreneurs, and companies to focus on that.

Beena: I’m certainly seeing more of that happening now than ever before, whether it’s the COVID pandemic, which made us all pause and think, or things like climate change, which we’re dealing with today. I’m very optimistic about it; I think there will be more incentives than ever before directing us down that path.

Ganesh: Absolutely. Beena, I have some quick, rapid-fire questions, if you will. Give me a story of how we will be interacting with AI as humans in, say, ten years. I used to ask this question about a hundred years out, but that was too far; people would say we’ll be going to space. So in ten years, how do you see the interaction?

Beena: For example, if we were having this conversation ten years from now, I would probably be seeing a 3D image of you; it might feel more human. You might be talking in one language and I might be talking in a completely different language, but without any delay we would understand each other, because AI would be doing real-time translation. AI would also be nudging us and providing us with additional data points that we don’t necessarily have to keep in memory, just like we don’t memorize phone numbers anymore; I don’t even remember my son’s phone number. As the conversation unfolds, we would have all the relevant data at our fingertips, and whatever I could share with you, you would have too. So I see AI truly augmenting us to have conversations across different languages and different contexts while enabling us to communicate better. And that’s a very specific example, based just on what we’re doing right now.

Ganesh: Awesome. I have a personal question for you. What is one personal practice that you do that keeps you at the top of your game?

Beena: Reading. I’m always in the middle of three to five different books at a time. And as you can see behind me, I read real physical books; it helps me keep track of what I’m reading. I also balance between fiction and nonfiction, and between inspiring and light. So I’m always reading different books with different perspectives, and I think that really helps me bring different perspectives to my game.

Ganesh: Give me a book recommendation.

Beena: There are so many; it depends on what you like. I love reading crime fiction and thrillers. I also like reading biographies. I just finished Satya Nadella’s book, and I absolutely loved it. It was one of those nonfiction books that usually take a long time, but I couldn’t put it down. So, Satya Nadella’s book. Go read that.

Ganesh: When he launched it, I was there in Redmond, near Seattle. There was a CIO or CEO event, and I was part of the M12 portfolio companies. So I have an autographed copy of Hit Refresh from Satya Nadella.

Beena: Now I’m jealous, Ganesh.

Ganesh: Oh, that’s awesome. Thank you for that. How can the viewers and listeners get in touch with you on the internet?

Beena: I am on most social media. If you Google my name, you’ll find me; as far as I know, I’m the only Beena Ammanath in the world, so you won’t get any duplicate results. Connect with me on Twitter and LinkedIn. I would love to engage with your audience.

Ganesh: That is awesome. Beena, thank you so much for taking the time today. This was a fascinating conversation. I really enjoyed it.

Beena: Ganesh, thank you so much for having me, and good luck with everything that you’re doing.
