Responsible AI at Accenture: In Conversation with Marisa Tricarico

AI4ALL Team · Published in AI4ALL · 8 min read · Apr 13, 2021

Accenture’s partnership with AI4ALL gives emerging leaders exposure to Responsible AI in practice.

A Washington, DC high school classroom using AI4ALL’s Open Learning curriculum. Conversations with Accenture’s Responsible AI team helped to inform aspects of the Open Learning curriculum. (Photo credit: Jordan Budisantoso, 2019)

The field of AI is changing rapidly, making the need for responsible AI greater than ever. While only 18% of data science students reported learning about ethics in a recent industry survey, examples of AI products with unintended negative consequences continue to grow. Marisa Tricarico, the North America Practice Lead for Responsible AI at Accenture, has a unique perspective on the rapid expansion of this field, as she works with a growing roster of Accenture clients as they develop and deploy AI. Her work, and Accenture's, also intersects with AI4ALL's mission to train the next generation of responsible AI leaders.

As an AI4ALL partner, Accenture is not only a financial supporter of AI4ALL, but also provides key support in a variety of hands-on ways. When AI4ALL assembled a new AI and ethics curriculum in early 2020, we sought input from Accenture and others who are helping to shape the field. Marisa and others working in responsible AI at Accenture advised AI4ALL before we developed the curriculum, providing insights into how corporations that use or create AI are thinking about AI ethics. The discussion ranged from how Accenture sees a career in AI ethics changing over the next 3–5 years, to what job titles are related to responsible AI, to which courses, majors, and other pathways could lead to an AI ethics career. Concepts from this discussion are incorporated into all AI4ALL programs, especially the stand-alone high school curriculum AI & Ethics Byte of AI, which gives students the opportunity to role-play as an AI company CEO thinking through the ethical implications of their products.

Since AI4ALL and Accenture’s relationship began in 2018, a number of Accenture employees have volunteered at AI4ALL programs. Volunteers have provided mentorship, given feedback to alumni on their resumes, and served as guest speakers at AI4ALL Summer Programs nationwide, talking about their unique paths, how they use AI in their roles, and giving insight into the different types of careers that intersect with AI.

Read on to learn more about Marisa’s role as Responsible AI Lead at Accenture, how her extensive experience in the life sciences gives her a unique perspective on responsible AI, and what career and life advice has shaped her.

As told to Nicole Halmi of AI4ALL by Marisa Tricarico.

AI4ALL: Could you tell us about what you do in your current role at Accenture?

Marisa Tricarico: I’m the North America Practice Lead for Responsible AI at Accenture. I work with our Global Lead on how the United States’ responsible AI practice is developing and how Accenture can help our clients mature their responsible and ethical use of AI and data. We focus on guiding our clients to scale their use of AI more safely and to build a culture of confidence within their organizations. This involves looking at technical aspects, like the type of data being used and the logic inside the models, as well as operational and organizational aspects, like the larger strategic goal of the use of AI and its governance.

Earlier in your career, you worked in life sciences and healthcare. How does that intersect with your current work in responsible AI?

I began my career in academic neuroscience and then later pharmaceutical strategy consulting. I’ve always been a healthcare and life sciences person, but over time, clients needed more help thinking through how to make data-driven, strategic decisions. That led to my involvement in more analytics strategy projects for our clients. My involvement in analytics strategy eventually led me to advising our clients on the strategic use of AI. With the growing use of AI comes the question of how to use AI in a responsible and ethical way, considering the downstream effects of your AI system.

You studied cognitive neuroscience in undergrad and then pursued an MBA. Can you talk about your academic journey and what got you interested in cognitive neuroscience?

I was always interested in the biological sciences, but especially neuroscience. Within neuroscience, I was especially interested in plasticity, which is really another way to talk about learning or changing a system so that it can learn. In college, there were not many ways for me to focus specifically on plasticity. At the time, the only way for me to get exposure to those concepts was to join some PhD classes and to pursue lab work, so I did both. I originally planned to apply for an MD/PhD program related to the lab work I was doing. In preparation for that, I did a lot of informational interviews, which I would strongly encourage for anyone who is looking to understand what career path they want to take. During my informational interview process, I discovered that I really liked interacting with people and that I wanted to do something more on the business side of healthcare and life sciences. That realization drove me to stop my MD/PhD application process and start learning about ways to break into strategy and healthcare consulting.

Was there anything when you were growing up that made you particularly interested in the biological sciences? Or was it an interest that emerged over time naturally?

My favorite subjects in school were always biology and English. I’d like to point out a false dichotomy that is often repeated: the idea that people are either good at science and math, or they’re good at English and the humanities. I think we do a huge disservice to young people by bifurcating those interests. You can be good at various things that don’t “go together” in other people’s minds.

What are some of the things that people should be doing now to create a positive future for AI?

No matter where it’s being discussed or employed, the responsible use of AI is going to require a multidisciplinary group of people.

You need all sorts of people with all sorts of skills and backgrounds at the table to ensure that you’re really identifying and mitigating unintended consequences.

One example of a skillset that will be broadly useful is having an understanding of basic statistics — for example, having a general understanding of statistical significance or of what skewed data means. Understanding these ideas will help you join the conversation on what makes a good AI model, what you’re trying to measure, and how good the model is at measuring that.
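As a toy illustration of the statistical intuition described above (this sketch is mine, not from the interview, and the loan-approval scenario is hypothetical): when class labels are heavily skewed, a model's raw accuracy can look excellent while the model fails completely on the minority class — exactly the kind of insight that lets non-specialists question what a "good" model means.

```python
from collections import Counter

# Hypothetical skewed dataset: 95% of loan applications are approved.
# A naive "model" that always predicts the majority class looks accurate,
# even though it never identifies a single denial.
labels = ["approved"] * 95 + ["denied"] * 5
predictions = ["approved"] * 100  # always predict the majority class

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
print(f"Accuracy: {accuracy:.0%}")  # 95% -- yet every 'denied' case is missed

# Recall on the minority class tells the real story.
denied_recall = sum(
    p == y for p, y in zip(predictions, labels) if y == "denied"
) / Counter(labels)["denied"]
print(f"Recall on 'denied': {denied_recall:.0%}")  # 0%
```

Knowing to ask "accurate for whom, on which cases?" rather than accepting a single headline number is the kind of conversation-joining skill the passage points to.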

Are you seeing any structures emerging that are allowing more people to participate in deciding how AI should be used?

Yes. That’s a lot of what we help our clients with at Accenture. A lot of times, the client has an idea of who needs to be at the table, but we help them think through that decision-making process. By having a certain “roundness” to the people at your table, you will be better at avoiding unintended consequences, because you have more sources of insight into how things could go right or wrong and what to do to fix them.

This isn’t about scaring people; this is about making AI better for everyone by making AI everyone’s business.

Who were your role models growing up? And do you have any role models now?

I have academic and career role models, and I have personal role models who help me think through what kind of human being I want to be.

My dad is a personal role model of mine. Over my life, he’s taught me about what it means to be a kind person in so many ways. For example, he is a nurse anesthetist. When I was thinking about going to medical school, he showed me what compassionate care looks like, how to be a good nurse, and how to show your patients kindness, which really does improve their care and outcomes.

An important academic role model for me was the late Dr. Rahul Desikan. He was a professor at UC San Diego and UC San Francisco, and he was my mentor when I worked at Mass General Hospital and Boston University. He was not only brilliant but also a very kind, humble, and funny person. We perhaps initially got along because he had pursued degrees in both neuroscience and the humanities, going against the stereotype that you can only be good at quantitative or qualitative subjects and not both. He saw my interest in plasticity and showed me how I could start taking PhD classes in undergrad. That kind of early mentorship really did a lot for me. He went on to make really important discoveries in neuroanatomy, Alzheimer’s disease, and amyotrophic lateral sclerosis (ALS), until, tragically, he succumbed to ALS himself. He should be celebrated for his brilliance and mentorship.

What are some of the most exciting moments in your work?

The most exciting is when I’m having calls with larger Accenture teams all over the globe who are interested in responsible AI. They’re so passionate and eager to get involved and are trying to understand where to start with responsible AI. I’m energized by the enthusiasm that they bring and their willingness to learn and roll up their sleeves.

What advice do you have for young people who are interested in AI who might just be getting started in their academic or career journey?

I would recommend that they read about AI to start to demystify the jargon and to understand key questions like: What are unintended consequences? What do we mean when we talk about scaling a system? What do we mean when we talk about having multidisciplinary stakeholders at the table?

There are so many books out there now about AI that it’s easier to find one that suits your specific interests. A great piece of advice I got from my third-grade teacher was that if you’re reading for pleasure, never force yourself to finish a book you’re not enjoying. This idea plays into neuroscience. If you keep forcing yourself to read stuff you’re not interested in, you’ll inadvertently condition your brain to dislike reading. Instead, read books you enjoy, and teach your brain that it likes reading.

About Marisa

Marisa Tricarico is the North America Lead of Accenture’s Responsible AI practice and a member of Accenture Applied Intelligence Strategy. She has spent her career working with cutting-edge biotech, pharma, and AI companies.

Marisa specializes in helping C-level leaders set their vision, strategies, and expansion plans for AI. She also has deep expertise identifying and vetting commercial opportunities, as well as fostering innovation at large institutions. She has worked with both startups and global companies spanning pharmaceuticals, genomics, molecular diagnostics, hospitals, and digital health.

Prior to Accenture, Marisa worked as a hedge fund analyst focused on genomics and as an early-stage venture capital investor focused on university technologies; she began her career in academic neuroscience research at Massachusetts General Hospital.

Marisa graduated from Boston University with a BA in Neuroscience and holds dual MBAs from NYU and HEC Paris.


AI4ALL is a US nonprofit working to increase diversity and inclusion in artificial intelligence.