Noelle Silver Discussed Building Responsible Applied AI At Scale
How can enterprises build responsible AI models that serve mass audiences?
Noelle Silver, an award-winning technologist and currently an AI executive at IBM, delivered an inspirational keynote at the Worldwide AI Webinar. Read on to see what she had to say about building responsible applied AI at scale and empowering inclusion with technology.
Check out her full presentation on our website and YouTube channel.
The heart of responsible AI
Noelle summed up the essence of responsible AI quite nicely: being "someone who cares about responsible development of technology solutions," a responsible engineer. Responsible AI means asking more questions. She emphasized the importance of asking why a model came to life, where its data came from, and how it was trained.
Now that powerful applied AI models let us leverage computer vision, real-time translation, decision-making, and knowledge mining to create AI-enabled and AI-powered tools, without having to know the language they were written in, it is even more crucial to ask questions:
- How are we leveraging AI?
- How did we get these results?
- Whose data are we using?
- Do we have permission to use that data?
- Does it represent the audience that consumers might presume it represents?
4 principles for building responsible AI models
Having inclusive engineering teams:
As Noelle explained, inclusive doesn’t just mean ethnicity. Rather, it’s about creating a symphony of people with diverse experiences and neurodiversity.
Noelle stressed that multi-modality is where the power is. Combining multiple sensory and communicative modes to build an inclusive solution that can make a huge impact should be the ultimate goal.
Noelle also encouraged the audience not to treat a computer vision model or a natural language model as the only entry point into a solution, but to think multimodally.
Ask the question:
“How do I combine all of these models together to create a richness in the solution that many people have never seen before?”
Noelle said that as a data scientist, she focused on responsibility. She took it upon herself to make sure that when she built software scaling to hundreds of millions of people, her models actually served that many people.
As she has observed, developers often build solutions that are siloed to a single persona or niche, which can exclude or harm certain groups of people. Thus, our responsibility is to think bigger and set a larger intention when creating software.
Learn by doing
In the AI space, there are classically trained data scientists, mathematicians, and statisticians, and there are enthusiasts who didn’t receive that kind of training. Believing in the power of learning by doing, Noelle Silver encouraged anyone working in the field, or wanting to work in AI, to get acquainted with artificial intelligence by building the technology themselves.
All in all, the requirement for trust in AI systems is not a fad; rather, it is the distinguishing characteristic that will determine who is fit for business. Scaling responsible AI should be a priority for enterprises looking to thrive.