(Oslo 13/06) Trustworthy AI — How can Trustworthy AI give you a competitive advantage? Free event
The event offers various perspectives on Trustworthy AI and how it can be used to create a business advantage, increase efficiency and grow your business.
Hosted by IBM, PEDAB, and Nordic Edge. Click the link below to register and learn more:
About this event
How can trustworthy AI provide your company with a competitive advantage? And what are the risks of not leveraging AI due to the lack of trust?
Interest in artificial intelligence has grown rapidly, demanding new methods and new ways of thinking.
This is a critical time: keeping up with the change is essential if you want to maintain a competitive advantage in the market.
During this event you will gain a better understanding of what Trustworthy AI is and how it can help you improve your business, become more efficient and, ultimately, make more money.
Programme
The programme provides various perspectives on Trustworthy AI and how it can be used to create a business advantage, increase efficiency and grow your business.
14:00–14:15: Arrival and mingling
14:15–14:25: Welcome and introduction by Nordic Edge, Pedab and IBM
14:25–14:45: Alex Moltzau from NORA sets the stage
14:45–15:00: “Assurance of AI-enabled systems” by Christian Agrell, Principal Scientist at DNV Group Research and Development
15:00–15:15: Presentation (tba)
15:15–15:30: “Gaining a competitive advantage through governed AI” by Hans Petter Dalen, AI Governance Leader for EMEA at IBM
15:30–15:40: Break
15:40–16:30: Panel discussion and Q&A from the audience
- Moderator: Abbey Lin, founding member of Oslo.ai
- Pål Hetland, Bureau Chief at Pronto AI
- Lars Rinnan, CEO of Amesto Nextbridge
- Michael Velle-George, Head of Customer Analytics at Fremtind
- Christopher Wilson, Researcher at the University of Oslo
- Alexandra Kleinitz Schultz, Adviser at Digdir
16:30–19:00: Afterwork with food and drinks
The event is free but requires registration HERE.
What is Trustworthy AI?
Trustworthy AI is a way of thinking about artificial intelligence that aims to address the shortcomings of an earlier approach, Explainable AI. Explainable AI was developed to create machine learning models that can explain how they arrive at their decisions, but it has been criticized for being too technical and for not considering the impact of AI on society.
The problem with Explainable AI is that it assumes transparency and interpretability are always good things, which is not the case. Being too transparent about how a system works can make it easier for people to exploit or copy it. Conversely, a completely opaque system can erode trust and even endanger human life.
Trustworthy AI aims to take a more holistic approach to these issues, considering not only technical factors but also the impact of AI on society as a whole. By doing so, it aims to create AI that is more ethical and beneficial for everyone.
The EU has defined seven key requirements for Trustworthy AI:
- Human Agency and Oversight
- Technical Robustness & Safety
- Privacy & Data Governance
- Transparency
- Societal & Environmental Well-being
- Accountability
- Diversity, Non-discrimination & Fairness