When AI Imitates Life
Receiving exciting professional news is a gratifying way to end 2018, except when the accolades are given to someone else! With a name like Joe Rogers, I expect lots of doppelgangers online, but I didn’t anticipate another Joseph Rogers being tagged by LinkedIn’s AI software in connection with my company, WorkDone. Imagine my surprise recently seeing my interview in Forbes with this fellow’s picture instead of mine:
My new friend Joseph Rogers (we’re now connected on LI) kindly alerted me to the mistake, which got me thinking about how LinkedIn’s algorithms could confuse ‘Joseph T. Rogers’ with ‘Joseph Rogers’ — we don’t have the exact same name, and we certainly don’t look alike. I chuckled at the irony: here I am, the CEO of an AI-based software company, having my photo mistaken for someone else’s by an artificial intelligence algorithm. How meta!
Finding actionable solutions to these blunders is crucial because poorly tested implementations of AI can be tragic, as in the case of facial recognition used in law enforcement. While LinkedIn’s mix-up was certainly harmless (and funny), it is a reminder that the AI industry needs to agree on a code of ethics that serves a diverse group of stakeholders — not just the people who create the technology.
As AI technologists, it’s our responsibility not only to create tech that does no harm but to build tech that actually moves society forward and improves people’s lives — ALL people’s lives.