Modern AI Challenges — Balancing Technical and Ethical Imperatives
As we enter the summer season, it’s a good time to reflect on recent “hot topics” (Dad Joke, as my kids would say).
At the risk of stating the obvious, we are deep enough into 2023 to clearly recognize this year as a global landmark in Artificial Intelligence (AI). It is a turning point that plants a historical stake in the ground — the kind that permanently and significantly affects the entire world in ways well beyond technology alone. The new “AI Age” ripples into business, education, entertainment, and much more. Impacts on everyday life, society at large, and (perhaps ironically) humanity will soon be irreversible. I say this with a positive view, but also with concern for safeguards, as recognized in:
- The AI “Open Letter” of March 2023, signed by Elon Musk, Steve Wozniak, and ~1,100 others (here and here).
- Mass feuds that have already started (e.g., a slander lawsuit, a cinema entertainment protest).
Rewinding a Bit
The timing is a bit eerie. Shortly before OpenAI made its huge public splash last November with ChatGPT in the field of Generative AI, I started co-authoring a modernized data science and engineering book for a major publisher. A cornerstone of the book is based on a methodology proven for several years in various small to large Fortune 100/1000 organizations. It ultimately debuted as “SEAL” (here and here), and was presented at DataCon (www.DataConLA.com).
SEAL stands for the Scalable Enterprise Analytics Lifecycle. In no way does SEAL directly compare or compete with Generative AI programs like ChatGPT (by OpenAI), Bard AI (by Google), Bing AI (by Microsoft), Chatsonic/Botsonic (by Writesonic), Pi (by Inflection AI), or others (learn more here). SEAL provides an agile framework to swiftly produce AI/ML and analytic solutions across multiple data science disciplines — Generative AI being just one field that has exploded in popularity over the past several months.
Charging Ahead — Holistically
One of SEAL’s tenets regarding AI acceleration is to guide how AIs can feed each other: often with humans in the loop, but sometimes without — via what we call “AI2AI” hyper-automation patterns. That can be a bit scary. As a counterbalance to ensure AI power is not wildly unleashed, SEAL includes lean “By Design” checkpoints to ensure Quality by Design, Security by Design, Privacy by Design, Interoperability by Design, Performance/Scalability by Design, and more… including, you guessed it… “Ethics by Design.” This latter addition provides a way to squarely address the concerns raised by the now-infamous AI Open Letter.
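Since SEAL’s internal mechanics are beyond the scope of this post, here is a minimal, hypothetical sketch in Python of what an AI2AI chain gated by “By Design” checkpoints could look like. Every name and check below is an illustrative placeholder of the general pattern, not a SEAL or vendor API.

```python
# A minimal, hypothetical sketch of an "AI2AI" chain gated by "By Design" checkpoints.
# None of these names (CheckpointError, run_chain, the example gates) are SEAL APIs;
# they are illustrative placeholders only.

from typing import Callable, List

class CheckpointError(Exception):
    """Raised when a 'By Design' gate rejects an artifact."""

def quality_by_design(artifact: str) -> None:
    # Placeholder quality gate: require non-trivial output before it feeds the next AI.
    if len(artifact.strip()) < 10:
        raise CheckpointError("Quality by Design: artifact too thin")

def ethics_by_design(artifact: str) -> None:
    # Placeholder ethics gate: block obviously disallowed content.
    banned = {"deceptive", "private data"}
    if any(term in artifact.lower() for term in banned):
        raise CheckpointError("Ethics by Design: artifact rejected")

def run_chain(stages: List[Callable[[str], str]],
              gates: List[Callable[[str], None]],
              prompt: str) -> str:
    """Feed each AI stage's output to the next, but only after every gate passes."""
    artifact = prompt
    for stage in stages:
        artifact = stage(artifact)
        for gate in gates:
            gate(artifact)  # a human-in-the-loop review could replace or augment any gate
    return artifact

# Two toy "AI" stages standing in for real model calls (e.g., LLM API requests).
summarizer = lambda text: f"summary of: {text}"
expander = lambda text: f"expanded plan based on ({text})"

print(run_chain([summarizer, expander],
                [quality_by_design, ethics_by_design],
                "Draft a rollout plan for the new analytics platform"))
```

The design point is simply that gates run between stages, so nothing flows AI-to-AI unchecked; in practice any gate could be a human review step rather than code.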
Let me describe this from a more holistic perspective, with a tinge of personal introspection. As a CTO and data scientist/engineer, I see a technical imperative to address the AI topic head-on. It’s human nature to innovate more and more as we conjure and solve challenges across progressive levels of abstraction (here). Now the flip side. As a parent, husband, son, brother, friend to many, and participant in society, I also see a moral imperative to ensure AI, like any resource, is not abused. The trick becomes a balancing act of risk versus reward — which is purposefully addressed in fields such as “Ethical AI” (see Harvard Business Review here) and “Ethics by Design” as manifested in SEAL (above) and various articles (e.g. Salesforce blog here, and a European Commission paper here).
Balancing Act
Examining issues and angles around this balance can fill a small library. Let’s narrow things down for the remainder of this short post. One of the hottest topics in social gatherings today is how AI will impact the modern workforce — and what fields students should focus on to forge a viable career path that AI will not erode. Along these lines, here are some starter thoughts:
- General Recommendation:
Learn how to leverage Generative AI such as ChatGPT. 90% of potential employers reportedly want people with this experience (here). Also be aware of the wider AI, Machine Learning (ML), and overall Analytics spectrum — learn to use different algorithms and techniques when applicable. Some classic examples include predictive and prescriptive analytics (often used to optimize business goals based on Key Performance Indicators (KPIs) and much more, through the use of Supervised Learning), anomaly detection (via Unsupervised Learning), image/audio/video recognition (usually built on Deep Learning, a family of neural-network techniques that spans both supervised and unsupervised approaches), and Reinforcement Learning (used for sequential decision-making such as robotics and game playing, and more recently to fine-tune conversational models).
One tricky thing about the AI/ML arena is that things overlap. For example, a recommendation engine is classically built on Supervised Learning models, but can be powered by Unsupervised algorithms as well. It can take a while to learn the diverse statistical opportunities in this field. The good news is that you can achieve substantial results by initially just sticking to classic use cases. This Gartner article does a pretty good job of describing the landscape (here), and these show some of the fuzziness (here and here). For a concrete taste of the supervised/unsupervised split, see the short sketch at the end of this section.
Now that we see how broad and deep the AI space actually is, using a Generative AI tool like ChatGPT all of a sudden seems pretty simple, eh? It’s certainly a good place to begin if you’re early in your AI journey. That said, remember there’s a big difference between using AI and creating it. But all of it is fun :)
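Speaking of creating it: for readers who want a first hands-on taste of the supervised/unsupervised distinction above, here is a minimal sketch. It assumes Python with scikit-learn, which are my choices for illustration rather than anything this post or SEAL prescribes.

```python
# A minimal sketch (assuming Python with scikit-learn installed) contrasting the two
# classic learning modes discussed above, applied to the same dataset.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised Learning: labels guide the model (the basis of most predictive analytics).
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised Learning: no labels at all; the model discovers structure on its own
# (the same family of techniques behind clustering-based anomaly detection).
km = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X)
print("unsupervised cluster sizes:", [int((km.labels_ == i).sum()) for i in range(3)])
```

Note how the same feature matrix serves both modes; a nearest-neighbor pass over those same features would be the unsupervised flavor of the recommendation engine mentioned earlier.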
- Follow-up Article/Book (Stay Tuned):
There are some loose ends in this post. For example, I intend to write follow-up content with some specific examples around the “By Design” principles incorporated into SEAL. In addition to certain meritorious shift-left principles like Quality by Design, Security by Design, and so forth (discussed above), seeing some creative points around challenging goals like “Interoperability by Design” and “Ethics by Design” will help people better understand the “Imperatives” part of this article’s title. Much of that writing is going into a book I’m working on for a major publisher — and understandably they don’t want too much in blog form or scattered around presentation forums. But I expect some of the fun foundational 🧱 stuff to end up in another post.
- Personal Prediction:
Just as we’ve seen Millennials and Gen-Zers become digital natives, we are now going to see a new generation of “AI natives.” While the term “native” indicates a repetitive pattern, the AI landmark discussed above introduces some nuances. Digital natives were raised under the typical parental dilemma of determining how much is too much “screen time,” with inevitable impacts on social skills, recreation options, and so forth. As modern as digital natives are, however, droves of them already perceive many Generative AI use cases as toxic. Case in point: Disney’s recent debut of a major new TV series, “Secret Invasion,” where the entire opening sequence was created with Generative AI. A key article by Decrypt (here) explains how both fans and creators are horrified. Even more unsettling than the use of the word “horrified,” the article shows how people on both sides of the fence are protesting in consensus: “fans and creators” alike. In a way, it’s good to see a unified front — but the ultimate outcome will hinge on desensitization. That is, the children of these modern digital natives will be “AI natives” — and they are unlikely to blink at the distinction between human and AI generated content. Most of them will likely see both as positive art forms. At most, it might take another couple of generations for something this drastic to be assimilated into the norm of society. This delay could stem from AI natives’ parents instilling biases that carry into the next generation, albeit in an arguably diluted manner. The eventual outcome, imho, will be acceptance. I suppose that picture was painted as soon as I uttered the term “assimilation,” reminiscent of Star Trek’s “Borg” culture (someone agrees here).
All said, AI that balances utility with ethics can yield a healthy super-intelligence: one with bright horizons a la Star Trek’s well-loved, sentient android, “Data” — instead of the doom imposed by The Borg’s colossal, dark gray cubes. At the risk of overusing Hollywood references, another relevant credo is, “With great power comes great responsibility.”
Now for a procession of “so far” considerations. It is tempting to wield powers like AI without regulation — but so far we are blessed to have industry magnates who recognize the risks (hence the 2023 AI Open Letter). So far, humans have a theoretical upper hand in the AI ecosystem we’ve “innovated” our way into — where humans are the creators of AI, and AI has become a creator of content consumed by humans (and other AI). So far, humans have governance — perhaps best epitomized by the on-off switch. But it is already on the edge of tolerance (refer to the “mass feuds” bullet above). So far, humans can say yes, say no, and think intuitively based on a blend of experience, free will, and imagination. Contrast that with AI, which acts mainly on lessons learned — so far. The irony is that machines are arguably better at the lessons-learned approach. Our weakness here is evidenced when (bad) history repeats itself. So let’s learn from the examples our own AI sets for us, and use them to generate… a better future.
Extra Insight and Actualization
That would be a good place to end this article. So please consider this final passage as a P.S. of closing thoughts. Let’s start by quoting a favorite author (and pilot), Richard Bach.
“We create our own reality” is one common theme interpreted from several of Mr. Bach’s books (here and here) written in the 1970s and 1980s, well before the mainstream rise of AI and Virtual Reality. “State your limitations and sure enough they are yours” is the key challenge Bach’s character, Jonathan Livingston Seagull, learned to overcome by performing physical and spiritual feats far beyond the physical “limitations” of his own species.
By continually raising the bar of AI, we are in a way helping to evolve an “AI species”… ultimately and arguably to the level of sentience. To keep this article diverse and inclusive, let’s not even discuss what is above that. I will however add one more tidbit regarding Richard Bach. Ultimately, he refined the interpreted expression “we create our own reality” into something more surreal: “we create our own appearances… and as we change our thought, we will see the appearances around us change” (here). It’s fascinating how similar this sounds to Virtual Reality and Augmented Reality — although I don’t think that was Bach’s intention. Perhaps a blend of VR and AI will be the next Big Thing :)
Deep Thoughts about the Casual Cover Image
There are a few intentional aspects behind this article’s cover image (“Balancing AI Power with Ethics…”) that might seem odd at first glance:
- The Ethics Scale in the right pan is itself a scale — a scale within a scale. It would certainly be simpler to show just a “✅” and “❌” in the right pan. The nested scales show how ethical AI is (unfortunately) more complex than that. Along that same thinking, the positive check mark and the negative x in the picture are the same color: green. It would be more conspicuous to have a red “❌” in sharper contrast with the green “✅”. But life isn’t always that simple — rights and wrongs are frequently not obvious at a glance.
- The AI Brain in the left pan is a realistic photo-style image, vs the Ethics image in the right pan which is cartoonish. This intentional inconsistency depicts how AI is becoming so mainstream and realistic that abstract concepts such as Ethics are often viewed as potential fiction — or at least an optional component. Yes, this is dangerous. But there is a silver lining in the AI Open Letter of March 2023 (discussed above). While it was quickly backshelved for a variety of reasons (out of scope for this article), that letter reflects general industry desire for ethical diligence. Now think of how GDPR policies and technology caught up with the Data Privacy rights of individuals. In fact, Privacy by Design birthed the whole “By Design” list of lean practices mentioned in this article — from Quality by Design and Interoperability by Design to Ethics by Design, which is essentially the fulcrum of our double-pan balance (cover image). Hopefully, tenets such as Ethics by Design will catch up with hardcore AI technology, just as Privacy by Design has done for Data Privacy.
- There are a couple additional nuances regarding the cover image. Instead of broadcasting spoilers for them, I welcome reader comments and questions — about anything (beyond the image). Feel free to reach out directly if you prefer — see Contact Info below. I’m sure we’d have a lot to learn from each other 🤝
About the Author
Jeffrey Bertman is a “data everything” specialist with major success stories in small to large Fortune 100/1000 and government organizations, including Warner Brothers, Verizon, Comcast, GEICO, Airlines, Cigna, DoD, DoJ, FDA, and Gov Intel. He serves as CTO and lead data scientist/engineer for Dfuse Technologies, a world-class consultancy with a data boutique and a nationwide footprint in the private and public sectors, e.g., Apple, Wells Fargo, Amgen, CVS Health, Deloitte, and numerous government agencies.
From strategy through operations and mentoring, Mr. Bertman leverages leading edge technologies to deliver high yield, enduring success stories that actualize real world improvement in market share, revenue, profit, quality, security, efficiencies, effectiveness, and more.
Mr. Bertman is a popular speaker on a wide range of technical and management topics. Business disciplines include accounting/financials, marketing, sales, social media, entertainment, digital transformation, legal, telecom, health care, e-collaboration, manufacturing, distribution, and inventory optimization.
Contact Info
Jeff Bertman • mobile +1 818–321–3111 • Jeff.TechBreeze@gmail.com • www.linkedin.com/in/jeffbertman • www.medium.com/@techbreeze • WhatsApp, MS Teams, Slack, Discord, Insta, FB Messenger, Zoom, etc available upon request • Dfuse Technologies (www.dfusetech.com) work email and other info also available.