A Wary Eye on AI and Why…

Unstitution · Nov 26, 2023 · Updated May 2024



We’re not technology experts. This article is about proceeding mindfully with eyes wide…as we grapple with far-reaching possibilities and implications…stepping up our natural human creative capacity and potential using AI as a force-for-good.

All the while…with eyes wide and widening our lens…

AI technology is hurtling forward and accelerating deep shift…while we’re still mired in deep sh*t…navigating through deep SH*fT. If this were science fiction, it would be a fascinating cliffhanger…

While recognizing its potentially positive, enabling role, we’re wary of its real and looming dark underbelly — the negative human impacts and ignored externalities. It’s so important to consider AI from multiple, transdisciplinary perspectives. We cannot rely on mainstream sources and the algorithms that feed us, driven by the questions we ask and the consumerist and commoditizing patterns that have anaesthetized us for decades.

We all have much to learn and unlearn — a different kind of learning from that which our society has pervasively force-fed for too long.

The Industrial Age brought advances in many areas. Sadly, for us all, it marginalized, subordinated and left behind much Indigenous wisdom, deep-rooted cultural traditions, practices, ways of knowing and common[s] sense.

Reductionist, extractive, mechanistic and linear ways have become pervasive drivers of advancement and innovation, competition, domination and oppression. These have become so embedded that people often don’t see what’s missing, what’s grossly overlooked, and the slippery slope that perpetuates injustice, degrades our living planet and leads civilization into further decline.

AI will invariably reflect the thinking, biases and blind spots of the pervading paradigm and the people who create and program it.

The questions we pose say much about us as well. The clever, articulate and often sterile answers that AI spits out…are revealing.

Of course, the layered fullness and richness of human capabilities, stories, narrative and nuance are mostly absent. [Though no doubt, with every passing month, AI is growing more clever and sophisticated.] Answers offered in this way are never all-encompassing. Reliance on AI — ChatGPT and other Large Language Models (LLMs) — may also reinforce the consumer age of short attention spans, reliant on fast food information.

Relationships of mutual learning cannot be understood through the boxes and models that point at things. The thing(s) we’re pointing at are not the real, living, dynamic beingness.

We appreciate Nora Bateson’s work — imprinting “transcontextual” into our lexicon. When we re-contextualize, the always-changing richness of living experience is naturally relational and messy. Many of us have been carrying this around, sometimes clumsily articulating what Nora beautifully conveys. A transcontextual lens helps us understand our lives and world(s) — rich with warm data.

“The mistakes of reductionism, the mistakes of individualism, and the mistakes of pre-determined linear outcomes of healing all validated a slogan of ‘better, faster, cheaper.’ Other realms of crisis are also currently rubbing against the issue of solutions that are perpetuating the problems. The need now is for more nuanced and contextually responsive ways to describe the issues so that other pathways of inquiry aside from linear solutioning move into play and begin to take form.” ~ Nora Bateson

If we idealize and defer to AI, what happens to our abilities to think and sense critically, to develop our full capabilities balancing whole living experience (left and right brain) ways of thinking, perceiving, intuiting and engaging? Surely the need to bridge that metaphorical eighteen-inch journey between our heads and hearts and one another — and thereby discover holistic, integrative and creative alternatives — must take precedence.

Dr. Robert Epstein’s article, The empty brain, unpacks yet more reasons why the human brain cannot be compared or equated with a computer. Its subtitle says it plainly: your brain does not process information, retrieve knowledge or store memories. In short, your brain is not a computer.

Proof that there is no proof, and yet another indicator of ongoing pervasive efforts to mechanize what cannot (and must not) be mechanized. It aligns with what so many of us intuitively know and value (and want to believe): ways of knowing that are not bound by the reductionist, way-out-of-balance, left-brain-leaning idealization of the Industrial Age.

“It is not that I’m so smart. But I stay with the questions much longer.” ~ Albert Einstein

These questions are integrally connected to overlapping systemic societal issues: the everyday ways we learn, relate, lead and participate — live and work.

Sahana Chattopadhyay’s excellent series opens our eyes to the dangers that easily slip by unnoticed:

“We are currently faced with ‘the tyranny of technology,’ where technocracy becomes the norm. One tool and one invention at a time, technology insidiously creeps up on us…”

She peels back more layers in this article, The Dangers of a Single Story — Part IV: Belonging and Generative AI in an Age of Separation.

Sahana’s writing strikes at the heart of several interconnected and entangled systemic issues — patterns that have been accepted as normal in our industrialized society. These patterns reinforce the story of separation. They erode what it means — what it could mean — to be (fully) human. Leaning ever further on technology to fill the space that disconnectedness and toxic polarization breed is a dangerous trajectory.

More of these issues come to light in an MIT Technology Review series investigating how Artificial intelligence is creating a new colonial world order. Reporter Karen Hao, introducing the series, concludes,

“Together, the stories reveal how AI is impoverishing the communities and countries that don’t have a say in its development — the same communities and countries already impoverished by former colonial empires. They also suggest how AI could be so much more — a way for the historically dispossessed to reassert their culture, their voice, and their right to determine their own future.

That is ultimately the aim of this series: to broaden the view of AI’s impact on society so as to begin to figure out how things could be different. It’s not possible to talk about “AI for everyone” (Google’s rhetoric), “responsible AI” (Facebook’s rhetoric), or “broadly distribut[ing]” its benefits (OpenAI’s rhetoric) without honestly acknowledging and confronting the obstacles in the way.”

“My hope is that this series can provide a prompt for what ‘decolonial AI’ might look like — and an invitation, because there’s so much more to explore.”

Eshwar Sundaresan raises more questions and brings the inherent dangers of AI and ChatGPT into the spotlight, cautioning us to pay attention and calling for guardrails in his article: ChatGPT: Disruption or fantasy? He challenges,

“If the objective is to establish an economy that works for everyone, AI seems to be a very unlikely tool of choice.”

Eshwar warns of great existential and specific threats:

🔮 Wide-scale unemployment

🔮 Commoditized content

🔮 The threat to learning

🔮 Reality distortion

🔮 Systemic inequities

🔮 The threat hidden inside size

He expresses his concerns poignantly in these closing comments:

“…launch of ChatGPT is a watershed moment in human history. The airy souffle of AI we have experienced so far is giving way to the thick, creamy ice-cream that is the conversational bot. We have a narrow window of opportunity here to find, not a cork, but a regulated outlet for the genie’s bottle. We need to act before fundamentalistic capitalism dictates terms to society in yet another episode of the tail wagging the dog.”

“The idea that we must be fine with the depreciation of intelligence is shocking at best. The elephant is not finding substitutes for its trunk; the ants are not giving up teamwork; nor is the crocodile outsourcing the task of biting into prey. But to what extent are we willing to undervalue the very intelligence that defines us and even grants us supremacy?”

“A hundred years from now, somebody might look at this page of our history book and ask, ‘What were you thinking?’ Or maybe they would say, ‘Ah! Here’s where we began accomplishing what really mattered.’ For our collective sake, let us hope for the latter.”

More of us are questioning what we mean by intelligence (and its many forms), how it does or doesn’t (fully) define us (alone) and the illusion of supremacy above all life. Separating our own living organism beingness from nature has produced the wicked melange of crises that humanity faces.

Tim Leberecht’s article also delves into the critical thinking and ethics that AI cannot and will not simulate: ChatGPT Makes us Human: The AI chatbot’s limitations allow us to appreciate our own. In a subsequent article, We Are All Going To Die, Thanks To AI, Tim navigates the edges, holding the inherent blurriness and dancing with paradox. He observes what is observable, muses about what is possible, surrenders to the mystery of what is unknowable and beyond our control, and passionately embraces what matters in life — living life to the fullest…joyfully and sorrowfully!

Thomas Klaffke shares insights and curates wide-ranging perspectives on LinkedIn. His comment, and the quote he shares alongside it, ring bells:

“…signs that we’ll again NOT use this new technology to make our lives easier (i.e. less work, more play), but that we WILL rather use this new tech (as always) to make the economy more productive — i.e. making rich people richer — while upping the speed on the hamster wheel.”

“Modern societies are characterized by their mode of dynamic stabilization. This means that they can only reproduce their structure and maintain the institutional status quo by constantly achieving economic growth, technological acceleration and cultural innovation. This creates a ‘need for speed’ that requires individuals and organizations to constantly seek opportunities for rationalization and optimization. After all, it is us, the individuals, who need to grow, accelerate and innovate incessantly, i.e., to run faster and faster each year just to stay in place.”~ Hartmut Rosa

In his article, The carbon footprint of ChatGPT, Chris Pointon exposes the environmental impacts that are seldom referenced. This Forbes article about AI’s water consumption is yet another eye-opener. And this Informa-Dark Reading publication delves deeper still: Why Liquid Cooling Systems Threaten Data Center Security & Our Water Supply

“Data is not just an intangible asset that’s stored in the virtual ether of ‘the cloud.’ In reality, data, and data storage, is very much tied to the physical world. With the increasing introduction of artificial intelligence models, more data will be needed, and therefore stored in data centers.”

“Many operators recognize the strain traditional air-based cooling methods put on its finances and net-zero pledges, which is why some are opting for liquid cooling systems. Although liquid cooling is technically more energy efficient than air cooling, it can still negatively impact the environment — and opens other doors for potential outages.”

“When we say “liquid” cooling systems, we’re actually referring to water in most cases. Just like living beings, data needs water to survive — and, therefore, so do AI models, software, and countless other technologies that rely on data.”

Of course, there are also growing opportunities for AI applications and tools that work synergistically alongside human capabilities and needs.

How to focus and direct AI towards positive, enabling potential ought to be a higher-order priority…

Here’s just one cool example from TrendWatching, among many that demonstrate how AI is doing good: GPT-4 works as virtual pair of eyes for visually impaired people.

Given all the buzz and controversy about ChatGPT and the next generations of AI that are under warp speed development, these articles add depth and dimension, directing the flashlight at underrepresented issues.

The AI Dilemma sheds light on exponential implications with concerns about “releasing AI to the public — responsibly” and asking “What does responsibility look like?”

Hmm. A good, still-hanging question being echoed by many…

Many questions arise from pervasive societal governance and accountability issues and an economic model that fails to value the wellbeing and flourishing of people and planet above disconnected drivers of financial growth.

Humans are not machines, cannot be programmed like computers and therefore will never function in ways that (non-living) technology functions.

Isn’t this something to celebrate? After all, the existence of humanity must surely transcend transactional utilitarian value exchange as reflected on balance sheet valuations. Commoditizing the value of human life, of any living beings, has been veering humanity dangerously off course.

Or…is this something to mourn? Really? Have we, some of us (de facto) given up on humanity? Are we looking to and banking on AI to compensate for our inherent flaws? Some kind of techno hero that will save us from ourselves?

Who’s in charge? At what point will we awaken to the degenerating spiral of mechanized solutions that deepen the hole we’re in…dangerously hastening humanity’s decline?

For those who attribute consciousness to AI machines and computers, David Bentley Hart holds up a mirror in his probing article, The myth of machine consciousness makes Narcissus of us all:

“The danger is not that the functions of our machines might become more like us, but rather that we might be progressively reduced to functions in a machine we can no longer control. There was no mental agency in the lovely shadow that so captivated Narcissus, after all; but it destroyed him all the same.”

Have we — the many — been asleep at the wheel, anaesthetized consumers, apathetic and helpless, unwittingly forfeiting our agency as citizens to the few? (More on this farther down…)

Computers are not human. They are presumably programmed to simulate some mechanistic processing functions…but cannot represent the living human brain as a separate function isolated from life.

Humans design and program computers. Humans decide or choose, consciously or unconsciously, how to use them.

A degenerative learning cycle is revealed in this article: The AI feedback loop: Researchers warn of ‘model collapse’ as AI trains on AI-generated content. Author Carl Franzen explains,

“…as an AI training model is exposed to more AI-generated data, it performs worse over time, producing more errors in the responses and content it generates, and producing far less non-erroneous variety in its responses.”

Yikes!
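For readers curious how that degenerative loop plays out mechanically, here is a minimal, hypothetical sketch of ours (not the cited researchers’ experiment): a toy “model” that simply estimates token frequencies is retrained, generation after generation, on text sampled from its previous self. Tokens that happen not to be sampled drop to zero probability and can never return, so variety steadily collapses.

```python
# A toy, hypothetical illustration of "model collapse" (not the cited study's
# setup): each generation, a frequency "model" is re-fit on text sampled from
# the previous generation's model. Tokens that go unsampled get zero
# probability and can never come back, so diversity only shrinks.
import random
from collections import Counter

random.seed(42)

VOCAB_SIZE = 200      # distinct "tokens" in the original human data
CORPUS_SIZE = 500     # synthetic corpus sampled per generation

# Generation 0: a model trained on diverse "human" data — every token is possible.
probs = {token: 1 / VOCAB_SIZE for token in range(VOCAB_SIZE)}

for generation in range(10):
    # "AI-generated content": sample a finite corpus from the current model.
    tokens, weights = zip(*probs.items())
    corpus = random.choices(tokens, weights=weights, k=CORPUS_SIZE)

    # "Retraining on AI output": re-estimate token frequencies from that corpus.
    counts = Counter(corpus)
    probs = {token: count / CORPUS_SIZE for token, count in counts.items()}

    print(f"generation {generation}: {len(probs)} of {VOCAB_SIZE} tokens survive")
```

Run it and the printed vocabulary count shrinks (or at best holds steady) with each generation — a crude stand-in for the loss of “non-erroneous variety” Franzen describes.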

Turquoise Sound (TaoTeTurquoise) takes us on a whirlwind, whimsical ride, an exposé that alternates between terrifying and exhilarating. Buckle up: “The Doomsday Clock and A Power Too Great to Understand”

There are lots of either-or perspectives drawing lines, leaning heavily on the side of good or bad.

Charles Eisenstein also weighs in here, suggesting “Technology is not inherently good or bad. We need to teach it the right values.”

Author/journalist Federico Guerrini serves up a daily LinkedIn newsletter that stirs thoughts, highlighting the latest frontiers that might delight and/or terrify. Here’s a sample — Bitter-Sweet AI: From Protecting Biodiversity to Erasing Crucial Evidence.

In his article, AI in The Wild West: The Call for Virtuous Systems Over Regulation — A New Perspective on AI Responsibility: Building Virtuous Systems that Are Sustainable and Outshine Regulation, Cezary Gesikowski urges a balanced approach harnessing the virtuous potential of AI designed with ethics and sustainability consciously embedded within its purpose and architecture.

Drawing from the work of Philipp Hacker and others, Cezary explores pathways to a future that is neither dystopian nor caught between the dysfunctional pendulum swings of irresponsible profiteering and heavy-handed, externally-driven government controls.

“AI future doesn’t necessarily have to be a dystopian narrative of unbridled authority and ethical blunders. It could instead be a future where AI systems are architected with virtue and respect for our planet, one where both sustainability and alignment aren’t just an afterthought but a guiding principle. A world where the AI industry acknowledges and acts on its responsibilities towards society and the environment.”

The genie is out of the bottle. Certainly a time for healthy vigilance, widening our sense-making lenses and taking thoughtful action when and where possible. Much to learn…More than ever, we are challenged to develop and tap our full human capacity and potential.

Identifying relevant questions, raising awareness about issues and opportunities, understanding the implications, learning about the limitations and potential of AI and considering the ethical issues, accountability and safeguards — these are some of the ways to widen our lens and begin to activate new pathways.

🔮 If we rely too heavily on AI, will we increasingly fall behind in developing our own human intelligences, cognitive capabilities and potential?

🔮 How will biases programmed into AI and the resulting distortions further erode our sense-making and societal wellbeing?

🔮 What happens when we defer increasingly to specialization, in this case, the technology experts?

🔮 What can we do as humans — actively engaged citizens, leading in some new ways — to develop and deploy our own individual and collective intelligence and creativity as a force-for-good?

It’s impossible to fully explore AI without venturing into multiple topics concerning human and natural intelligence, education and learning — all domains of society and its institutions. Everything is interconnected.

Learning to see, discern and understand the patterns that got us here — in our own lives and among us collectively — is vitally important. Learning how to shape (and flow with) new (emergent) patterns, ways of being, relating, meaning-making and valuing — is a creative process that we cannot relegate to the technology, the computers or the machines that we devise.

“Some of the clearest patterns I’ve seen, I have seen in the last few years. I think the creative mind, the creative person, the creative human is able to see patterns. It’s pattern recognition we see that others sometimes don’t. It’s seeing the things you’ve always seen — but never have seen. And seeing the pattern of what’s there and what isn’t. Is it the notes or the space between the notes that is the music? Is it the buildings or the space between the buildings that cities are built on? Is it the one liner or the space between the one liners that is the humor? Is it the idea or the space between the ideas where understanding happens?” ~ Richard Saul Wurman, American architect and TED conference creator

Taking a step back: drawing definitive conclusions about humanity or AI, as with so many issues, can turn rich exploration into ping-pong debates. Binary thinking underlies many of the problems we face. It exacerbates the polarizing patterns that keep things stuck. Too often, it leads nowhere.

At some level, our opinions or positions will not help shift the trajectory toward healthy regenerative progress, if the discourse and decisions are insulated within silos, techno empires, thought leadership circles and lofty positions of power. We cannot make real progress from the same old comfortably narrow decontextualized vantage points or unjustly positioned advantage points.

We must see and understand more of the proverbial elephants from various angles. And our inherent messy humanness means that no one (group) of us can truly see and understand the entire complex web of entangled issues and implications. Nor can we reliably predict the downstream consequences of our decisions and actions.

Our society has veered off course in ways that are dehumanizing and damaging to our living planet. Humanity runs into serious trouble when we lose balancing perspectives and/or function as if we’re masters or centres of the universe. The history of our civilization contains ample evidence; the cumulative effects are unprecedented and daunting. Hindsight doesn’t reliably ensure the foresight needed to navigate wisely. We all need a huge dose of humility to find our way forward from here.

Bringing business, government and civil society into reasonable balance will require us humans working diligently at the intersections and across all domains, co-intelligently breaking out of the pervasive patterns that got us here. And it’s not an Einstein who will heroically achieve this feat. It requires many, many more of us — all citizens on planet earth — to help shift the trajectory toward wholeness, tapping into a much wider and deeper reservoir of individual and collective intelligence, creativity and wisdom.

“Logic will get you from A to B. Imagination will take you everywhere.” ~ Albert Einstein

We put ChatGPT to the test, a very simple one, asking it to summarise the key takeaways from Sir Ken Robinson’s classic TED Talk, “Do Schools Kill Creativity?” If you’ve seen it, watch it again…If you haven’t, enjoy…

Here’s the instant AI result:

1. Creativity is an essential part of human existence and a fundamental part of our education system.
2. The current education system is focused too heavily on standardized tests, which stifles creativity and fails to nurture the talents and interests of individuals.
3. It is essential to create an educational environment that encourages creativity and helps students to explore their own interests and abilities, rather than conforming to a one-size-fits-all approach.
4. Education should be about enabling people to find their own paths and to develop the skills and capabilities that will enable them to become fulfilled and successful in life.

Ho hum…True enough…kinda sterile and forgettable…moving right along…

Recognizing that the quality of ChatGPT output is only as good as its training data and the way we frame our questions…we then asked, “What makes Sir Ken Robinson’s 2006 TEDx — Are Schools Killing Creativity — compelling and inspiring?”

The answer:

Sir Ken Robinson’s 2006 TEDx “Are Schools Killing Creativity?” is compelling and inspiring because it challenges the audience to think differently about education and how it is currently structured. He speaks about the importance of creativity and how it is being stifled by traditional schooling methods, which he believes is leading to a decrease in students’ engagement and overall wellbeing. He also offers solutions to the problem, such as offering more personalized education, focusing on students’ strengths, and creating an environment that encourages creative thinking. By highlighting the importance of creativity and offering practical solutions to the problem, he encourages his viewers to think critically about the current educational system and how it can be improved.

The result, perhaps marginally better, points to some variability depending on the question posed. It doesn’t substantively change anything. It doesn’t address overarching concerns and certainly is no match (yet) for human creativity and potential in its myriad and wondrous forms.

🔮 AI doesn’t capture the essence of the talk…the humour, the delicious sparks of wit, the poignant truths.

🔮 AI can’t take the place of the experience and stories.

🔮 AI can’t convey the purpose that inspired him, nor reawaken the creativity that is still being squashed during early childhood years and well beyond…

🔮 AI is no substitute for nonlinear human relational flow — generative dialogue and sense making

The convenience and speed of ChatGPT can save time and eliminate some mind-numbing tasks. It can take on some of our cognitive assembly line processing.

However, it cannot take the place of human creativity and potential. The danger lies in:

🔮 Relying too heavily on shortcuts that circumvent our creativity.

🔮 Accepting AI output as (whole) definitive unexamined truths.

🔮 Slipping ever-farther-down the addictive slope that the Industrial Age greased for decades.

🔮 Subordinating inherent creativity to the auto-pilot functions of AI.

Perhaps this is a telling harbinger of AI’s utility and limitations and the need for us all to step up and into our full human capacity and potential. Surely AI is only as good as our ability to use it wisely as a force for good? That is the big test being unleashed with every passing day as we experiment and evolve for better or for worse.

Perhaps the big hairy challenge before us is to foster, celebrate and amplify our human creative capacity from childhood onward in myriad ways, before allowing the hairy hand of AI to overshadow our humanness by passive default, unleashing monsters that surpass our own (untapped) capacity?

“The Case Against AI Everything, Everywhere, All at Once” is very well expressed in this article by Judy Estrin.

“…Once again, a handful of competitive but ideologically aligned leaders are telling us that large-scale, general-purpose AI implementations are the only way forward. In doing so, they disregard the dangerous level of complexity and the undue level of control and financial return to be granted to them.

…While they talk about safety and responsibility, large companies protect themselves at the expense of everyone else. With no checks on their power, they move from experimenting in the lab to experimenting on us, not questioning how much agency we want to give up or whether we believe a specific type of intelligence should be the only measure of human value…”

Canadian Bart Hawkins Krebs’ March 2024 article, The existential threat of artificial stupidity, brushes past the commonly posed question, “What if AI falls into the wrong hands?” He declares, “But AI is already in the wrong hands,” and kicks off from there, exposing more naked emperors along with the implications and ramifications entangled with a society out-of-balance, driven by tech billionaires running rogue. This is Part 7 in his ongoing series. Concluding paragraphs:

“Artificial intelligence, then, represents an existential threat to humanity not because of its newness, but because it perpetuates the corporate imperative which was already leading to ecological disaster and civilizational collapse.

But should we fear that artificial intelligence threatens us in other ways? Could AI break free from human control, supersede all human intelligence, and either dispose of us or enslave us? That will be the subject of the next instalment.”

Big questions continue to cast shadows, including:

🔮 What security and privacy risks, breaches and violations loom?

🔮 What ethical and legal safeguards are being considered and developed and by whom?

🔮 Who’s paying attention and who cares about mitigating or balancing the risks of AI?

🔮 How does the pervading model of runaway economic growth continue to feed the underlying dangers of AI?

🔮 How does AI [continue to] exacerbate the meta-perma-poly crises?

🔮 How might AI contribute positively toward a life-affirming wellbeing civilization and economy? What are the real[istic] possibilities?

If AI is used primarily to cleverly game the system, to gain competitive advantage for economic growth…this zero-sum game ultimately hastens humanity’s downward spiral.

Our youth are inheriting a world of complex challenges. This is a time when we must value and develop the best of individual and collective human creativity and spirit…arts and science…to explore deeply…to innovate in diverse ways…to co-discover breakthroughs…

Are we awake yet?

We can’t passively leave the important questions to AI, dulling down our senses and sensibilities. Time to inspire our full capacities and agency, being the life-affirming changes we want to see…using AI ethically and mindfully to enable us to do a better job on the right things.

This February 2024 Center for Humane Technology podcast episode with Audrey Tang, Taiwan’s Minister of Digital Affairs, explores proactive strategies to support healthy information ecosystems, build resilience to cyberattacks, “pre-bunk” deepfakes, and more: Future-proofing Democracy In the Age of AI. What can we learn from Taiwan’s approach and experience?

As we scramble to learn more about AI, pondering big ethical questions and dilemmas and test-driving its latest and greatest functions — we’re reminded time and again that subjects like these cannot be understood out of context or in a vacuum.

Being vigilant and looking at AI through a transcontextual multi-dimensional prism, we have choices. We can:

🔮 Selectively and mindfully lean into some of the convenience and auto pilot features that AI affords

🔮 Free ourselves up from some routine tasks, allowing AI to perform them

🔮 Play within the AI sandbox as part of a wider, richer, iterative, creative process

🔮 Train AI, drawing from and inputting much wider, culturally diverse transdisciplinary sources of information, as part of our wise prompting and deeper inquiry

🔮 Challenge ourselves to ask better, more penetrating questions

🔮 Direct our energies to the priority challenges and opportunities that reach beyond AI’s (current) capacity

🔮 Focus on ways to unleash our creative spirit as a force for human and planetary flourishing

🔮 Embrace our aliveness and continuously challenge AI’s degenerative impact and footprint on the living planet we call home

Ironically, the expanding frontiers of AI nipping at our heels challenge us to live into our full human potential…to celebrate our diverse creative and multi-perspectivist capabilities.

Creativity. Neuroscience. Art. History. Culture. Learning. Philosophy. Stories…

The way humans process and make meaning is more than the speed and combination of aggregated data points.

Alexander Beiner weaves together threads that open portals to deeper thinking and wider possibilities in his article, Lesser Gods: AI Creativity, Neuroscience and the Future of Work — Why AI creativity isn’t what it appears to be

“If much of our economy relies on workers selling their thinking and creativity, and AI can now meet that need, is AI as ‘creative’ as you or me? Can it really take the place of human beings who make their living strategising, thinking and solving complex problems? To start answering that, we need to follow a loose thread that connects economics to neuroscience, philosophy and aesthetics. To flip the tapestry of this historical moment upside down and examine its chaotic underside for a clearer picture of what might be going on.”

“What I’m speaking to here is something that AI can’t touch, but that every person, artist or otherwise, can access: what it feels like to create. To be connected to the world, and responding to it with your own unique perspective by enacting something, whether a dance, a painting, a book or a simple word. Simply put, the art of being alive.”

“What is being asked of us as we try to make sense of AI is to define what it is about creativity that makes us human.

“As AI advances, we need to ask ourselves what we want to do collectively, and how we can use this as an opportunity to change the world consciously. To learn how to use our technology wisely, instead of allowing it to use us. We probably don’t know how to do that yet, and that’s fine. We can create something new.”

In praise of human creativity…

Being fully human begins inside each of us with the stories we tell ourselves and the way those stories shape our lives…our relationships…our work and how we make meaning of the events and circumstances that affect us. New life-affirming patterns evolve through our creativity, our natural messiness and our potential…

James Allen captures the essence beautifully in his May 2024 article: AI & the flat-packing of the human experience — On intelligence, machine metaphors and human creativity.

… “and perhaps our greatest defence against sliding into this beige hellscape, is that creatives and artists continue boldly to carve out and occupy the spaces where no machine can reach, to create art that is unmistakably ensouled, that carries with it all the hallmarks of having been made for an embodied entity and made by an embodied entity. We know these works when we experience them, because we experience them not as a sugar-rush, but as an intimate invitation into states of mind and ways of being that may have been previously hidden from our view. We know them also because they are the works that reach our hearts. Once we experience them, we are forever changed.”…

This is a paradoxical dance. Human and technological. Simple and complex. Shadow and light. Emergence and strategy. Citizens co-empowered everywhere and anywhere can tangibly breathe life into our ongoing everyday stories of interbecoming — connected and interdependent in ways that enable all of us to navigate and thrive.

In The Walrus article, AI Is a False God: The real threat with super intelligence is falling prey to the hype, Navneet Alang puts AI into meaningful, meaning-making and sense-making context:

…”Life and its meaning can’t be reduced to a simple statement, or to a list of names, just as human thought and feeling can’t be reduced to something articulated by what are ultimately ones and zeros. If you find yourself asking AI about the meaning of life, it isn’t the answer that’s wrong. It’s the question. And at this particular juncture in history, it seems worth wondering what it is about the current moment that has us seeking answers from a benevolent, omniscient digital God — that, it turns out, may be neither of those things.”…

…”But the fixes are difficult to implement because of social and political forces, not a lack of insight, thinking, or novelty. In other words, what will hold progress on these issues back will ultimately be what holds everything back: us.”

…”But when the systems that give shape to things start to fade or come under doubt, as has happened to religion, liberalism, democracy, and more, one is left looking for a new God. There is something particularly poignant about the desire to ask ChatGPT to tell us something about a world in which it can occasionally feel like nothing is true. To humans awash with a sea of subjectivity, AI represents the transcendent thing: the impossibly logical mind that can tell us the truth.”..

Being ‘roughly right’ is ‘always’ and ‘never’…‘good enough.’ It reflects this ongoing paradoxical dance of navigating wisely and well in the midst of uncertainty, complexity, knowingness and unknowingness…

Can we understand AI as a narrow slice within much broader contexts? Can we use AI wisely and well, to supercharge some of our important, purposeful life-affirming work? Can we place a whole lot more emphasis on our collective coherent agency? Why not?

It’s time for us all to step up and into our personal and collective agency as human planetary citizens — leading and contributing in some new ways and honouring and restoring some lost, buried or forgotten timeless ways:

🔮 Bringing…combining diverse perspectives and lived experience together

🔮 Finding the common[s] ground

🔮 Working from the commons space between

🔮 Discovering how to truly collaborate, co-create and cross-learn

🔮 Tapping individual and collective creative, imaginative capacities

🔮 Identifying and addressing key questions, issues and opportunities

🔮 Testing alternative ways of being

🔮 Seeding and adapting new workable patterns.

We call it “common[s] sense” and “wisdom with teeth.”

Love this soul-swirling, life-pulsing, far-reaching, deep-diving May 2024 piece that Nora Bateson shared: Still Moving, not a Still Frame.

Threaded into the rich messy tapestry, Nora acknowledges AI with a twirl…putting it into living perspective within the much larger, always-moving ecologies of life and beingness. While it’s difficult to pluck and favour one small morsel from this delicious combination, we chose a tiny sample of wisdom:

“Tech is certainly something to be careful with, but so are words, so is food, so is a garden, so is being a friend. This is not about vilifying technology, it is about becoming alert to those deeper habits that have outsourced care in all aspects of life to a two dimensional world.”

Take in the full nourishing experience. Let Nora’s words wash over you…reach in…touch…provoke…inspire…

We can’t unsee what we see…unfeel what we feel…unsense what we sense and experience. AI can’t touch what it means…what it can mean…to be human.

As we allow and invite the messiness of life…as we refuse to be boxed-in…as we resist the mechanistic gears of the Industrial Age that still draw compartmentalizing, decontextualizing, artificial thingifying lines — we can choose to go with the inherent flow and pulse of life.

While AI enters into the ever-moving pictures, adding and subtracting from the ever-shifting stories and storylines — AI is not THE WHOLE PICTURE and AI is not the WHOLE STORY.

Movements are made…by moving…not by naming, fixating and declaring. Moving reflects livingness and beingness.

As past, present and future merge…we might look back and point at transformative movings that moved us and mattered…

Most of the reflections and references included here convey a need to widen our contextual lens — to access, use and combine all our senses and sense-making capacities. In essence we are challenged to reframe what we mean by intelligence beyond component parts — to embody our co-intelligence, as Tom Atlee explores and implores in his superb new book, Co-intelligence: The Applied Wisdom of Wholeness, Interconnectedness, and Co-Creativity.

Herein we might co-discover wide and far-reaching co-resonance that can enrich our ongoing learning journey toward interbecoming wiser and a whole lot more humble — living into an unknowable future as we endeavour to shape life-affirming patterns. Co-intelligence widens our eyes, illuminating patterns and pathways available that far exceed the narrow band that AI represents. This compendium, synthesizing decades of research and work, builds from the wise democracy pattern language.

In ongoing conversations with Tom, we’ve been exploring how to nurture wiser co-intelligence patterns and extend the horizons of possibilities that can make this work, in its multiple forms, more widely accessible. Retrospective and forward-facing, we can all gain and expand our relational and sense-making capacity, co-activating co-intelligence as engaged participants and care holders in our wiser human evolution — within the contexts of ongoing everyday work and life!

The latest and burgeoning AI developments bring us back to focus on human creativity — the amazing examples that abound and those yet unnoticed or untapped. Under any scenario, as more AI rolls out…let’s focus our energies on the many ways to inspire, develop, apply and amplify the creative passions, capacities and potential of citizens anywhere and everywhere. As we learn, work and play, living into our full(er) humanness, consciousness and co-intelligence for the common good deserves our undivided attention.

It’s time to (re)claim and channel our inherent creativity, curiosity and joyful spirit in little and big ways that can make a real difference…along with a wary eye as we deploy AI.

Let’s dance!

Footnote: The various references and resources included are by no means exhaustive. Since first publishing this article on LinkedIn, we’ve been updating it with additional perspectives and links. Latest additions in March, April and May 2024, pointing to more dangers and choices, are the articles by Bart Hawkins Krebs, Informa: Dark Reading and James Allen. Nora Bateson, referenced in several of our articles, shared her powerful new piece in May 2024. The Center for Humane Technology podcast with Audrey Tang unpacks Taiwan’s proactive approach. And, having read Tom Atlee’s new book, Co-intelligence, we potentially open more eyes toward the unimpossible — the life-affirming possibilities available and yet untapped. Combined…these add up to a long read. Hope folks will loop back whenever time permits. (Those who prefer videos, podcasts or webinars can find plenty of options.)

As we consider various scenarios, our best outcomes are potentially gained by investing in undivided efforts. The deeper underlying message — obvious by now — urges us to continue doing our own due diligence and not allow AI to (inadvertently, subtly, or intentionally) hijack our own sense-making and distract us from our most important work and challenges. Vigilance is advised and dark shadows continue to loom. The challenges posed are intertwined with patterns that cut across all domains of society. Working collaboratively across disciplines, sectors and all the divides — in real time — can go much farther towards learning and applying best practices as we navigate this bumpy ride that transcends AI. 💁🏻 💁🏽‍♂️ 🤦🏻‍♀️

Unstitution was birthed as a collective creative commons and nested ecosystem. We (co-)catalyze and support collaborative communities, initiatives and coalitions where people from across sectors, disciplines, cultures, generations and walks-of-life work together on mission critical issues. From readiness through to regenerative progress — moving beyond polarization — is how we roll. The links embedded throughout this article are a warm invitation to go a bit deeper, at any time. For more insights reflecting our ongoing journey, our suite of Unstitution articles is published on Medium. They portray a small sample of the ways we are adapting and contributing among ever-expanding commons-based communities and initiatives inspired and fuelled by citizens — perhaps better described as denizens — anywhere in the world — living into the principles and spirit that govern our collaborative work.

You can follow Unstitution and engage with us on LinkedIn. Many of our posts and perspectives also pop up under hashtags #messyhumanness and #wisdomwithteeth.

Sharing is caring. Please take a moment to tap the 👋🏼 icon if you liked this article. Medium lets you clap up to 50x on any articles you appreciate. Good to know :-)



Unstitution’s mission is bold and heart-centred: to Reboot Society’s Operating System.