Drew Klonsky
Feb 7, 2020


A more human future (part 1)

The quest for balance between technology's effect on our humanity and how we might shape it to our advantage is no longer the banter of futurists. Fueled by hope and hype, its voice echoes throughout the media, testy client meetings and inebriated pub debates. We are at a juncture where the moral considerations of the products, services and communication we serve can't be ignored.

The amazing promise and the ugly

For decades we put our faith in technology as the ultimate problem-solver; any innovation tied to technological advance was heralded as an enabler of human futures, enhancing our humanity and our aspirations. The amazing transformation of performing tasks faster and accessing infinite information at virtually no cost is now backfiring. Left unchecked, we look back with an empty sense of the openings tech created, only to feel suspect of its use. A use that has sparked an endless stream of questions about the consequences of, and decisions behind, corporations' reckless use of data and tech.

If culture is the middleman brokering between humanity and tech, this moment of introspection will be pivotal. Today Silicon Valley's halo looks hazy, whilst the world's leading brands find themselves under a level of scrutiny for misuse of data that exceeds both the capacity of government regulation and the average person's ability to comprehend what they have given up and how it is being used.

The innocent promise of an unchecked digital world now appears dystopian in itself. We are inundated with fake news, coaxed by fake business models (e.g. WeWork, Theranos), and captivated by the false promise of a gig economy that dangles value creation for 'contractors' while serving the stratospheric growth and profit motives of the platforms that supply them. Every day our interactions and experiences with machines become more invisible as algorithms curate our feeds. Unseen, this machine learning induces bias, hindering some people from breaking through and the rest of us from discovering them.

The levels of digital disenchantment are profound. The Facebook–Cambridge Analytica scandal exposed the fragility of the promise offered by a more connected, democratic digital world. Mark Zuckerberg's 'my mistake, I'm sorry' response, shrouding responsibility, sparked a counterinsurgency of governance and individual awareness of who is watching the algorithms, as we wonder whether connectivity is more effective at spreading our vices through dopamine triggers and sect-like divisiveness than at drawing out what is absolute and good for humanity.

The amazing pivot and the brilliance

The upcoming year will be littered with news reports of the "unintended consequence": the worry that evolved tech and AI-led product design and communication walk a fine line, with informed algorithms fostering conscious and unconscious bias and inadvertently perpetuating the digital divide; where machines read and write quantifiable data that act as the input and output of our values, cultures and experiences, mimicking the subtle nuances of humanity.

Wired founder Kevin Kelly declared in 2016: "The business plans of the next 10,000 startups are easy to forecast: Take X and add AI. #theinevitable". The question to be answered is how tech and AI can act as our companions on this journey, empowering not just our future but one where we infuse humanity and soul into tech and AI.

A pillar of achieving this future is the recognition of, and action around, a brand's purpose. Though debatable when exercised through green(washing) or marketing stunts, the authentic representation of a brand's purpose needs to be concise to avoid dilution and corruption. Therein lies the tension: a brand's ability to achieve the idealised outcome whilst managing the harsh truths of the business, the tech needed and the ethical standards expected by people. In other words, business as usual, as it is conducted now, is not good business.

The burden, or privilege, of creating a more human future is a complex responsibility. How can tech and AI make us more human? What futures do we want for our businesses, brands and ourselves? What defines our humanity? And how do we authentically realise a shift where people, corporations and culture adopt humanity as the new minimum viable approach to tech? It may seem complex on almost every imaginable level, yet the answer is deceptively simple: we must bring humanity back to tech.

The assignment of our humanity to tech is not self-evident. It must be distilled from our behaviours and aspirations, and designed into new contexts. Considering, evaluating and monitoring what is good for all stakeholders will swing the belief pendulum back to the original promise of tech, or, as Tristan Harris, co-founder of the Center for Humane Technology, writes, a world where "humanity is not downgraded by technology", but the opposite.

Soon: The creative answer is to Make Good (part 2)
