What is the value of Truth?

Dr. Ross Wirth
New Era Organizations
18 min read · Feb 11, 2024

AI is not the threat; it is the users.
Dr Reg Butterfield ©
About a 20-minute read.

Recent events have reminded me of my youth, when I studied law at the beginning of my career. One of the objective principles of the law that I studied is “truth”. I learnt that the principle of objective truth must lie at the foundation of the entire legal system and, in particular, of the system of law.

Truth in law is simply “the actual state of things”. This is reflected in Western dictionaries as “the quality or state of being true”. In the scientific world it is often cited that truth is “a fact or belief that is accepted as true”. Aristotle put it this way: “To say of what is that it is not, or of what is not that it is, is false, while to say of what is that it is, and of what is not that it is not, is true” [Metaphysics, 1011b25]. Some talk about four types of truth: objective, subjective, normative, and positive. It seems that the definition of truth can vary depending on the context in which the term is used. I suggest that a fifth view has emerged from the global powers who control the Internet, social media, and AI. I will let readers decide for themselves what this fifth version may be.

I mention this because the question of what truth really means came to mind as I witnessed events online and read many emails, articles, and research papers. I felt it was time to discuss this idea of truth in today’s world of the Internet, social media, and AI. It affects everybody, globally.

The ‘public’ world of OpenAI

This week has been fascinating for those of us interested in the progress of AI and, in particular, the organisations and personalities behind Generative-AI. I make no apologies for bringing up the surprising behaviour of the Board of OpenAI, even though you will all have seen the chatter across just about every media outlet. The sacking of CEO Sam Altman, and the response of his workforce, sent shockwaves through the industry and among the users of AI.

Whilst nothing in detail has been announced as to what really caused the sacking, my instincts tell me that what was publicly stated was what I term an ‘economical truth’; I suggest it is more likely about control of a potential money-making technology. What causes me to think this?

Before the ink was dry on this piece, events took yet another turn. It seems that Microsoft has moved to protect its interests in its alliance with OpenAI. If the media reports are correct, it brokered a deal for Altman to return as CEO with a reconfigured board at OpenAI.

I guess this reinforces how far OpenAI is moving from the original ideals that Altman and Elon Musk had when they founded it: from a non-profit company to a money-maker. As Musk tweeted in February 2023, “OpenAI was created as an open source (which is why I named it ‘Open’ AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft.”

Events seem to reinforce his tweet, and now it remains to be seen what will happen to the publicly accessible ChatGPT-4. Don’t be surprised if it is allowed to slowly become obsolete over time, or at best used for some testing by OpenAI as they license their products for use by others who are not a threat to Microsoft’s dominance in the market.

Microsoft must be salivating at the thought of increased dominance in this exciting emerging world of AI.

Zoom and Broken Promises

Talking about AI and major companies, my concerns about Zoom and personal data are now heightened even more. A few weeks back, social media was full of comments and articles about Zoom changing its Terms and Conditions of Use to allow data collected from customers’ online Zoom meetings and events to help ‘train’ its own AI.

In response to the kick-back from customers threatening to leave Zoom if it continued down this path, Zoom quickly reviewed its new policy. Organisations stated that they were now considering a move to other providers instead of Zoom, and there is no shortage of options for users. For example, Google Meet encrypts user data within online meetings, and no additional software needs to be installed on users’ computers, phones, or tablets. Confidentiality and data safety matter to users, and Zoom must have realised that it had misjudged the situation.

Zoom updated their blog on August 11, 2023, claiming to “make it clear that Zoom does not use any of your audio, video, chat, screen sharing, attachments, or other communications like customer content (such as poll results, whiteboard, and reactions) to train Zoom’s or third-party artificial intelligence models.”

So far so good: Zoom seems to have realised its error and customers can feel more comfortable again. Or maybe not.

My colleagues and I held a conference on Zoom on 10 November 2023 and enjoyed a discussion around AI and HR of the future. Our aim was to generate new ideas and different perspectives about ways forward. Because we wanted participants to feel comfortable in discussing off-the-wall, new, or innovative ideas, we decided not to record the event or publish attributed content.

It was a great discussion and we moved forward with our ideas in confidence. To our surprise, our colleague who hosted the discussion received an email from Zoom containing a summary of the whole discussion. It was a very good summary, which included the names of the participants and their comments. For confidentiality reasons, the image below shows just the header of the email that Zoom sent to my colleague.

We had not requested this summary. On the contrary, the holder of the Zoom contract had explicitly said no where the contract asks about using data for any purpose, which would have covered unrequested summaries such as the one we received after our conference.

If Zoom is not using confidential client data, as per its August statement above and as reiterated in its Terms and Conditions of Use, what is it collecting it for? Nobody asked Zoom to collect the data and create a summary.

Personally, I no longer use Zoom and will be reluctant to join others in discussions using Zoom. I am reminded of a quote that I read some months back, “AI is not a technological problem; it highlights an organizational problem.”

Zoom, what is the truth about your data collecting activities?

If I were the CEO of a major company using Zoom, I would be very worried about how we use it. Just consider the risk to your shareholder value if confidential data, such as the summary we received of our meeting, is held by others not under your control.

“AI today is unbelievably intelligent and then shockingly stupid.”

Anybody who uses AI will know just how true this statement is; even ChatGPT-4 gives a ‘health warning’ about its results and suggests that people check the facts independently.

However, another arena in which truth is being tested is Black Friday sales. Yes, it is that time of the year again, and the media is full of hype about savings of up to a third on a whole range of unnecessary goods. The old saying “if it seems too good to be true, then it probably isn’t” comes to mind.

A consumer organisation in the UK has published a study in which it analysed more than 66,000 prices in the six months before and after 2022’s Black Friday to see how the sale-day prices compared. The result indicates that 98% of the Black Friday bargains were not bargains at all. Yes, that’s right: in 2022 only 2% of the sale goods were cheaper on Black Friday than at any other time of the year.

It seems that the UK’s retailers aren’t the only ones being economical with the truth about their discounts. Consumers across Europe are also complaining on social media about the same problem, and according to media and survey publications on the Internet, this has been the case for many years. Yet a lot of the media in the US, and across the Internet, are still extolling the benefits of buying on Black Friday, albeit the day has nowadays turned into a week.

Whatever the truth, wherever you are, happy hunting and let us know if your bargains are really bargains and whether they have made you happy. Just drop a line in the comments below.

An explosion of Gen-AI websites and media outlets

A Google search for ‘Gen-AI news’ found over 320 million results! A brief foray amongst them indicated that they were all competing for viewers keen to keep up with the rapid advances being made and their impact on organisations and individuals alike; many just happen to have products or subscriptions to sell. It is not necessary to go into the detail discussed on these sites, as readers can simply Google and read the results for themselves.

What is interesting is the commentary about the value, or otherwise, of AI today. It reminded me of the comments made about new technology over the years, and so I wondered how today’s remarks compare with those made about the Internet, social media, and early AI.

Whilst not exhaustive, the following historical outline, with quotes, captures the essence of the stages of development and maturity of these technologies and their application.

The term “artificial intelligence” was coined in the 1955 proposal for the Dartmouth Workshop, held in the summer of 1956. The Dartmouth Summer Research Project on Artificial Intelligence is widely considered the founding event of artificial intelligence as a field; the project was essentially an extended brainstorming session. “Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” (John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, Dartmouth Workshop proposal, 1955).

The challenge they faced was that the technology needed to follow through on their ideas was not available. The rest of this section discusses the birth and development of some of the technology that has enabled the ongoing work around AI to move into the public domain and gain the influence that it has today. A pattern seems to emerge from the quotes of each period: what is considered a great idea at the time of its introduction often develops into something quite different. Will the same be said of AI in the future?

Let’s move forward to 1969 and the launch of Arpanet, the Internet’s precursor, which most people today have probably never heard of. At the time, media coverage was limited, and the significance of the event wasn’t fully grasped by the general public. However, the following quote captures the essence of the initial vision: “We are talking about an entirely new and very different way for people to communicate with each other.” (Leonard Kleinrock, one of the developers of Arpanet).

The 1980s was the period of expansion and commercialisation of the Internet. As the Internet expanded and protocols like TCP/IP became widely adopted, the media started to take notice. However, the commercial potential wasn’t immediately clear to everyone. “Imagine, if you can, sitting down to your morning coffee, turning on your home computer to read the day’s newspaper. Well, it’s not as far-fetched as it may seem.” (Christopher Kent, 1981, in a report about online newspapers).

During the 1990s the World Wide Web became the buzzword and led to mainstream recognition. Its advent in the early 1990s marked a turning point, attracting more attention from the media. Here’s a quote capturing the excitement: “The Internet is like a super library where you can go and find any book you want and read it for free.” (Anonymous, early 1990s). Those were the days before Amazon came on the scene in 1995.

The introduction of the Mosaic web browser in 1993 played a pivotal role in making the World Wide Web more accessible to the general public. “Mosaic makes the Internet look like nothing special, but it’s one of the most powerful tools in the history of information.” (Newsweek, 1993). Wow, how times have changed since then.

As e-commerce started to gain traction, the media began to recognise the Internet’s potential for business. “The Internet is not just a technological innovation. It’s a cultural revolution.” (Bill Gates, 1995). I wonder if he really understood just how much of a cultural revolution it would be and how this has changed societies across much of the world.

However, it was not all roses, and concerns about work overload started to emerge as more companies came to rely on the Internet. “The Internet is drowning us in information but starving us of wisdom.” (Anonymous, 1995). Remember, this was before social media and woke behaviour.

The late 1990s saw the dotcom boom, when many Internet-based companies were founded. This era was characterised by both enthusiasm and scepticism. “The Internet is transforming business. Amazon.com is a great example of how this online business model is changing the landscape.” (Various media sources, late 1990s). “Investors should be cautious as many dotcoms are operating at a loss and may not be sustainable in the long term.” (Financial analysts, late 1990s). These words were indeed prophetic, as the 2000 dotcom bust and its economic fallout caused chaos in the markets. “The dotcom crash is a wake-up call, signalling the dangers of speculative excess and the fragility of the Internet economy.” (Financial analyst, 2000).

By 1997, as the Internet became more ingrained in daily life, discussions about its impact on society emerged: “The Internet is changing the way we live and work. It’s not just a tool; it’s a cultural and social force.” (Time magazine, 1997).

The turn of the century (2000) saw the bursting of the dotcom bubble. This event prompted reflections on the viability of Internet-based businesses: “The dotcom boom was marked by excess and speculation. The aftermath is a sobering reminder of the risks of the new economy.” (Various financial analysts, 2000).

During all this time, people beavered away at trying to improve AI and bring it into the mainstream. After initial enthusiasm, the field experienced periods known as “AI winters,” marked by reduced funding and progress. “The promise of AI has often outpaced the reality. It’s a field with great potential, but we must approach it with a realistic understanding of its current limitations.” (AI researcher, 2001). It took another ten years before AI really hit the media headlines.

Facebook burst into life in 2004 and quickly gained popularity on US college campuses. “Facebook is an online directory that connects people through social networks at colleges.” (Mark Zuckerberg, 2004). Little did people know at the time that Zuckerberg would become a miner (of data) who would change the lives of millions as well as making his own millions.

Around 2007 people experienced the rise of social media. The emergence of social media platforms brought new dimensions to the Internet. “Social media is transforming the way we connect and share information. The Internet is no longer just about websites; it’s about communities.” (TechCrunch, 2007).

By 2009 Twitter had brought real-time communication to the masses and quickly become a prominent platform. “Twitter is not a triumph of technology; it’s a triumph of humanity.” (Biz Stone, co-founder of Twitter, 2009). Maybe it would now be called a distortion of truth and reality through the power of technology.

From around 2010 the role of social media in facilitating social and political movements became a topic of discussion: “Social media is giving a voice to the voiceless and power to the powerless.” (Anonymous activist, 2010).

It didn’t take long before people were voicing concern about privacy and social media, particularly as social media platforms expanded across a wide range of social and political arenas. “As our dependence on the Internet grows, so does our vulnerability to cyber-attacks. We need to be prepared for the dark side of connectivity.” (Security expert, 2010).

“We’ve traded our privacy for the convenience of staying connected. The question is, was it worth it?” (Tech analyst, 2013). The victims of catfishing are pretty clear on that particular question; not everything you see, read, or hear is the truth.

In 2011 AI burst into the media headlines with IBM’s Watson. The computer system was initially developed to answer questions on the quiz show Jeopardy!, and in 2011 Watson competed on Jeopardy! against champions Brad Rutter and Ken Jennings, winning the first-place prize of 1 million USD. Watson soon had a setback, and the media was equally loud in reporting it, which raised public awareness about AI even more. The headlines even indicated a touch of concern about AI: “Watson supercomputer was defeated in Jeopardy by lone physicist — long live humanity! IBM’s Watson supercomputer might have crushed the puny likes of Jeopardy! champions Ken Jennings and Brad Rutter, but it was no match for former physicist and New Jersey congressman Rush Holt.” (Venturebeat.com). Even AI cannot win them all.

As AI progressed and became embedded in many business systems, particularly in the automation of simple work operations and basic office systems technology, concerns began to surface in the media. 2015 was when discussions about the impact of AI on jobs and automation gained prominence. “AI and automation have the potential to transform industries and improve efficiency, but they also raise questions about the future of work.” (Economist, 2015).

Around this time, social media was raising other concerns, as fake news and questions about the reliability of information were being discussed. “Social media has become a powerful tool for information dissemination, but it also raises questions about the reliability of the information we consume.” (Media expert, 2016). “Social media is a breeding ground for fake news and misinformation. It’s eroding the foundation of an informed society.” (Media critic, 2016). Now, some seven years later, nothing has really improved, despite the many promises made and the tame excuses given by the controllers of social media.

In 2016 AI hit the headlines again. DeepMind’s victory with “AlphaGo” against world champion Lee Sedol was seen as a significant achievement for artificial neural networks. The fact that it did not win every game along the way seemed to get lost in all the triumphant media coverage.

By 2018, the massive growth and use of social media led to a growing awareness of its potential impact on mental health. “Social media offers connection, but it also brings challenges to mental well-being. Striking the right balance is crucial.” (Psychologist, 2018). “The constant comparison, cyberbullying, and addiction issues on social media are contributing to a mental health crisis, especially among the youth.” (Psychiatrist, 2018).

This focus on mental health and well-being seemed to drag AI into discussions of ethics and bias in AI. Maybe this was because past experience indicated that new technology in society was something to be wary of; after all, unemployment had already been raised as an issue and had caused some stress in sections of society. “As we develop AI, we must ensure that it reflects our values and is free from bias. Ethical considerations are paramount.” (AI ethicist, 2018). “The use of facial recognition technology raises serious ethical questions about surveillance, privacy, and the potential for abuse.” (Civil liberties advocate, 2019).

On a more positive note, AI is making great strides in supporting the healthcare industries, with an increased focus on using AI for tasks such as medical imaging analysis, drug discovery, and personalised medicine.

It was this development and use of AI in the world of medicine that came to everybody’s attention when AI was used to help develop vaccines to combat the Covid-19 pandemic that started in 2019. To this day, the debate rages on about the ethics of that process and its impact on individuals, both positive and negative. This discussion epitomises the debates around AI and the ethics of its use in society, and there are various versions of what people believe to be the truth.

Unfortunately, mankind has been at war throughout most of its history on the earth, and AI is taking this to another level. “The development of autonomous weapons powered by AI poses a grave threat to humanity. We must carefully consider the consequences of unleashing such technology.” (AI ethicist, 2021). Unfortunately, this is still playing out around the world today, with numerous versions of the truth of such events.

As space and time are running out for this newsletter, I will now move forward quickly and place a wide range of quotes in the indented section that follows.

Some of the more recent discussions have led to comments such as:

“The progress in AI is astonishing. It’s helping us solve complex problems and enhance efficiency in ways we couldn’t have imagined a few years ago.”
“The rollout of 5G is a game-changer. It’s not just faster internet; it opens the door to innovations we haven’t even dreamed of yet.”
“E-commerce is transforming the way we shop and do business. The convenience it brings, especially during challenging times, is invaluable.”
“Remote work tools have revolutionized the workplace. They’ve given us the flexibility to work from anywhere, making life more manageable.”
“Social media has allowed me to connect with people from all over the world. It’s a powerful tool for building communities and sharing ideas.”

On a more negative note, they are saying:

“While AI is impressive, the rapid automation of jobs raises concerns about unemployment and the need for reskilling in the workforce.”
“The rampant spread of misinformation on social media is a serious threat to society. It undermines trust and can have real-world consequences.”
“The more we rely on the internet, the more our privacy is at risk. We need to address the balance between convenience and protecting personal data.”
“Social media can be a double-edged sword. While it connects us, it also contributes to feelings of inadequacy and anxiety. We need a healthier online culture.”
“Content moderation is an uphill battle. Striking the right balance between allowing free expression and preventing harm is an ongoing challenge for social media platforms.”

The Internet ecosystem

The pattern of responses over time to the changing roles and activities within the Internet ecosystem indicates a polarisation between positive and negative discussion. The one thing that everybody can agree on is that over the last ten years everything has changed, both in the volume of use and in the balance of the companies that make up the Internet ecosystem. We have witnessed the move from an Internet based on multitudes of websites selling their wares, supported by a wide range of providers, to an Internet dominated by a very small number of companies and providers; the balance of power and access has shifted. The power of the Internet is in the hands of a small cohort of very wealthy people who seem immune to the concerns that society has about the impact of the technology that the few control.

There has been increasing demand for legislators to rein in this power and take control of the few, known as the “hyperscalers”. Some say it is too late, yet the EU responded with the Digital Markets Act (DMA), adopted on 5 July 2022, which aims to make digital markets fairer and more contestable. Others need to follow.

Insofar as AI is concerned, the recent debacle around OpenAI discussed earlier is a clear indication that the stakes are high when it comes to power and control in the AI market. I suggest we are witnessing the beginning of another war on the Internet where high capital investment will push out the small fry and let the same big organisations dominate.

The response by legislators is as polarised as the voices of the users. The US, EU, and China are taking different paths. Research by Brookings indicates that the “EU and U.S. strategies share a conceptual alignment on a risk-based approach, agree on key principles of trustworthy AI, and endorse an important role for international standards. However, the specifics of these AI risk management regimes have more differences than similarities.”

“Regarding many specific AI applications, especially those related to socioeconomic processes and online platforms, the EU and U.S. are on a path to significant misalignment.” I suggest that this does not bode well when two powers as important in trade and influence as the US and the EU take different paths on such a significant subject, one that will impact future prosperity. Which model will emerging giants such as India commit to?

Whilst this difference is debated, China is taking a different approach, combining national-level, provincial, and local regulations with an emphasis on upholding state power and cultural values. We could write a book on the impact of AI on Beijing and Moscow; maybe that’s for another time, as unravelling the truth will be a minefield in itself.

It is not just AI that needs regulating; the outdated original rules and ‘laws’ of the Internet also need to be reviewed and re-written for a modern global society. Unfortunately, the sight of legislators fumbling around AI regulation does not bode well for this happening to the Internet any time soon.

Bringing this discussion to a conclusion, I suggest that there is a pattern in how the media and individuals respond to new technology entering the public domain as products and services.

Much of the positivity is based on truths, half-truths, and downright fantasy, while the negativity gets lost in media coverage shaped by each outlet’s own bias. After a period of time, opinions change as the new technology becomes a tool that certain members of society use to gain power, control, and wealth. Looking back through history, well before the Internet and AI, this pattern was clear, and I have discussed it in a previous publication and in discussion forums. The difference today is that the technology has had a global impact at the individual level like no previous technology. This alone means that truth must prevail.

To protect their interests, the major wealth holders have managed to avoid the truth of their impact and continue to take control of events as new technology evolves and is introduced to society, just as they are doing with the emergence of Gen-AI.

As an eternal pragmatic optimist, I take comfort from my research into the history of organisations, which suggests that there is a limit to what society will tolerate. The great conglomerate Standard Oil is but one case in point: US legislators toppled it despite all the industrial might that the conglomerates of the time threw at them.

