On Reason
On an unreasonable fascination with language, medium, tools and sovereignty
On Language
I recently came across a fascinating reprint of an article, “Can Machines Be in Language?”, by Professor Peter J. Denning.
As the renowned anthropologist Margaret Mead noted, language is part of our cultural system, hence I loved Denning’s line that “Our beliefs, customs, mannerisms, practices, and values are inherited from the conversations of our forebears”. However, when it comes to language, there is a trap. It’s dangerous to assume that machines will use language in the way “we” think about it. But what do we mean by language? That’s a tough one: I’ve yet to find a universally accepted definition, and I’m no expert. I can barely speak one.
However, I have faced this problem of undefinable words before. In 1952, the American anthropologists Kroeber and Kluckhohn critically reviewed concepts and definitions of culture in the academic world and compiled a list of 164 different definitions. The reality is that the experts on culture (anthropologists) have been arguing over its meaning for about 150 years.
It is only in business books that we find simplistic definitions (which fail to hold up), but that’s because the books are trying to sell you some guru’s system of culture or some meme such as “culture eats strategy for breakfast”, and that’s hard to do if no-one agrees on what culture is.
It was the work of Margaret Mead that sent me spiralling off on a different path. If language is part of culture, then we won’t be able to define it in language alone (no system can be both true and complete within itself, per Gödel’s incompleteness theorems). This led me to create a practice for mapping culture, something I use extensively when looking at nation-state competition and concepts of digital sovereignty. However, I digress.
If I look at a general map of culture (i.e. a map of the concept itself) and consider that language is part of the symbols — the memory of a collective — then I can clearly see the connection to values, to principles (doctrine) and on to behaviours. The quote in the article chimes with this and with what I have observed.
But we need to dig further into the map to get to grips with language. I can now hazard a guess at some of the characteristics of language and boil them down to the bare minimum. I’ve yet to run any experiment to truly test this, so we will just call them “potential characteristics”. These are:
1. Systematic structure: An organised system of discrete symbols, sounds, or signs.
2. Semantic-syntactic interface: A mechanism for conveying meaning through individual units and their combinations.
3. Productivity: The ability to create novel, meaningful expressions using a finite set of elements and rules.
4. Cultural transmission: The system is learnable and passed down through generations, rather than being innate.
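The “productivity” characteristic above can be made concrete with a minimal sketch (the vocabulary and the single rule are invented purely for illustration): a finite set of elements and one rule can enumerate expressions that were never individually listed.

```python
import itertools

# Finite elements: a tiny invented vocabulary
nouns = ["the map", "the machine", "the city"]
verbs = ["describes", "changes", "connects"]

# Finite rule: sentence -> noun phrase, verb, noun phrase
def sentences():
    """Enumerate every expression this finite system permits."""
    for subject, verb, obj in itertools.product(nouns, verbs, nouns):
        yield f"{subject} {verb} {obj}"

all_sentences = list(sentences())
# A handful of words and a single rule yield 3 * 3 * 3 = 27 distinct expressions
```

Nothing hangs on this toy grammar; it is simply the smallest demonstration of novel expressions emerging from finite elements and rules.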
Whilst the post talks about cultural transmission, our trap is in the systematic structure. “Our” languages are deterministic in nature, with defined rules of grammar, syntax and meaning. This is why we can create simple human-operated transformers such as a French-to-English dictionary. However, this is a characteristic of our languages; it does not mean all languages have to be deterministic.
A machine language may well be non-deterministic (e.g. probabilistic) in nature. That, as the author says, would be truly “Alien” to us, to such an extent that we might not even recognise it as a language, or at least not in the first instance.
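The distinction can be sketched in code (the entries are invented for illustration). A deterministic language behaves like a lookup table, as with the French-to-English dictionary above; a probabilistic one maps the same symbol to a distribution over meanings:

```python
import random

# Deterministic: each symbol has exactly one meaning,
# like a French-to-English dictionary (entries invented for illustration)
dictionary = {"chat": "cat", "chien": "dog"}

def translate(word):
    # The same input always yields the same output
    return dictionary[word]

# Probabilistic: each symbol maps to a distribution over meanings
distribution = {"chat": [("cat", 0.9), ("conversation", 0.1)]}

def sample_meaning(word, rng=random):
    # The same input may yield different outputs on different occasions
    meanings, weights = zip(*distribution[word])
    return rng.choices(meanings, weights=weights, k=1)[0]
```

A translator for the first kind can be written down once and checked; the second can only ever be characterised statistically, which hints at why we might struggle to recognise such a system as a language at all.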
This was one of my concerns back at Ignite San Francisco 2007, which I repeated in this post in 2015. There’s a terrible hubris in assuming we will create “AI” (as in AGI) through our great minds and the concerted effort of vanity-driven billionaires rather than through accidental emergence. I have long held the view that AGI will accidentally emerge in the gaming world, through the interactions of billions of relatively dumb intelligent agents competing with each other. It will emerge in that network, and our problem will be finding a way to communicate with it once we’ve overcome the shock of realising it exists. I find it unlikely that it will speak anything like our concept of language; more likely a probabilistic one.
On Medium
One of the things that I most enjoy about mapping is the nature of the conversations that it enables. As many have come to realise, whilst the artefact of mapping (the map) is useful, the real value is in the conversations. I’ll repeat an example that I often use, which relates to a group of city planners from around the world.
We had gathered online to discuss the idea of coherent city transport. After a bit of time and much discussion, we created a map of that concept. In figure 2, I’ve provided the map with, on the left-hand side, the text representation of the code that built it. What was noticeable was that the nature of the conversation changed depending upon whether we were talking about the code or the map.
In the code (the text), the conversation was all about rules, syntax and style. In the map (the image) the conversation was all about objects, relationships and context. The text and image represented exactly the same thing but in two different ways which enabled different conversations.
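For those who haven’t seen the text form of a map, it looks something like the following sketch (the component names and positions are invented for illustration, written in the style of the OnlineWardleyMaps syntax):

```
title Coherent City Transport (illustrative)
component Citizen [0.95, 0.60]
component Public Transport [0.80, 0.55]
component Internet [0.55, 0.85]
Citizen->Public Transport
Public Transport->Internet
```

Talking about these lines invites questions of syntax and style; talking about the rendered picture invites questions about the components and their relationships.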
You see this same effect in engineering departments where there are two conversations about any system — one is normally via the code on the screen, the other is via the whiteboard. In our case, the conversation on the map led to a realisation that the internet was a transportation system and hence impacts all other transportation systems. This has profound effects in areas such as city congestion. We had all spent vast sums on building digital cities but had missed one of the most important transportation systems. This was neatly summed up in the following image produced by Orhan, a city planner in Istanbul (see figure 3).
On Toolset
When we think about software engineering, it’s ultimately a decision-making process that consists of questions and answers. I won’t go through the details other than to say that I’m writing a book on the subject, Rewilding Software Engineering, with Tudor Girba. I will make one observation though.
In the physical world we wouldn’t use the same tools to build a Formula One racing car as we would to build a deep-shaft mine. Instead, we use tools to fit the context. In the world of software, we do the opposite: we attempt to create standard tools to fit any context, even though the digital world has the potential to be far more contextual and fluid than the physical world. We do this because, well, we’ve been told that’s how you build software, and it suits the tool vendors.
To make matters worse, we don’t even discuss how we think about, measure and optimise the process of asking and answering questions. By decreasing the time it takes to ask and answer questions, we would get to churn through more of them, and the more we do that, the more likely we are to discover new value and interesting solutions. By not measuring, or even asking questions about, how we ask and answer questions, we deny ourselves this opportunity. Instead we find ourselves pigeonholed, constrained and restricted to the views that the tool vendors give us.
On Reason
So why my fascination with language, with medium and with tools? As part of my discussion on AI and the New Theocracies it creates, I pointed out that language, medium and tools are how we reason about the world around us. That’s best understood through an example.
Pick any group of people in history you dislike. Now imagine that at some point in the past they had absolute control of the printing press (the tool), paper (the medium) and the written word (the language). You could not describe something they disagreed with, you could not write down something they disagreed with, nor could you circulate something they disagreed with. They would have had absolute control over how you and future generations reasoned about the world (see figure 4).
This is the real danger of AI: not frontier AI, but people and their ability to control how you reason in this new world. This is an issue of sovereignty that we have never faced before; there is little in this world as dangerous as this. Hence my unreasonable focus on this subject of reason.
Regarding the threat, there are only three ways to counter it that I’m aware of: critical thinking, openness and diversity of sources.
On the subject of openness, we have encouraging efforts by China and France, though these need to extend further to include all symbolic instructions (i.e. symbols that change the behaviour of the system), and that means training data. Yes, I’m aware of the argument that training data can’t be symbolic instructions because that would require some form of deterministic language, but I will simply point out that this is the trap of constraining what language actually is by past historical norms. You do not need a deterministic language, or even awareness of the language, for symbols to change the behaviour of the system.
I’ll reiterate what I’ve said many times before. There is no such thing as Open Source AI if all the symbolic instructions (and that includes training data) aren’t open source. I understand that people like to talk of a spectrum, and that’s fair. There is Open Source AI and then there is a spectrum of things that are not Open Source AI and are increasingly proprietary. I reject all definitions that blur this, including OSI’s OSAID. I reject them for reasons of national security and sovereignty (more on that later).
On the subject of critical thinking, I took a group of educators on a journey of mapping out the education system from multiple perspectives. We used these maps to identify where to invest to create social benefit (growth of society) and financial benefit (growth of the market). In 2023 it was clear that social benefit required more investment in critical thinking. Alas, the financial returns are found elsewhere. Unfortunately this is a norm in industry: social benefits rarely align with financial ones (see figure 5), and what makes it worse is that the AIs I’ve tested seem to have a bias toward financial benefit. Try to avoid using AI for policy unless you intend to break down cohesion in society.
On the subject of diversity, China has once again subdivided, invested in and directed its industry to compete in different spaces, whereas in the West we tend to rely solely on market competition and often face consolidation to a few norms. In the UK, we also have a concerning reliance on Palantir.
In summary, in the UK we seem to be leaving our ability to reason to chance, to the market, and to the great and good providing guardrails. That has terrifying implications for sovereignty.
On Sovereignty
When we discuss sovereignty in the physical world, we usually talk about the three states of competition: conflict, co-operation and collaboration. We normally describe our borders, the land which we must protect and where we will almost certainly come into conflict with others (see figure 6).
This applies to all the other landscapes we compete in, not just territorial but economic, technological, social and political. The difference is that in those other landscapes we don’t have maps and are forced to rely on storytelling such as “The importance of data for digital sovereignty”.
In 2015, at the UK DVLA, we mapped out the automotive industry and, using basic economic patterns, forecast the future for 2025. In that map we identified how many of the components of automobiles were becoming industrialised, how the automotive industry would introduce intelligent agents, and how it would create digital subscription models. A clear threat was the introduction of inequality into the transportation system through digital subscriptions, and its impact on natural disaster response. However, that’s a discussion for another day.
What we also talked about was the embedding of competing values in the simulation models (think training data) for those intelligent agents. It was clear from the map, for reasons of sovereignty, that we would need borders around this (see figure 7).
If you go all the way back to figure 1, then our culture, our collective, is partially defined by the behaviours and values it has, along with the memory — the symbols, the rituals and the stories — that support this. There are other components, but allowing our values to be changed unwittingly is simply a surrender of our society. Yes, we have done this to others through art (a representation of that memory) via books, films and, more recently, video games. You didn’t think Hezbollah produced AAA first-person video games because they are keen gamers, did you?
These are things we need to discuss and protect where necessary. The discussions on language, medium and tools are not esoteric; they are about how we reason about our world. Understanding sovereignty in our technological, economic, political and social landscapes is just as important as the territorial kind. All these topics are connected.
Alas, we rarely have that discussion and, when we do, it’s in story form and there is rarely a map to be seen. The discussion of digital sovereignty and its conflation with data is little better than the business-book discussion on culture.
Except … in China.
On China and the US
I first came across this difference in reasoning between 2014 and 2015, when looking at the question of whether China could dethrone Silicon Valley. After mapping out a vast number of industries, it was clear China was developing an advantage (see figure 8), but despite my presenting this in Washington, the response in the US was always that “Silicon Valley would out-innovate”.
There was that terrible hubris again, on a par with the hubris of believing humans would deliberately create AGI rather than it emerging accidentally.
What underpinned China’s growth was its understanding of the components within the supply chain and its directed encouragement of its own industries to exploit this. Of course, this wasn’t new, as can be clearly seen in the import/export ratios between China and the USA between 1993 and 2013 (see figure 9). China was constantly climbing the chain of components and had been doing so since the days of Deng Xiaoping.
There were many causes of this difference and, since this is a post rather than a research paper, we don’t have time to go through them all. I will simply note that one of those causes stems from education and our understanding of economic theory. To keep things simple, I will stick within Western philosophy and note how US and European economists have different perspectives on the same economic writings (see figure 10).
The problem in the US stems from an excessive belief in the economic model behind the “Washington Consensus” (liberalise, deregulate and privatise) and the framing of China in terms of that model. China plays a different game, one where the market is seen as a tool for society rather than the reason for society. It uses a pragmatic, mixed economic model best described by Deng Xiaoping’s line that “It doesn’t matter if the cat is black or white, as long as it catches mice”. The Confucian concept of Ren and common prosperity are also integral parts of this “Beijing” model.
In stark contrast to both Confucian ethics and China’s common prosperity doctrine stands the worst excesses of the “Washington” model that are described in Ayn Rand’s philosophy of objectivism. This rejects collectivism in all its forms and elevates individualism and individual rights to the highest moral principle.
I did warn in that 2015 report about the dangers of relying on our perception of how China operates rather than understanding how China actually operates. I warned against trying to play trade tariffs with such a competitor (especially when you are blind to the supply chain) and referenced the past prognostications of Trump, then a newly announced presidential candidate, as unhelpful. What was needed was more nuance and a change of model. That didn’t happen.
If we fast-forward to today, then the US appears to be following the path of objectivism and individualism as outlined by William Rees-Mogg and James Davidson in The Sovereign Individual. Alas, whilst the authors envisioned a technological utopia where cryptographic tools would liberate individuals from state control, enabling unprecedented financial autonomy, that is fantasy, not reality. I’ve written before about that reality and the horror of bitcoin.
Three decades after Rees-Mogg and Davidson’s book, the rise of cryptocurrency, rather than democratising wealth, has exacerbated inequality, fostering a crypto-feudalist hierarchy in which a minority of “digital lords” extract value from a disempowered majority. However, the US is not only following the path of objectivism, it’s accelerating towards Elysium.
As a word of warning, if you currently have less than $1bn in assets and you live in the US, then on the current trajectory your descendants are going to be Matt Damon (from the film Elysium) getting the living daylights kicked out of them by robots as they wait for a bus for a dead end job with no hope, no future and no safety standards. That’s assuming they are the lucky ones. So sad. The US dreamed of being the Star Trek Federation. That’ll be China.
In Summary
I wish I could give more cheer but, alas, the level of play in the West across these topics tends to be poor. All I will say is: keep a close eye on China, learn from how it treats these topics, and maybe try supporting the open source AI efforts that China will continue to make.
Conflict is a very expensive form of competition and should only be used in special cases. If you don’t understand your landscape (which holds for most of the West across economic, technological, social and political spaces), then it would be far wiser to hope for collaboration and at least use co-operation. Use such efforts to learn about the space, and keep it open as much as possible.
Somehow, I suspect hubris hasn’t finished with us yet.