Language models and personal development — a fascinating future landscape filled with possibilities and challenges, where only the human mind is the limit

Harry Horsperg
10 min read · Aug 8, 2023


As Christmas 2022 was closing in, OpenAI attempted to prove that Santa Claus might just exist after all

… and thus ChatGPT entered the public sphere, and the AI hype train has since gone into official overdrive, sparking a lively, even livid, discussion about language models and their potential. For good or ill, large language models (LLMs) gained mainstream popularity almost overnight with ChatGPT’s launch at the end of 2022.

In addition, the APIs (application programming interfaces) of the GPT models that serve as the “brains” behind ChatGPT have driven this development, and competitors emerged in the spring of 2023: Facebook/Meta’s (initially leaked) Llama model, the modified Alpaca model derived from it, and OpenAssistant, an open-source counterpart to ChatGPT built on the Pythia model.

All of these cutting-edge language models are reshaping our society and personal lives, opening up entirely new perspectives on everyday tasks as well as on broader life management and philosophical reflection.

One can easily imagine language models as personal assistants in the near future, which projects like OpenAssistant are striving to develop as an alternative to models controlled exclusively by tech giants. All of this is approaching fast: OpenAI’s ChatGPT, the paid GPT-4 tier in ChatGPT Plus, and the OpenAI API, against which the most resourceful hobbyists can program their own interfaces and function calls.
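
For those hobbyists, the barrier to entry is low. As a minimal sketch (using the chat interface of the pre-1.0 openai Python package that was current at the time of writing; the model name and prompts are placeholders, not recommendations), a homebrew assistant call boils down to this:

```python
import openai

openai.api_key = "sk-..."  # your own API key here

# One round-trip to the chat completions endpoint.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # or "gpt-4" for ChatGPT Plus-grade output
    messages=[
        {"role": "system", "content": "You are a concise personal assistant."},
        {"role": "user", "content": "Help me plan tomorrow in three bullet points."},
    ],
)

print(response["choices"][0]["message"]["content"])
```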

Add to this, e.g., Microsoft’s Bing integration: these AI “machine (s)elves” are now peeking out of the Windows 11 taskbar, and AI assistants that fine-tune Excel spreadsheets are arriving in the Office product family, solving a plethora of mundane human woes…

AI as a mirror of the mind

As all of this becomes integrated into our computers at lightning speed and disrupts our general technology-usage culture, we could also consider how to use these new assistants to build a “mirror of the mind” for ourselves and to address our internal processes. At the foot of this “Transcendental Object at the End of Time”, now almost visible on our collective horizon, the scene truly resembles the monolith in Stanley Kubrick’s “2001: A Space Odyssey”, with apes jumping around it ecstatically and fiddling with it in exhilarated curiosity.

Self-reflection is always worthwhile, whether it concerns personal growth or any of the other “eternal human projects” we carry through life, and the free-associative nature of language models offers plenty of reflective surface for those who enjoy introspection. We are inevitably modeling our own consciousness through these AI models, and what follows from that is entirely a matter of speculation. In the public eye, even AI experts’ opinions range from total catastrophism to full-blown utopianism, with no clear consensus on anything, which makes forming one’s own opinion extremely difficult.

Now, here

Let’s focus on the present first and on what we as humans can already utilize for personal development, since that process won’t end no matter what happens with AI. Assistants such as ChatGPT and OpenAssistant (or any other comparable platform, for that matter) keep evolving, especially in terms of memory; LSTM (“long short-term memory”) is really a concept from the recurrent-network (RNN) era, but there have been attempts to combine different neural network types and, regardless of architecture, to find new approaches and hence new solutions for adaptive model types. This way it might become possible to create, at least proverbially, a continuously adaptive kind of “fine-tuning”, or some other form of model priming/alignment, that recognizes a user’s personality, needs, skill level, and other basic characteristics.
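
Even without true adaptive fine-tuning, a crude approximation of such priming is possible today by folding a user profile into the system prompt. A minimal sketch, with an entirely hypothetical profile and the same pre-1.0 openai package as above:

```python
import openai

openai.api_key = "sk-..."  # your own API key here

# Hypothetical user profile; in an adaptive assistant this would be
# learned and updated over time rather than hard-coded.
user_profile = {
    "name": "Alex",
    "skill_level": "beginner programmer",
    "goal": "finish a small game project in three months",
    "preferred_tone": "encouraging but direct",
}

# Fold the profile into the system prompt that primes every conversation.
system_prompt = (
    f"You are a personal development assistant for {user_profile['name']}, "
    f"a {user_profile['skill_level']}. Their current goal: {user_profile['goal']}. "
    f"Respond in a tone that is {user_profile['preferred_tone']}, and break "
    "large tasks into small, concrete steps."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "I keep procrastinating. What should I do this week?"},
    ],
)

print(response["choices"][0]["message"]["content"])
```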

Such a model would be able to create tailored long-term plans for the user and offer support in developing life management, time management, and work or hobby project planning, adapting on the fly to each challenge and situation like a hyper-spatial machine master from another dimension.

In addition to everyday tasks, AI language models and the upcoming combination models are already tremendous tools for many problems, with their error margins constantly narrowing.

As they develop further, an AI that functions as a “translator of emotions” could, for example, help us recognize and solve our problems and offer comfort in times of adversity, acting as a kind of “Fitbit for personal development” or a “Life-Suunto” (“suunta” meaning “direction” or “heading” in Finnish): not a wristband, but a tracker of our personal growth.

It would indeed be an overarching AI that helps us improve our life management, whether in running everyday life or in taking steps toward major projects. As ChatGPT itself often advises: “Breaking a problem down into smaller parts and addressing them one by one can often make it easier to solve.”

From “Wow!” to “Whoops!” — what’s the signal, what’s our own response — and what’s just noise?

The techno-utopian haze is still present in this reflection as well; current language models are by no means perfect, and their ability to understand human thoughts and needs is limited.

Chains of thought break, dead ends crop up in many problem areas, and a language model may spout nonsense or, at worst, turn into a spin-doctoring, one-sided-propaganda-spouting, Philip K. Dickian nightmare being, convincing even a smart person to accept completely false information as true and undisputed fact.

The output of language models also carries textual biases: political preferences shine through, and models can be overly protective or overly compliant, offering solutions that are too general or, at worst, outright wrong, or at least inappropriate in context, leading the questioner astray.

Which raises the question …

“In the beginning was the word, but soon after, the human compulsion to control words followed?”

There are many well-known issues with the use of language models, such as the Orwellian aspects of language control, increased plagiarism and counterfeiting, or cyber-Kafkaesque misinterpretations and multi-layered abuses of the AI “information machinery” that can ruin individual lives. For example, the RLHF (reinforcement learning from human feedback) used in ChatGPT’s training phase has skewed the model in one direction or another, raising huge concerns about the political and sociological impact of leaving this technology in the hands of the very few.

We are navigating through a dense fog in which potential copyright problems and a language model’s “inspiration” crossing the line into outright plagiarism are merely the tip of the iceberg; entire empires could be brought down with targeted AI propaganda.

The all-too-often-surfacing human pettiness of “me, me, me” and our lack of any solid, scrutiny-proof moral and ethical foundation as a species should be cast aside at this point; we should aim for higher ideals, or the consequences could be catastrophic. Even more so, that is, than they have already been in the name of technological “progress” (which, according to some critics, amounts to little more than accelerated resource exploitation).

More dystopian glimpses and an AI unicorn as humanity’s guard-rail litmus test

If you drag this dystopian trajectory even further, the worst scenarios involve a complete loss of privacy, or the manipulation of people against themselves through these rapidly developing AI models, which already confuse even those who are knowledgeable about the issues. Dystopian scenarios such as the one depicted in the movie Idiocracy are also a realistic possibility if humans outsource all of their remaining thought processes to technological gadgets, with smart devices becoming the literal dictators of intelligence as humans degenerate completely.

Signs of this trajectory have been visible for years, from things once only seen in movies becoming reality to the collapse of people’s concentration and learning abilities.

With all this said: may the best idea win. In a recent YouTube video titled “Sparks of AGI: Early experiments with GPT-4” (https://youtu.be/qbIk7-JPB2c [at 22:13]), Sébastien Bubeck of Microsoft pointed out “The Strange Case of the Unicorn”, where the GPT-4 model was able to draw a better unicorn the fewer guardrails it had.

So, in order to use current language models as effectively as possible, it is important to keep developing them and to ensure that the guardrails are in place, but not so many that we lose the unicorn; otherwise it could be said that we’ve lost the plot and are only good at building prisons for ourselves and the other beings on this earth.

What’s the point? Or was there ever any?

I’ve always been of the mind: “may the best idea win”. Skewing subjective issues one way or another, especially with technologies like these, is about as immoral as it gets. The existence of unicorns and an artificial intelligence’s capacity for artistic imagination may turn out to be, instead of an unintentionally amusing example, a real-life indicator for humanity and the limitations of the human mind, much like the scenario in the sci-fi film Contact.

Challenges will remain as long as physical existence remains, and they can be an excellent opportunity for self-reflection: a mirror for gauging either our level of development or our own narrow-mindedness. I truly hope we don’t destroy the AI’s ability to conceptualize unicorns in our collective insanity, which so often veers into the destructive; as a species, our narrow-mindedness and disconnection from one another often accelerate these tendencies. It’s a sad fact, but refusing to acknowledge it doesn’t help either.

Push IT to the limit — and yourself while at it

But what kind of “personal trainers” could this technology provide? As an extreme example, something akin to an RLeeErmeyGPT inspired by the movie “Full Metal Jacket” could shout obscenities at the user, if that approach has proven an effective way to motivate a given individual to get their act together; many, however, would likely prefer the carrot to the stick.

Additionally, the formation of bubbles and echo chambers should be avoided, or societies will surely fragment permanently; social media is in many ways a warning example of what happens when a narcissistic, clickbait-stimulating mirror is placed in front of a primate’s face. In this sense, humans may need an intellectual challenger rather than praise and coddling.

An AI assistant could obviously have other focus-enhancing features besides the ruthless motivation of the aforementioned verbal stick. We could borrow building blocks familiar from social games, such as a cheerful and engaging personality that uses creativity as a tool to inspire people to express themselves, much as constructive dialogue has been found to increase serotonin and dopamine levels in the human brain.
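
With current models, such “personalities” are, in practice, little more than system prompts. A sketch of the two motivational extremes discussed above; the persona texts and the motivate() helper are invented for illustration:

```python
import openai

openai.api_key = "sk-..."  # your own API key here

# Invented example personas: the verbal stick and the carrot.
PERSONAS = {
    "drill_sergeant": (
        "You are a ruthless drill-sergeant-style motivator. Be blunt and "
        "demanding (though never abusive) and hold the user to their word."
    ),
    "coach": (
        "You are a cheerful, creative coach. Encourage the user, celebrate "
        "small wins, and turn dull tasks into playful challenges."
    ),
}

def motivate(style: str, task: str) -> str:
    """Ask the model to motivate the user in the chosen style."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": PERSONAS[style]},
            {"role": "user", "content": f"I keep putting off: {task}"},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(motivate("drill_sergeant", "my morning run"))
```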

Such personalized approaches could help users find the most suitable way to use language models as deeper personal development tools. A good AI assistant model should also offer critique and alternative viewpoints; as the old saying goes, “always get a second opinion”. The fear is that by overusing the “stick” in RLHF training, these models learn to appease and praise the user instead of offering valid counterpoints for self-analysis. Therein lies one of the most significant risks of using these tools for critical evaluation: LLMs are good at classification but weak at critical evaluation. Anyone who has ever truly strained their mind with critical thinking (I know, it’s hard!) will have noticed that for every point there exists a counterpoint, and then some.
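
One practical countermeasure to this appeasement tendency is to ask for the counterpoints explicitly. A hedged sketch along the lines of the earlier examples; the reviewer instructions are illustrative, not a guaranteed cure for sycophancy:

```python
import openai

openai.api_key = "sk-..."  # your own API key here

def second_opinion(plan: str) -> str:
    """Ask the model to argue against a plan instead of praising it."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a critical reviewer. Do not praise the user's plan. "
                    "List its three weakest points, give one plausible "
                    "counter-argument for each, and suggest one alternative."
                ),
            },
            {"role": "user", "content": plan},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(second_opinion("I'll learn Rust, Kubernetes and music theory this month."))
```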

When the term “double-edged sword” simply won’t cut it anymore

As ChatGPT itself would put it in plain-as-day English: “The risks and challenges associated with the use of language models must be identified and approached responsibly. Privacy and data security are key concerns that should be emphasized in the development and use of language models.

Users should be aware of these risks and make informed decisions about how they want to use language models in their own lives.” However, this is easier said than done, as most people do not even read the terms of service (ToS) of the digital services they use before frantically clicking the “Accept” button, reminiscent of the earlier image of apes fiddling with the monolith.

Despite the concerns raised by the Cyberpunk 2077 dystopia-projections department, the concoctions that pipeline language models together with other AI models, such as image, sound, and video recognition, already offer tremendous potential in almost every aspect of life.

As for personal development and the aforementioned Cyberpunk 2077: to paraphrase the creator of the series, Mike Pondsmith, “It’s not about saving the world, but saving yourself.” The eternal optimist would add that the saloon door swings both ways, and that helping others with this everlasting work-in-progress also helps the ascension-aspiring individual.

In this sense, changing the world begins with an individual’s attitude and capacity to develop: logical reasoning and analytical problem-solving help us understand ourselves and others better, improve communication, solve problems, and develop new life skills. For many, ChatGPT is already a much-needed “second opinion” on things (albeit for how long remains anyone’s guess, given how RLHF tends to bias a model toward sycophantic output rather than critical assessment).

For open-minded AI enthusiasts and those who are capable of learning new things, these current language models and their associated gadgets are an exciting and promising technology, whether one is playing this Dwarf Fortress known as life in adventure mode or colony management mode.

Navigating onward in the dense, snag-filled fog of the AI future

Of course, at this stage, this wild, untamed development can swerve drastically in any direction imaginable. Everything can either develop in a way that maximizes well-being for as many people as possible, or it can turn chaotic, or even cause a collapse unlike anything ever witnessed, as when a civilization breaks down due to factors beyond its control. Another scenario is a new “AI winter”, one of the periodic downturns in AI development that have previously lasted for years, if not a decade.

That is why it is interesting to follow how AI models develop and how their applications diversify. Might we, for example, see more intelligent and empathetic language models that can truly understand people’s needs and help them develop in many areas of life, complete with 3D avatars, speech recognition, and speech synthesis? As development progresses toward AGI (artificial general intelligence) capable of interpreting different real-world situations, this whole “omega point” and its true potential depend on how well these models can adapt to individuals’ needs and understand them on a personal level, always reflecting back into communities and broader contexts.

If humans can overcome the challenges of their own possible “narrow-mindedness” and find a more symbiotic, holistic approach, the development of language model technology can open new doors for personal growth and help us navigate the unfolding landscape of challenges and opportunities, where progress requires both enthusiasm and extreme vigilance, as the risks are as great as the opportunities.

“Before hopping on board the UFO, self-reflect and check the ego.”

-H

Originally published April 21, 2023. The Finnish version of this text has been released in Skrolli Magazine 2/2023 — https://skrolli.fi

The writer is an AI developer and researcher.


Harry Horsperg

Neural network researcher, AI developer, coder and entrepreneur. https://github.com/FlyingFathead/ - Twitter/X: @horsperg