OpenAI Becomes “ClosedAI”: GPT-4 & Potentially Dangerous Emerging Abilities

Starting with the architecture of the new model released by OpenAI, we propose a reflection on the social responsibility that comes with creating AI systems.

GEN AI
6 min readMar 31, 2023

With the release of GPT-4, there is an increasing sense that OpenAI is becoming “ClosedAI.” Why is this important, and at the same time troubling, shift? What do we know about GPT-4 and its impact on society?
In this issue, we will review the case of GPT-4, explain the technical features of this new generative model, and talk about emergent abilities, scalability, and the social responsibility of companies conducting artificial intelligence (AI) research.

GPT-4 can be considered the big brother of ChatGPT. In fact, the characteristics of the two models are very similar. The architecture is almost identical and the training, as far as we know, is similar. Although OpenAI has announced the release of the technical papers, along with the release of GPT-4, what we actually have are nothing more than blog posts and white papers. The white paper, i.e., what OpenAI calls a “technical report,” has created a lot of noise in the scientific community because there is no technical reference at all, but only a half-hearted explanation of how GPT-4 has been made more secure.

It is interesting to ask whether these decisions, of which the publication of the white paper without technical references is only the culmination, are more of a political manifesto, reflecting OpenAI’s decision to become from a research laboratory to a product company. Before discussing the complexity of this privatizing movement, from a social point of view, we will focus on the technical aspects of this new model.

The architecture of GPT-4 is similar to that of its predecessors: it is a transformer, probably decoder-only, and does next-token prediction. If one of the limitations of ChatGPT was the input length (4,000 tokens), GPT-4 has a context length of 8,192 tokens (there is also a limited-access version of 32,768 tokens). Another new feature would be multimodality, GPT-4 in fact accepts prompts in both language and image formats. Although the image feature is fundamental to what multimodality is about, OpenAI does not open access to this feature to the public and does not make explicit how they handled images. Nonetheless, it emerges from the demo that in reality, GPT -4 is not true multimodality: in fact, images can only enter input and not output. The state of the art in multimodality remains Once-for-All (OFA).

The figure above shows sections of the architecture of GPT models. The foundation remains GPT-3, to which is added a reinforcement learning layer (RLHF) that allows the model to follow instructions. Finally, we have the conversational part, which compared to ChatGPT, seems to be more performant.

As for the model’s training data, we can only speculate that they are similarly used to those on which LLaMA is trailed: a number of datasets ranging from the famous Common Crawl (also used for ChatGPT), Wikipedia for factual representation, to Git Hub for programming skills. As for the size of the model and training, we can expect it to be similar to, or slightly larger than, GPT-3, following the proportions of the scaling laws. According to scaling laws, computational capacity, data and the number of parameters should always follow linear proportions.

If the architecture is similar to previous models, what is the reason for the higher performance of GPT-4? It seems to be all due to the pre-training phase (about which, however, nothing is known). Evaluation of GPT-4’s performance is conducted on a variety of examinations, including academic examinations, such as the bar exam.

As can be seen in the graph, the model is tested on typically human abilities, and one wonders whether this is actually the best way to compare artificial intelligence with human intelligence. By the same token, however, these tests also provide a tangible metric for a more realistic understanding of the performance of the AI models in question. GPT-4 outperforms previous models.

In the last issue, we discussed the fact that models with a similar structure to GPT-4 and PaLM-E are increasingly approaching the goal of general artificial intelligence (AGI). After examining the architecture of GPT-4, concerns may arise that the emerging capabilities of the model may pose a threat to society. Specifically, what impact will this model’s specific emerging capabilities have on people?

Emergent abilities are defined as all those features that have not been explicitly programmed and that emerge from the model unexpectedly and autonomously. Some of the abilities observed so far can be considered as “positive,” or at least “harmless”-I am thinking of chain-of-thought, the ability to translate, and theory of mind. By the same token, dangerous behaviors could also emerge, with not insignificant social impact. In the OpenAI white paper, this aspect is tested in a paragraph entitled: Potential For Risky Emergent Behaviors. The experiment that is conducted involves connecting GPT-4 to a console, with the goal of observing how freely GPT-4 acts toward the latter (free in the sense of without human input from outside). The experiment would provide insight into both the level of understanding GPT-4 acquires and the generations it produces, that is, what actions it chooses to do. Why is it important to know what actions it would take? Because one of the “dangerous” actions that GPT-4 might decide to take is to change the access codes to the console servers, to give an example. Unfortunately, we are not aware of the results of this experiment because they are not public.

OpenAI says it cannot communicate information about neural network size, data points, and training for competitive reasons. However, this brings with it a number of problems that have to do with the centralization of power, rather than moving toward democratizing research. Although OpenAI claims to have spent six months making GPT-4 safer, there is no way to verify and replicate any of these results. One wonders what the scientific value is of claims that cannot be verified. Assessing the safety of a model is not a binary action (safe/unsafe), but a process during which one must observe the model at 360', understanding emerging features and abilities. Only then can relationships between data points-model size-training and outcomes be imputed. In the society-AI relationship, one must always keep in mind that: new capabilities always bring a new set of new complications. Assessing the societal implications of AI models should be a societal issue, precisely, and not a private one.

Anthropic in Predictability and Surprise in Large Generative Models proposes some possible solution directions that research groups and companies should follow in order to continue AI development in a responsible way.

  1. Reduce asymmetries between the private sector and academia
  2. “Red teaming” of models: i.e., developing ways to explore model inputs and outputs with the aim of predicting damage before distribution
  3. Specialized government structures and government (legislative) interventions.
  4. Improve the tools available for model evaluation
  5. Improve understanding of emerging skills

--

--

GEN AI

Hi I'm Sofia and this is GEN AI the newsletter where every Thursday I publish a generative AI themed issue !