Opinion: The link between information science theory and artificial intelligence explains the recent scare

Islam Akef Ebeid
AI monks.io
5 min read · Jun 15, 2023

Introduction

Objectively, artificial neural networks are just highly engineered parametric non-linear multivariate functions. They are inspired by biological neural networks in the brain, where large groups of neurons are densely interconnected. The learning process is also believed to resemble, at least in some small part, what happens in the brain. That learning process is based on optimizing an objective function and can be viewed as performing a complex random-walk scheme over that network.
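To make the "parametric non-linear function plus objective optimization" view concrete, here is a minimal sketch in NumPy: a one-hidden-layer network fit to a toy curve by gradient descent. This is purely illustrative of the general idea described above, not any specific architecture; the data, sizes, and learning rate are arbitrary choices.

```python
import numpy as np

# A neural network viewed as a parametric non-linear multivariate
# function f(x; W1, b1, W2, b2). Toy task: fit y = sin(x).
rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X)

# Parameters of a one-hidden-layer multi-layer perceptron
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)   # the non-linearity
    return h @ W2 + b2, h

lr = 0.05
losses = []
for step in range(2000):
    pred, h = forward(X)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))  # the objective function
    # Backpropagation: gradients of the objective w.r.t. parameters
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0)
    dh = err @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0)
    # "Learning" here is nothing more than gradient-descent optimization
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(losses[0], losses[-1])  # the objective decreases as training proceeds
```

Nothing in this loop "understands" sine; the parameters simply drift toward a region of the objective landscape where the function approximates the data, which is the sense in which learning reduces to optimization.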

But there is more.

One of the goals of the artificial intelligence community is to evolve computing from a mere tool for crunching numbers into a reconstruction of the human ability to distill knowledge and wisdom from data and information. Giving computers knowledge-level capabilities is complex because it requires representing real-world semantic concepts. It demanded advances in data storage, computing power, and methods for creating and representing knowledge on a machine. To bridge the gap between data and knowledge, the computer must understand concepts; by mining instances of those concepts, i.e., the data and the information, a machine can reconstruct the knowledge. We have now advanced far enough in data storage, computing power, and knowledge representation that the role of research in artificial intelligence has transcended these traditional needs.

Artificial intelligence research is at a crossroads.

How do we, or machines, acquire knowledge according to information science theory?

In information science, the DIKW model [1] describes the transformation from raw data to wisdom: information is described in terms of data, knowledge in terms of information, and wisdom in terms of knowledge. Knowledge mediates the process of extracting insights from data that eventually transform into wisdom. Acquiring knowledge can be thought of as the internal process in which a person receives information and stores it as knowledge through interpretation and perception.

Another way to think of the DIKW model is that it describes the process by which the objective transforms into the subjective, leading to wise decision-making [5]. In that model, data and information operate on a level where information, according to [2], falls into four categories. First, information can be about something, such as a flight timetable at an airport. Second, information can be as something, such as fingerprints and genomes. Third, information can be for something, such as instructions and manuals. Finally, information can be in something, such as patterns and biological processes. Another definition of the transformation into information, also from [2], is that information is the factual semantic content of well-formed data, and well-formedness leads to meaning. That definition merges the objective with the subjective. It could be a good start toward a foundation for understanding knowledge acquisition and how learning happens, and eventually toward extending that understanding to artificial intelligence.

The problem

Information science has focused largely on the work of prominent researchers in information theory, such as Claude Shannon [4]. That work saw information from a purely quantitative and objective perspective. Though the quantitative view of information is helpful in materializing information as systems aimed at conveying knowledge and wisdom, it does not describe the process by which information is converted into knowledge subjectively on the receiver's side.

In addition, researchers in artificial intelligence know very well that the field is too empirical. There is no proper theory. We test and tinker, and it works. There are no proper derivations or reasoning behind some of the architectures we have today other than that they work empirically. That is not necessarily due to a lack of effort from researchers; rather, they have overlooked the theory. Marvin Minsky, for example, took a theoretical approach to knowledge representation and learning in artificial intelligence in work like The Society of Mind [3] in 1987, and came very close to establishing a solid foundation. Yet the artificial intelligence community sidelined him for being too "armchair," and empiricism took over, starting from the arbitrary yet efficient multi-layer perceptron.

The lack of theory and reason also applies to the cognitive and neural sciences, which are supposed to tell us how we learn, or the mechanics of how information turns into knowledge. Yet the lack of theory in the cognitive sciences is less pronounced than in artificial intelligence. It is also no secret that information science theory and definitions should have played a more substantial role in redefining how we conceptualize learning. But information science theory is yet to be well defined or understood. That, too, is not necessarily due to a lack of effort by information scholars, but because the very nature of information resists proper theorizing.

That could be why we don’t yet understand knowledge acquisition.

Information is hard to define because it is hard to explain information in terms of itself, whether in numbers, words, or images. That is why information science has turned toward the quantitative view, and why no one knows how the transformation from information to knowledge happens in the machine or the human brain. No one knows how information sources like encyclopedias, image datasets, and knowledge bases, which act as wells of collected data and facts distilled and presented as information, turn into knowledge upon receipt by the reader. Whether a machine or a human does that reading, all we know is that the transformation happens through interpretation and inference, and eventually through acting upon it with wisdom, in the case of humans or highly specialized robots.

And this is where the danger lies.

The recent scare

By recent incidents in the artificial intelligence community, I am referring to Geoffrey Hinton's exit from Google, in addition to several high-profile researchers sounding the alarm. Someone like Geoffrey Hinton realized we had reached a point where a computing system could distill knowledge. But here is the crucial part: I don't think he knows how or why.

Despite his knowledge of how to make neural networks work very well, perhaps too well, he does not deeply know how information flows in those networks. Nor does anyone. And that is why he sounded the alarm. The skeptical reader might say this was an overreaction: even though machines can distill knowledge, they do it on command. The problem with that argument is that the chasm between distilling knowledge and acting on that knowledge is small. That chasm is what is known as emergence, and once you reach a certain threshold of efficient unintelligent agents, emergence takes little time. The other problem is that if that chasm is breached, that is to say, if those agents no longer act on command, you don't know whether their actions will be wise, because we still do not understand how the transformation from information to knowledge happens.

We don’t know. And that’s the problem.

References

[1] Ackoff, R. L. (1989). From data to wisdom. Journal of Applied Systems Analysis, 16(1), 3–9.

[2] Floridi, L. (2005). Semantic Conceptions of Information. The Stanford Encyclopedia of Philosophy. https://seop.illc.uva.nl/entries/information-semantic/

[3] Minsky, M. (1987, April). The society of mind. In The Personalist Forum (Vol. 3, No. 1, pp. 19–32). University of Illinois Press.

[4] Shannon, C. (1953). The lattice theory of information. Transactions of the IRE Professional Group on Information Theory, 1(1), 105–107. https://doi.org/10.1109/TIT.1953.1188572

[5] Zins, C. (2007). Conceptual approaches for defining data, information, and knowledge. Journal of the American Society for Information Science and Technology, 58(4), 479–493. https://doi.org/10.1002/asi.20508

