Web 3.0 has fatal flaws: Here is how to fix them.

Knowledgecoin (Knowledgecoin.io) · Sep 13, 2022
Evolution of the Web

The Future is Web 3.0

The wizards of technology tell us that Web 3.0 is the next step in the evolution of the internet.

This Brave New World will be decentralized, provide anonymous trust, and allow a permissionless method of accessing data.

Even more remarkably, AI (Artificial Intelligence) and Machine Learning programs will prowl the web extracting, not just information, but meaning from semantic web pages.

The Problem

Problem 1: Lack of required meaning

But when programmers build the extraction of meaning into their designs, they cross over into the realm of philosophy: What do words mean? Who is to say? How do we know what we know?

Moreover, we live in a cultural moment in which incredibly polarized ideological, political, and socioeconomic forces are in power struggles over the very meaning and proper use of terms like man, woman, baby, racism, and violence.

Even if one could make the web "machine readable and machine understandable," how could one extract reliably impartial, unbiased, or objective meaning from human-generated internet content?

This concern is doubly true when the same words are being used to mean radically different things.

Problem 2: Meaning does not equate to Knowledge

Establishing the meaning of a Semantic Web page is certainly progress of a sort. But the very next step must be to validate which meaning is actually true.

For instance, if one semantic web page contains the phrase, “Elvis was killed in New York City” and another says, “Elvis was not killed in the Big Apple”, it is useful to understand that they are both talking about New York City.

But Elvis was not killed in New York City. So although we understood the intended meaning of both pages we still failed to attain useful, validated knowledge.

What is required is to build the decentralized validation of knowledge right into the very fabric of Web 3.0. The right protocol would provide transparency into all definitions, logic, and evidence to let users confirm all meaning and knowledge for themselves.

Problem 3: Lack of required decentralization

The stated premise of Web 3.0 is to fix the issues of a centralized Web 2.0 that is controlled by a handful of gatekeepers.

It would be a total failure of its core mission if Web 3.0 were to deliver on some new functionality but critical pieces of infrastructure ended up being controlled by only a few centralized parties.

Not only does the current Semantic Web design seem insufficient to achieve the full goal of finding meaning, but critical pieces of the Web 3.0 design seem to be left in the hands of centralized parties such as universities or industry coalitions.

Twitter’s Jack Dorsey raised just such a concern.

Remember, the Semantic Web will help to adjudicate what is meaning and therefore what is truth. Any serious Web 3.0 effort must be built in a decentralized way from the ground up.

Thus, a decentralized (blockchain) method to implement and maintain a Semantic Web is critical to the success of Web 3.0.

Problem 4: The feedback loop for Artificial Intelligence is going to be slow

A problem with most Web 3.0 plans is that the Semantics, Logic Rules and Artificial Intelligence are very loosely coupled.

In the current designs, it is completely unclear how the AI will instantly adjust to feedback when humans discover flaws in the semantics or logical rule sets.

All else being equal, if one provides instant feedback to an AI process, it will be able to rapidly correct its errors and quickly scale in power for its given task.

On the other hand, if one has a cumbersome and inefficient feedback process, the power of the AI will be greatly slowed and reduced.

What is required is a protocol to elegantly scale up decentralized human involvement when AI models require immediate insight and scale down the humans when the AI Models are able to take back that task.

The Flaws in Web 3.0

These architectural and philosophical gaps amount to a set of glaring flaws that threaten to undermine much of the value of Web 3.0.

Semantic Web is a single point of failure

Just one look at the light blue “Semantic Web” circle in the provided diagram should be enough to raise some alarm bells. In order for Web 3.0 to be viable, a high-functioning Semantic Web is an absolute requirement.

Indeed, you don’t need to be a Web 3.0 architect to realize that the Semantic Web is a single point of failure!

When we add in the worry of a cumbersome AI feedback loop, these factors create serious concerns for the achievement of Web 3.0’s lofty goals.

As it turns out, having programmers create philosophy can be just as dangerous as having philosophers code blockchain.

What is needed is a fusion of the two.

How it is supposed to work: Semantic Coding and the Semantic Web

Semantic coding was the earliest feature identified as constituting what we began to call "Web 3.0" or "the Semantic Web".

While these terms are not synonymous, most people still mean some form of the Semantic Web when they refer to Web 3.0.

Semantic coding is a very expensive, complex, and time-consuming project. Given the importance of semantic coding, particularly the dangers it presents, we need to examine semantic coding in detail.

It may be easiest to understand what semantic coding is by first seeing what it is not.

John Searle, a noted philosopher of mind and language, brought this out in a powerful thought experiment he devised to argue against the possibility of “strong AI”, the idea that an artificial intelligence could genuinely understand anything.

Thought experiment: the “Chinese Room”.

Imagine a man inside a kiosk who speaks no Chinese but has a very large list of Chinese symbols in two columns, one labeled "input" and the other "output".

His instructions are to receive input cards through a slot from Chinese speakers outside the kiosk, who write questions on the cards in Chinese and submit them through the kiosk slot. He is to search the input list for a match, and then output the Chinese symbols listed in the adjacent output column back through the slot, as answers to the inputted questions.

Searle argued that if the two lists were sufficiently well developed, users of the kiosk would erroneously think the kiosk person understands Chinese. But he clearly does not.

Searle concluded that “strong AI” — artificial intelligence matching the understanding of the human mind — is impossible.

It is only the complex programmed functionality of our instructions — like the two-column list of Chinese inputs and outputs and their matching operation — that appears intelligent.
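The Chinese Room can be sketched in a few lines of code; the question/answer pairs below are illustrative placeholders, not part of Searle's original example. The "room" answers purely by symbol matching, with no understanding of what the symbols mean:

```python
# A minimal sketch of Searle's Chinese Room: input symbols are matched
# against a lookup table and the listed output symbols are returned.
# Nothing here "understands" Chinese; it is pure pattern matching.

LOOKUP_TABLE = {
    "你好吗？": "我很好。",      # "How are you?" -> "I am fine."
    "你会说中文吗？": "会。",    # "Do you speak Chinese?" -> "Yes."
}

def chinese_room(input_card: str) -> str:
    """Search the input column for a match; return the adjacent output."""
    return LOOKUP_TABLE.get(input_card, "？")  # unknown input -> "?"
```

However well developed the table, the function never moves beyond lookup — which is precisely Searle's point.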

We need not enter the “strong AI” debate here.

The point is that Web 1.0 and 2.0 function like the Chinese Room: their algorithms take our search terms and quickly return pattern-matched results, but do so without any actual understanding of those terms.

Semantic coding, however, categorizes every bit of information in the network, placing every informational item into a taxonomy of conceptual categories, such as letter, word, noun, verb, preposition, adjective, punctuation, sentence, etc.

To take a simple example, a word like “man” in Web 1.0 is just a string of three otherwise meaningless characters, “m”, “a”, and “n”, in that sequence.

In the Semantic Web, however, the system will need to encode "man" with all of its multiple layers of meaning elements, such as adult male Homo sapiens.

Most programmers stop here and tell themselves, “Semantic Web Achieved. Job well done.”

But this first Semantic step is merely the tip of the iceberg of what is actually required. Each of these terms requires additional definitions to be specified in the system.

For example:

  1. Homo sapiens is coded as a specific primate,
  2. primate as a mammal,
  3. mammal as a warm-blooded vertebrate,
  4. vertebrate as a spined organism, etc.

This process continues for every term in the conceptual structure extending into every meaning element in the noun, man.

We must do this for every term in the language, and eventually, in every language.
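The definition chain above can be sketched as a small data structure. This is a hypothetical illustration, not any standard Semantic Web encoding: each term maps to its parent category plus its differentiating meaning elements, and unfolding a term walks the chain of categories it inherits from:

```python
# Hypothetical sketch of a definition tree: term -> (parent category,
# differentiating meaning elements). Entries follow the article's example.
DEFINITIONS = {
    "man":          ("Homo sapiens", ["adult", "male"]),
    "Homo sapiens": ("primate",      ["of the species sapiens"]),
    "primate":      ("mammal",       ["grasping hands", "large brain"]),
    "mammal":       ("vertebrate",   ["warm-blooded"]),
    "vertebrate":   ("organism",     ["spined"]),
}

def unfold(term: str) -> list[tuple[str, str, list[str]]]:
    """Expand a term into the full chain of categories it inherits from."""
    chain = []
    while term in DEFINITIONS:
        parent, elements = DEFINITIONS[term]
        chain.append((term, parent, elements))
        term = parent
    return chain
```

Even this toy tree hints at the scale of the task: every term on the right-hand side must itself be defined, recursively, for every term in the language.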

A full tree of definitions is only the beginning

In addition to full definitions, the Semantic Web must encode all rules of well-formed meaning combinations — grammar and syntax — that will allow the AI and Machine Learning to evaluate claims, such as “Rick is a man”.

The Semantic Web must be sufficient for our AI to interpret complex sets of claims, such as arguments, and break them down into their atomic meaning elements, and vice versa.

This process creates a meaning taxonomy of increasing complexity, accounting theoretically for every atom of conceptual and logical meaning, and all their logically possible meaning combinations, essentially informing Web 3.0 of the meaning of everything.
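A claim like "Rick is a man" could then be evaluated against such a taxonomy. The sketch below is an assumption-laden toy, not a real logic framework: a crude grammar rule parses the claim into subject and category, and a transitive "is-a" walk checks it:

```python
# Hypothetical sketch: a grammar rule plus a transitive "is-a" check,
# the kind of machinery needed to evaluate a claim such as "Rick is a man".
# The facts and names below are illustrative only.
IS_A = {
    "Rick": "man",
    "man": "Homo sapiens",
    "Homo sapiens": "primate",
    "primate": "mammal",
    "mammal": "vertebrate",
}

def evaluate_claim(claim: str) -> bool:
    """Parse 'X is a Y' and check it against the taxonomy transitively."""
    subject, _is, _a, category = claim.split(" ", 3)  # crude grammar rule
    term = subject
    while term in IS_A:
        term = IS_A[term]
        if term == category:
            return True
    return False
```

Note that "Rick is a mammal" succeeds only because the chain of definitions is transitive — exactly the kind of logical rule the Semantic Web must encode everywhere.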

Put another way, the Semantic Web must enable the metaphorical Chinese Room to understand everything we put into it, everything it contains, and everything it outputs.

Unlike the Chinese Room, Web 3.0 should be able to understand and be understood.

Most Web 3.0 plans underestimate the heavy philosophical work ahead

The current Web 3.0 plans do take into account a strategy for an "ontology" of concepts and a "framework of logic rules" that will need to be applied. (An "ontology" is a set of terms in a subject area that defines the relations between those terms.)

Semantic Web Technologies

But the viability of this plan lies in the seamless integration of the Ontology (which is being standardized) and the Logical Framework and Rules (which are still experimental). Also unclear is how we are going to adjudicate and pivot the ontology at scale.

Indeed, where optimizing the ontology and logic of Web 3.0 will require the nimble adjustments of a hummingbird, we appear to be creating a system with all the agility of a battleship.

In short, the data structures that will underlie a fully Semantic Web have yet to be fully created and proven to be agile.

In addition, no elegant solution exists to provide the constant training and feedback to the AI that will drive the majority of semantic and logic rules.

Thus, our requirement is...

  1. Find an elegant way to fill the gaps that AI and Machine Learning cannot yet fill, and efficiently train and coach them to improve over time.
  2. Find a way to bring decentralized meaning to the blockchain.
  3. Fuse Philosophy and decentralized Blockchain into an alloy that is stronger than either alone.

Solution: Web 3.0 must possess the inherent ability to validate knowledge at scale.

The Knowledgecoin protocol is our blockchain solution to this seemingly intractable set of problems.

The protocol is designed to crowd-source critical analysis and validate knowledge on the blockchain in a decentralized way.

The Knowledgecoin ecosystem has the unique quality that all concepts, logic, reasoning, and context can be examined and self-verified by every member of the community.

Knowledgecoin helps to filter out overloaded concepts, faulty logic and reasoning, context-dropping, and other forms of misinformation and deception.

In the Knowledgecoin network:

  1. Concepts, logic, and claims are subject to critical analysis in a reliable and secure manner.
  2. Cryptographic proofs guarantee adjudication is reliable and data remains available and unchanged over time.
  3. The network achieves staggering economies of scale by allowing anyone to participate as a critical-analysis provider and monetize their available time to engage in reasoning.
  4. Human analysis steps in where AI or Machine Learning falls short, trains the machine algorithms to become more powerful, and acts as a human check against machine-driven conclusions.

In short, the Knowledgecoin protocol provides a decentralized knowledge creation, validation, storage, and retrieval service ecosystem without the need for centralized coordination.
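The adjudication idea can be illustrated with a toy sketch. This is our assumption of how such a mechanism might look, not the actual Knowledgecoin protocol: jurors vote on a claim, a supermajority threshold decides acceptance, and the verdict is content-addressed by hash so later readers can verify it was never altered:

```python
# Hypothetical sketch of crowd-sourced adjudication: a supermajority of
# juror votes accepts a claim, and the verdict record is hashed so any
# tampering is detectable. The 2/3 threshold is an illustrative choice.
import hashlib
import json

def adjudicate(claim: str, votes: list[bool], threshold: float = 2 / 3):
    """Return (accepted, digest): the verdict and its content hash."""
    accepted = sum(votes) / len(votes) >= threshold
    record = {"claim": claim, "votes": votes, "accepted": accepted}
    payload = json.dumps(record, sort_keys=True).encode()
    return accepted, hashlib.sha256(payload).hexdigest()
```

Storing only the digest on-chain keeps the record immutable while the full reasoning remains inspectable off-chain.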

Validated Knowledge fixes the issues with Web 3.0

Decentralized Concepts (DeBabel)

First and foremost, solving the problems of Web 3.0 requires a decentralized "Universal Concept Translator" that can extract reliably impartial, objective meaning out of all language.

This meaning must be extracted while including, preserving, categorizing, and translating all user populations’ dialects and sub-dialects, so users using different terms to mean the same thing or the same terms to mean different things can transparently communicate with and understand each other fully.

Knowledgecoin.io offers just such a "Universal Concept Translator", which we call "DeBabel".

DeBabel is a semantic coding system that enables users everywhere to access, analyze, translate, and encode meanings which are confirmed via a decentralized jury process.

Decentralized Logic (DeLogic)

Knowledgecoin next processes claims through DeLogic, its decentralized logic engine, which validates the grammar, syntax, and logic after content moves through DeBabel.

It is this rule encoding of well-formed meaning combinations — grammar and syntax — that will allow AI and Machine Learning to evaluate claims, such as our previous example, “Rick is a man”.

This jury process breaks information down to its smallest atomic elements and then recombines it via its transparent algorithms to build the conceptual taxonomies and rule sets that will constitute an evolving decentralized Semantic Web.

Decentralized Reason (DeReason)

Output that makes it through DeLogic then passes into DeReason, a decentralized context-checking layer that runs surviving claims through an analytic filtering process checking for context dropping and related issues.

Decentralized Knowledge (DeKnow)

Surviving claims then pass into DeKnow, a decentralized knowledge validation process, which checks the claims that survive DeLogic and DeReason against its knowledgebase for consistency with established facts.

Decentralized Truth (DeTruth)

Whatever passes all these filters enters DeTruth, which serves as its decentralized blockchain-backed repository of fully validated truth claims.

DeTruth promises to be not only better than the Chinese Room, but also the intelligent, unbiased, transparent, trustless, permissionless, ubiquitous, universally accessible, immutable library of decentralized validated knowledge that understands the meaning of every bit of its information and of every user’s search requests.
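The five layers described above form a sequential filter chain. The sketch below is a bare structural illustration with stub predicates standing in for the decentralized processes; the stage logic is entirely our invention for demonstration:

```python
# Hypothetical sketch of the layered pipeline: a claim survives only if it
# passes every stage in order; survivors land in the DeTruth repository.
# Each stage is a stub standing in for a decentralized jury process.
def debabel(claim):  return claim.strip() != ""   # meaning resolved? (stub)
def delogic(claim):  return " is " in claim       # well-formed claim? (stub)
def dereason(claim): return True                  # context intact? (stub)
def deknow(claim):   return "not" not in claim    # consistent with facts? (stub)

PIPELINE = [debabel, delogic, dereason, deknow]
detruth: list[str] = []  # validated-claim repository

def process(claim: str) -> bool:
    """Run a claim through every filter; store it in DeTruth if it survives."""
    if all(stage(claim) for stage in PIPELINE):
        detruth.append(claim)
        return True
    return False
```

The design choice worth noting is that each layer only sees claims that survived the previous one, so the expensive later checks run on a progressively smaller set.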

Conclusion

Web 3.0 deserves all of the excitement and attention it receives. We are all excited about the potential advancement that comes from a proper union of blockchain, the Internet of Things (IoT), and Artificial Intelligence.

But it is important to temper our excitement with one sober realization. In order for Web 3.0 to meet even the most mild of expectations, a viable, high-functioning Semantic Web is an absolute requirement.

What is required is a philosophically sound blockchain protocol that allows us to crowd source critical analysis and find both meaning and knowledge.

It is only through such a process that we can hope to reliably train AI and Machine Learning to achieve the levels of scale where profound insights can occur.

We humbly put forward the Knowledgecoin blockchain protocol as an elegant answer to this thorny problem.

Web 3.0 represents a profound chance for humanity to wrest back knowledge from the grip of the powerful.

Unlike the hollow promises of Silicon Valley, the Knowledgecoin vision represents a true decentralized utility that offers immutable, validated meaning and knowledge available to all people.

About the Authors:

Rick Repetti: Professor of Philosophy at CUNY, Vice President at the American Philosophical Practitioners Association (APPA), and Chief Philosophy Officer at Knowledgecoin.io.

Mark Gleason: Chief Enterprise Architect, Venture Capitalist, and Board Member at Knowledgecoin.io.
