Why we are developing a meta-semantics for human language in AI
who else? Universal Grammar is a protocol to standardize the representation of speech-based user inputs for AIs. Building on Noam Chomsky's theory of a cognitively hard-wired Universal Grammar, we describe a universal accounting principle for human language and provide a possible roadmap for building an open-source address namespace for Voice Internet.
Problem: The challenge of spoken language in AI
The problem is AI explainability: we can interpret the results of an AI's reasoning, but we cannot actually see how language is being processed. Today's AIs are algorithmic black boxes and data silos.
Hypothetically, there are 500+ different ways in human language to tell an AI that you want a pizza delivered:
“Hey Siri, order from Dominos — as usual!”
“Pepperoni, extra large, extra spicy, and a coke”
“Huuuuuuungry, hey AI, order me a pizza”
But how should AI №1 tell AI №2 about this voice command?
Speech varies from individual to individual; it is implicit and far more imperfect than written text or input by visual selection.
Today, questions about data formats are answered by the GAFA monopolies: you have probably worked with cross-platform codebases to set up custom Alexa, Hey Google, or Messenger skills (intents, bots, ..).
Or you have started running some kind of sentiment analysis in your customer care department to “listen in” on whether customers sound especially upset in hotline or email support requests.
This is relatively trivial in the case of a pizza order. But the situation changes once users want, for example, to file official government documents using only their voice.
How will consumers later verify and prove the contents of their voice inputs? Do we want AIs to keep sound records of everything?
Our voice and the way we speak are among the most personally identifiable information available. Like handwriting, speech is an individual fingerprint. Voice inputs can be analyzed for markers of, e.g., stress or physical and psychological conditions.
How will this information be exposed? Assuming that sooner or later everything will be an AI, will users be able to control which AI can access which kind of information?
Our conclusion is that Voice Internet needs a protocol. Just as Hypertext did for the original Internet, we need a new markup schema to make speech-based user commands interoperable across ecosystems and processable in a standardized way for AIs and humans.
Goal: Make Siri and Alexa talk with each other
who else? Universal Grammar is a data format that makes NLU interpretation processes transparent, explainable, and private by design.
We suggest that AIs start “to speak with each other” by reducing voice-based user inputs to a grammar syntax consisting only of “who else?” relationships:
“Hey Siri, find me somebody to go out with!”
(User 1) (Date) (who else?)
“AI, we need a new studio to rent”
(User 1) (Apartment) (who else?)
“Alexa, get us a ride to the city as fast as possible”
(User 1) (Taxi) (who else?)
The reason is “the Turing completeness of semantics” in human language: “Date, Apartment, Taxi (..) who else?” appears to be the shortest yet sufficiently explicit way in human language to represent the intent of “finding somebody else who is into XYZ”.
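The triple notation above can be sketched as a simple data structure. This is a hypothetical illustration only; the field names (user, concept, relation) are ours and not part of any published specification.

```python
from dataclasses import dataclass

# Hypothetical sketch of the (User)(Concept)(who else?) triple described
# above. Field names are illustrative, not a published spec.
@dataclass(frozen=True)
class WhoElseTriple:
    user: str                    # who is asking, e.g. "User 1"
    concept: str                 # the hard-wired concept: "Date", "Apartment", "Taxi"
    relation: str = "who else?"  # the universal relation

def encode(user: str, concept: str) -> WhoElseTriple:
    """Reduce a voice command to its who else? relationship."""
    return WhoElseTriple(user=user, concept=concept)

# "Alexa, get us a ride to the city as fast as possible"
triple = encode("User 1", "Taxi")
assert triple == WhoElseTriple("User 1", "Taxi", "who else?")
```

Whatever the surface phrasing, only the subject, the concept, and the universal relation survive the encoding.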
The reason “who else?” questions exhibit this universal adaptability can be described by Noam Chomsky's Universal Grammar (UG). With UG, Chomsky suggests that certain linguistic concepts appear to be more constant, “hard-wired”, in human language than others: a language that is self-explanatory to the human mind.
The most recent evidence for Universal Grammar was awarded an Ig Nobel Prize in 2015: Max Planck linguists demonstrated that “Huh?” is a universal word across 31 different languages.
who else? is inspired by the idea of using hard-wired semantics. After demonstrating that “who else?” (wer noch?, quién más?, ..) grammars likely exist across languages, we started experimenting with ideas to outline a universal language for the representation of voice commands in AI applications.
Concept: who else? is a Universal Grammar for AI
The idea of who else? Universal Grammar is to establish a protocol for human language in AI by simplifying how voice-based user inputs are stored and correlated by who else? relationships in human language.
We do not want to replace NLUs. Instead, we offer an open-source format to represent human language interpretation across different AI ecosystems.
Our research showed that every kind of language-based intent information can be summarized by reducing it to who else? relationships.
Encoding with who else? as the grammar model strips ephemeral information and unnecessary context from human language.
who else? Universal Grammar reduces the bias, slang, syntax (..) of language to a simplified accounting principle. Essentially, we break voice inputs down to the smallest unit of information available.
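To illustrate this reduction, here is a hypothetical sketch in which several of the surface phrasings from the pizza example collapse into the same who else? record. The keyword table and matching logic are invented for this example and merely stand in for a real NLU.

```python
# Hypothetical illustration: many surface phrasings collapse to one
# canonical "who else?" record. The keyword table is invented for this
# sketch; a real system would use an NLU, not keyword matching.
CONCEPT_KEYWORDS = {
    "Pizza": {"pizza", "pepperoni", "dominos"},
    "Taxi": {"ride", "taxi", "cab"},
}

def reduce_to_who_else(user: str, utterance: str):
    """Strip slang, filler, and context; keep only (user, concept, 'who else?')."""
    words = {w.strip("!,.?").lower() for w in utterance.split()}
    for concept, keywords in CONCEPT_KEYWORDS.items():
        if words & keywords:
            return (user, concept, "who else?")
    return None  # no known concept recognized

# Two very different pizza orders reduce to the same record:
assert reduce_to_who_else("User 1", "Hey Siri, order from Dominos!") == \
       reduce_to_who_else("User 1", "Huuuuuuungry, hey AI, order me a pizza") == \
       ("User 1", "Pizza", "who else?")
```

The individual wording, slang, and emphasis are discarded; only the smallest unit of intent information remains.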
We suggest creating a unified address system for Voice Internet, the “who else?” namespace:
The complexity of protocol components will be extended over time to include e.g. logistics, smart city, and e-mobility applications.
By standardizing the representation of speech-based user commands, Voice Internet services become interoperable across languages by default.
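To make the interchange idea concrete, here is a hypothetical sketch of how one AI might hand a standardized who else? record to another. The whoelse:// address scheme and the JSON field names are assumptions for illustration only, not part of the draft protocol.

```python
import json

# Hypothetical interchange sketch: AI #1 serializes a "who else?" record
# so that AI #2 can process the intent without ever receiving the raw
# voice recording. The whoelse:// scheme and field names are assumptions.
def to_message(user: str, concept: str, source_language: str) -> str:
    return json.dumps({
        "address": f"whoelse://{concept.lower()}",   # entry in the namespace
        "subject": user,
        "relation": "who else?",
        "source_language": source_language,          # surface language is metadata only
    })

def from_message(raw: str) -> dict:
    return json.loads(raw)

# Spoken in English or German, the record resolves to the same address:
en = from_message(to_message("User 1", "Taxi", "en"))
de = from_message(to_message("User 1", "Taxi", "de"))
assert en["address"] == de["address"] == "whoelse://taxi"
```

Because only the standardized record crosses the ecosystem boundary, the receiving AI never needs the speaker's voice data, which is one way the privacy-by-design goal could be realized.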
People do not need to talk to AIs by asking “who else?” questions. However, we think who else? questions are indeed the most efficient way of querying an AI whose features you do not yet know.
The usability of voice-based user interfaces is difficult: AIs speak, but rarely well enough to be useful. Across all user demographics, people on average recall fewer than 3.7 smartphone apps.
If users at least knew how an AI could potentially answer, they could adjust their input vocabulary accordingly.
As a markup language for human intent and an interface paradigm for voice-based interfaces, who else? is designed to be accessible to users of all ages. Like children's language, who else? makes AIs usable in simple words.
Roadmap: A standard for explainable AI & compression format
Similar to MP3 or Hypertext, we believe in the value of an open standard, potentially connecting AI ecosystems, IoT interfaces, and human users.
Now DIN, Germany's national standards body, has awarded us a research grant to develop a norm for language explainability in artificial intelligence.
The scope of this project is to develop a standardized catalog of who else? commands, enabling interoperability between NLU ecosystems.
Our goal is to connect with research, industry and public partners in the development of this technology — a scalable, universal language must be a community-driven project.
Our current whitepaper draft is available at http://start.whoelse.ai
Say hello at firstname.lastname@example.org