Cognitive AI: how bots will acquire their inner human touch

Interview with Futuremaker: Nikolas Kairinos

Nikolas Kairinos: Founder of Fountech Ventures
Artificial General Intelligence will have to acquire human attributes like intuition, reasoning, and emotion.

Nikolas Kairinos is the founder of Fountech Ventures, where he spends most of his time in his primary role as a founder and CEO. He is a pioneer and thought leader in the incubation and funding of early-stage cognitive AI ventures. He also advises companies seeking to build in-house cognitive AI capabilities or looking to craft cognitive AI solutions. Nikolas is a polymath who draws on knowledge from neuroscience, linguistics, philosophy, biology, statistics, physics, mathematics, and the arts in his endeavor to instill human traits into machines.

Nikolas, a veteran of the AI industry, weathered its winter in the 1980s before riding the wave of its resurgence from the 2000s onward. His sweet spot is the frontiers of cognitive AI that contribute human-like attributes such as reasoning and emotion.

Mr. Kairinos spent most of his time programming and researching even as a boy, and his friends were older boys with the same interests. He enrolled in a university intending to earn a doctoral degree in computer science and AI, but was disillusioned by a formal curriculum that offered little immersion in AI; in some cases, he was already familiar with the subject matter. He therefore charted his own pathway, teaching himself to construct models, methodologies, processes, and algorithms that helped him accrue expertise in cognitive AI.

Nikolas pursued other careers in investment banking and real estate through the AI winter, while his zeal for his avocation, AI, never flagged. He saw his opportunity during the Internet boom, when he realized that millions of interconnected devices would be sources of data that could be measured and analyzed conveniently. It was at this time that Nikolas, who grew up in Cyprus and South Africa, moved to the USA, where he applied AI to digital marketing and spent most of his time on research and consulting engagements with companies.

A chance meeting in 2016 with Salvatore Minetti, who at the time was seeking expertise in AI for a sales lead generation venture later named Prospex, set his course. The two instantly discovered they had great professional chemistry, and Salvatore invited him to be the co-founder and CTO of Prospex. Nikolas used graph theory and reinforcement learning to cluster people by context, an approach validated by higher conversion into sales. Prospex is now in the portfolio of Fountech Ventures.

Mr. Minetti and Mr. Kairinos decided to set up Fountech Ventures to invest in deep-tech AI ventures and incubate them from the moment an entrepreneur has a breakthrough idea. Nikolas decides whether an idea is worthy of investment from an AI and technical perspective.

Currently, Nikolas invests much of his time in Soffos.ai (Greek for a very wise person), an education venture. He is most excited by the idea of using AI to increase engagement in learning. Soffos.ai automates the processes of learning and teaching and delivers knowledge that is customized for each person.

Kishore Jethanandani, the Editor of FuturistLens Magazine, had a wide-ranging interview with him on topics related to cognitive AI and the human attributes of bots. In this first part of the interview, we focus on the rationale for the interdependence and interaction of man and machines. The second part will discuss the processes, data processing, and technologies of dynamic collaboration between man and machines.

FuturistLens: Does the trajectory of AGI (Artificial General Intelligence) or the Turing test help to predict the evolution of cognitive AI technology and corresponding human attributes that machines acquire?

NK: AGI has a long way to go. It will have to construct multiple layers of data and knowledge to achieve its objectives and will have to generalize knowledge across domains. It will also have to acquire human attributes like intuition, reasoning, and emotion.

A programmatic approach to imbuing uniquely human faculties into machines is never going to happen. Many in the AI profession will disagree with me on this issue. Instead, we will have to find a way by which humans and machines can collaborate — something they can do best by quickly communicating with each other by mechanisms such as a brain-machine interface. As such, humans will remain an integral part of the loop even as we make progress towards AGI.

Humans and Machines can communicate and collaborate by using mechanisms such as brain-machine interface

Regarding the Turing test, we, as humans, are prone to the idée fixe that sees machines in our image. I call this the anthropomorphic imperative — the urge to have a humanoid AI. AI must, therefore, comprise conversational agents, which will enable humans to interact and communicate with machines in the same way they would with their peers.

FuturistLens: You do have a project (the name is under wraps) which performs automated reasoning, a characteristic unique to humans. How do you square that with your comments about AGI, or symbolic AI as it is sometimes called?

NK: Symbolic AI started losing popularity in the 1980s, when I got into the field of AI. I see parts of it incorporated into cognitive AI work; the enabling technology is advances in graph theory for knowledge and data representation.

Regarding machines’ capability to reason, our work incorporates philosophy and methods of classification (epistemology, ontology, meta-ontology) into cognitive AI using advanced knowledge graphs. It responds to questions such as: what is truth, what is knowledge, and how do you establish ground rules for truth?

Epistemology has three branches: agnotology, alethiology, and formal epistemology. Agnotology is the study of ignorance or doubt; alethiology, of the nature of truth. Computational epistemology, a subdiscipline of epistemology, uses logic, probability theory, and computability theory to elucidate traditional epistemic problems and instill attributes resembling those of the human brain. A complex process of modeling, using a variety of methods, goes into establishing what truth is. Ontology and meta-ontology classify information, and more.

Our portfolio company that embeds reasoning in machines reads millions of articles on a subject and organizes them into a logical, navigable, and digestible ontology that constitutes a digital brain. The system emulates understanding: it extracts the knowledge represented in the knowledge graph while discarding the source material.
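The idea of distilling articles into a navigable ontology can be sketched in a few lines. The snippet below is a minimal illustration, not the portfolio company's actual method: the (subject, relation, object) triples are invented for the example, and a real "digital brain" would extract them from millions of articles with NLP before discarding the sources.

```python
# Minimal sketch: organizing extracted facts as (subject, relation, object)
# triples into a navigable knowledge graph. The triples are invented
# placeholders standing in for facts mined from source articles.
from collections import defaultdict

triples = [
    ("photosynthesis", "is_a", "biological process"),
    ("photosynthesis", "occurs_in", "chloroplast"),
    ("chloroplast", "part_of", "plant cell"),
    ("plant cell", "is_a", "cell"),
]

# Index edges by subject so the graph can be navigated concept by concept.
graph = defaultdict(list)
for subj, rel, obj in triples:
    graph[subj].append((rel, obj))

def ancestors(node, relation="is_a"):
    """Walk one relation upward, collecting the chain of broader concepts."""
    chain = []
    frontier = [node]
    while frontier:
        current = frontier.pop()
        for rel, obj in graph.get(current, []):
            if rel == relation:
                chain.append(obj)
                frontier.append(obj)
    return chain

print(ancestors("plant cell"))      # ['cell']
print(graph["photosynthesis"])      # both facts about photosynthesis
```

Once the triples exist, the articles themselves are no longer needed; queries run against the graph alone, which is the sense in which understanding is emulated while the source material is discarded.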

What is truth, what is knowledge, and how do you establish ground rules for truth?

FuturistLens: Your approach to AI funding is to find innovative technologies that look at what is possible at the periphery of current practice, and then you look at the breakthroughs needed to go to the next level. Right now, reinforcement learning is surging at the frontiers of AI innovation. What do you learn from it that gives you cues about what is possible next?

Nikolas Kairinos: When we look for innovative technologies, we do not have a set criterion. We look for future technologies based on an understanding of how today's technology is evolving. Reinforcement learning is emerging as a leading trend, which is why we are eagerly exploring what it means for the future of AI.

Mainly, we look for entrepreneurs who set out to solve a problem that is difficult or impossible to solve with current AI technologies, for instance by innovating incrementally on existing techniques. So, if the existing technology is supervised learning, entrepreneurs may be creating ways to label data automatically and autonomously, so that no human supervision is required to oversee the process.

The foundational technology that has made pivotal changes to AI is deep learning and its derivatives. Deep reinforcement learning, the technology underpinning AlphaGo, AlphaZero, and the like, starts with sparse predicates that yield results that are, at best, in the ballpark. The models become progressively more precise, and can hit their mark, as they refine their estimates based on continuous feedback.

The model is reinforced negatively when its estimates fall short, and positively when the results meet expectations. Reinforcement is pivotal to any AI system. Just as in any scientific process, algorithms test hypotheses against data and tweak them when the results fall short. The more granular the data, the more specific the feedback, and the greater the opportunities for learning.

Reinforcement is pivotal to any AI system — algorithms test hypotheses with data and tweak them when the results fall short
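The feedback loop NK describes, rough initial estimates that sharpen under positive and negative reinforcement, can be sketched with the simplest reinforcement learning setting, a two-armed bandit. This is a minimal illustration under invented payout probabilities, not the method behind AlphaGo or any Fountech venture.

```python
# Minimal sketch of reinforcement: an agent's value estimates start
# rough and converge as rewards (positive reinforcement) and misses
# (negative reinforcement) adjust them. Payout probabilities are invented.
import random

random.seed(0)
true_payout = {"A": 0.3, "B": 0.8}   # hidden from the agent
estimates = {"A": 0.0, "B": 0.0}     # the agent's evolving estimates
counts = {"A": 0, "B": 0}

for step in range(2000):
    # Explore 10% of the time; otherwise exploit the best current estimate.
    if random.random() < 0.1:
        arm = random.choice(["A", "B"])
    else:
        arm = max(estimates, key=estimates.get)
    reward = 1.0 if random.random() < true_payout[arm] else 0.0
    counts[arm] += 1
    # Incremental mean update: a reward pulls the estimate up, a miss
    # pulls it down -- the reinforcement loop described above.
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates)  # B's estimate converges near 0.8, well above A's
```

Early on, the estimates are "at best in the ballpark"; after a few thousand rounds of feedback they track the true payouts closely, which is the sense in which reinforcement makes the model progressively more precise.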

Kishore Jethanandani is a futurist, economics nut, innovation buff, business technology writer, and an entrepreneur in the wearable and IoT space.
