A Hybrid Intelligence Pathway Towards Using AI in Your Organisation

Dominik Dellermann
Hybrid Intelligence®
Feb 28, 2019 · 4 min read


Hybrid Intelligence — Powered by Human Intuition. Augmented by AI.

Recent years have seen a lot of progress in artificial intelligence (AI) applications across fields such as online shopping, social media, intelligent assistants, and autonomous driving. These advances have made researchers and AI enthusiasts optimistic that artificial general intelligence (AGI) can be reached in the near future. One of the implicit promises underlying these advances is that one day machines will be capable of performing complex human tasks, or may even supersede humans at them. This fuels new debates about when machines will ultimately replace humans in complex decision making. Yet while AI performs incredibly well in some narrowly defined tasks such as playing chess or Go, AGI remains a long road ahead.

One of the dominant approaches to achieving AGI is to gain insights into the human mind and the process by which it understands the world, by learning complex concepts, language, or causality. Josh Tenenbaum, Brendan Lake, and colleagues, for instance, rely on reverse-engineering human learning to create machines that “learn and think like people” (Lake et al. 2017) and thus achieve the ability to accomplish complex goals, i.e. intelligence (Tegmark 2017).

While this approach tries to replicate the human mind, it does not provide any strategy for ensuring that super-intelligent machines develop human-friendly goals, which is a central topic in research on AI safety (e.g. Bostrom 2014, Tegmark 2017). Moreover, the inferences of such machines might not be understandable by humans; even if they achieve super-human performance, interpretability is crucial for critical decisions that need to be aligned with human goals.

For this reason, we propose another approach towards building intelligent machines, inspired by Alan Turing's early idea of what he called “Child Machines” (Turing 1950). What if we as humans could teach machines from our experience and implicit knowledge, the way parents teach their children or experienced physicians teach their students, to learn complex concepts such as intuition, morality, or creativity? Or even better: what if we could, vice versa, use the AI's inferences to educate the human teacher to become smarter? Much like DeepMind's agents, which developed innovative strategies for playing Atari games, or AlphaGo, whose novel strategies in the game of Go led to a better human understanding of the game.

We call this approach Hybrid Intelligence: socio-technological systems that combine human and machine intelligence to collectively achieve superior results and continuously improve by learning from each other. The aim of this hybrid approach towards AGI is to intentionally let humans and machines learn from each other through various mechanisms such as labeling, demonstrating, teaching adversarial moves, criticizing, and rewarding (one such labeling loop is sketched below). This will allow us to augment both the human mind and the AI, and to extend applications in which humans and machines learn from each other to tasks far more complex than games: for instance, strategic decision making; managerial, political, or military decisions; science; and even AI development itself, potentially leading to AI that reproduces itself in the future (Dellermann et al. 2019).
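To make one of these mechanisms concrete, here is a minimal, hypothetical sketch of a labeling loop in which a human teaches a model and the model, in turn, exposes its learned criteria back to the human. It assumes a simple scikit-learn classifier on synthetic data; the helper ask_human_label is an illustrative stand-in for a human expert, not part of any existing Hybrid Intelligence system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic pool of unlabeled decision cases the hybrid system must judge.
X_pool = rng.normal(size=(200, 5))
true_weights = np.array([1.5, -2.0, 0.5, 0.0, 1.0])  # unknown to the model

def ask_human_label(x):
    # Stand-in for a human expert labeling a single case (hypothetical helper).
    return int(x @ true_weights + rng.normal(scale=0.5) > 0)

# Human -> machine: seed the model with a few expert-labeled examples,
# making sure both decision outcomes are represented.
y, labeled_idx = {}, []
while len(labeled_idx) < 10 or len(set(y.values())) < 2:
    i = int(rng.integers(len(X_pool)))
    if i not in y:
        y[i] = ask_human_label(X_pool[i])
        labeled_idx.append(i)

model = LogisticRegression()
for round_ in range(5):
    X_train = X_pool[labeled_idx]
    y_train = np.array([y[i] for i in labeled_idx])
    model.fit(X_train, y_train)

    # Machine -> human: expose the interpretable criteria (feature weights)
    # the model currently relies on, so the human teacher can inspect them.
    print(f"round {round_}: learned criteria = {model.coef_.round(2)}")

    # Human -> machine: the model queries the case it is least certain about,
    # and the human teacher labels it.
    probs = model.predict_proba(X_pool)[:, 1]
    uncertainty = np.abs(probs - 0.5)
    uncertainty[labeled_idx] = np.inf  # skip already-labeled cases
    query = int(np.argmin(uncertainty))
    y[query] = ask_human_label(X_pool[query])
    labeled_idx.append(query)
```

The same loop structure would carry over to the other mechanisms listed above, for instance by replacing the label query with a demonstration or a critique of the model's proposal.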

Such a hybrid approach provides several advantages for humans in the era of AI. First, as indicated above, humans can learn from AIs and make better predictions. Second, the human teaching approach allows the learning process to be controlled by ensuring that the AI makes inferences based on human-interpretable criteria, which is crucial for AI adoption in many real-world applications. Third, hybrid intelligence allows AI to be customized to its users by learning their preferences, decision strategies, and patterns. Fourth, teaching machines allows building more robust and adaptive models that can evolve as human concepts shift or new criteria emerge over time. Finally, hybrid intelligence provides a promising step towards safe AI, as humans can teach and implement goals that are human-friendly and ethical.

Obviously, there is a long road ahead of us to make hybrid intelligence feasible. Challenges such as educating humans, creating trust, making machine teaching techniques accessible to non-software engineers, and deciding which human goals should be taught to machines to prevent opportunistic behavior still need to be solved. However, we are confident that the future will not be humans versus machines, but hybrid approaches in which humans and AI collaborate to create a better future.

Want to see how Hybrid Intelligence works in action? Visit us at www.vencortex.com

References

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Dellermann, D., Ebel, P., Söllner, M., & Leimeister, J. M. (2019). Hybrid intelligence. Business & Information Systems Engineering, 61(5), 637–643.

Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2017). Building machines that learn and think like people. Behavioral and Brain Sciences, 40, 1–25.

Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf.

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59, 433–460.
