MindxAI: A New Strategy for the Age of AI

Live With AI
Sep 14, 2018 · 9 min read

Can we become more human in the Age of AI? What would such a strategy look like?

Is it to prepare for the long-term possibilities and perils should the singularity come to pass? Many experts speculate that the future could be one of abundance and equity across the economy and society. Many others believe it could be one of societal struggles and splinters, as people and much of what they do are overtaken by AI advances that “devour human intelligence”. [1]

The jury is still out on which scenario will prevail. As spelt out in What is Artificial Intelligence: Villani Mission on Artificial Intelligence — March 2018 [2], such “speculative topics” are “a tricky matter”. Business and government leaders will thus find it hard to be definitive about the strategies needed for these different scenarios of the Age of AI.

Human Cognitive Capacities: A Concrete, Immediate and Long-Term Threat

But there is one area where business and government leaders can be clear about the strategies needed: combating the negative impact of AI and IT on our human cognitive capacities.

AI “seeks to understand how human cognition works by creating cognitive processes that emulate those of human beings.”[3] AI has thus improved the speed, accuracy and efficiency of how people think and work. Prolonged use, however, also weakens our human cognitive abilities over time. By automating cognitive tasks such as problem-solving and decision-making, we reduce our own, unassisted ability to “translate information into knowledge and knowledge into know-how”.[4]

In our recently published book Living Digital 2040: Future of Work, Education and Healthcare (the outcome of a Singapore National Research Foundation- and Ministry of National Development-funded Future of Cities project at the Lee Kuan Yew Centre for Innovative Cities/Singapore University of Technology and Design), we found emerging and empirical evidence of the threat these technologies pose to our human cognitive capabilities.

This evidence is concrete, growing, and spans a spectrum of capabilities (see Table 1 below):

Table 1 — Increasing Evidence of How AI and IT Affect Human Cognitive Capabilities and the Potential Consequences [5]

As Table 1 clearly shows, the negative impact on our human cognitive capacities has broader implications for our organizations (e.g. corporate governance) and society (e.g. health). It affects our lives, our work and our social institutions. And the consequences are both immediate and long term: the insidious short-term diminishing of our capacities can lead to serious long-term ramifications for people, companies, and cities.[6]

We need to tackle the threat today to safeguard our tomorrow.

A New Strategy: MindxAI

To do so, we cannot rely on what we have always done. To combat this incipient threat of the Age of AI, we need new strategies. In our follow-up research[7] since Living Digital 2040: Future of Work, Education and Healthcare, we propose one such strategy: MindxAI.

MindxAI is a strategy of designing and re-designing technologies and innovations to protect the human and organizational cognitive capacities that matter to us. Such a capacity can be a trait that distinguishes us from machines (such as the human touch that only a caring teacher or caregiver can give), a core competency that gives a company its competitive advantage, or an integrated policy-cum-socio-technical system that makes a city smart. Business and government leaders will have to make hard decisions about what makes their citizens, workforce, companies, cities and countries special, and use AI in ways that strengthen that.

This MindxAI strategy has three guiding principles: Re-assess, Re-design, and Revise.

Case Study — GPS 2.0: MindxAI Strategy in Action

To understand how the MindxAI strategy works, we developed GPS 2.0, an artefact-from-the-future that helps us see what future everyday products might look like, so that we can better appreciate how social, economic and technological trends affect the way we live [8]. We use it to illustrate the three guiding principles (see Figure 2 below).

1. Re-assess

The conventional GPS provides turn-by-turn instructions, which a driver can follow almost mindlessly. As highlighted earlier, regular reliance on such instructions for navigation reduces activity in the hippocampus, with potential long-term risks such as Alzheimer’s.

2. Re-design

GPS 2.0 re-imagines what an improved MindxAI GPS might look like:

  • Instead of turn-by-turn instructions, drivers are given visual and spatial cues (e.g. turn right at the red building, or turn left at the Ferris wheel after the flyover), thus protecting the driver’s cognitive and spatial sense of the city
  • The driver has the option to toggle between GPS 2.0 and the conventional GPS, in the event they are in a rush and need quick turn-by-turn instructions (a minimal sketch of this interaction follows below)
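
To make the re-design concrete, here is a minimal sketch, assuming hypothetical names such as GuidanceMode, Turn and next_cue that do not come from any real navigation product: the system prefers landmark-based cues, and falls back to conventional turn-by-turn guidance when the driver toggles modes or no landmark is known.

```python
# Hypothetical sketch of the GPS 2.0 "Re-design" principle: prefer landmark
# cues over turn-by-turn instructions, with a toggle back to conventional
# guidance. All names here are illustrative assumptions, not a real API.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class GuidanceMode(Enum):
    LANDMARK = "landmark"          # GPS 2.0: visual and spatial cues
    TURN_BY_TURN = "turn_by_turn"  # conventional GPS fallback


@dataclass
class Turn:
    direction: str                  # e.g. "right", "left"
    street: str                     # e.g. "Somerset Road" (illustrative)
    landmark: Optional[str] = None  # e.g. "the red building"


def next_cue(turn: Turn, mode: GuidanceMode) -> str:
    """Return the spoken cue for the upcoming turn in the chosen mode."""
    if mode is GuidanceMode.LANDMARK and turn.landmark:
        # Anchor the instruction to something the driver can see and remember,
        # so the route is encoded as a spatial map rather than a command.
        return f"Turn {turn.direction} at {turn.landmark}"
    # Fall back to conventional guidance (driver in a rush, or no landmark known).
    return f"Ahead, turn {turn.direction} onto {turn.street}"


if __name__ == "__main__":
    turn = Turn(direction="right", street="Somerset Road", landmark="the red building")
    print(next_cue(turn, GuidanceMode.LANDMARK))      # Turn right at the red building
    print(next_cue(turn, GuidanceMode.TURN_BY_TURN))  # Ahead, turn right onto Somerset Road
```

The design choice is simply which mode is the default: the cognitively protective landmark mode, with the efficient conventional mode still one toggle away.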

3. Revise

Analytics and gamification help the driver see whether their navigation is improving over time, together with encouragement to keep using this mode of interaction.
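
A minimal sketch of what such analytics might look like, again with hypothetical names (Trip, unassisted_ratio, progress_message) rather than any real product’s API: each trip logs how many turns the driver completed from landmark cues alone, and the driver receives an encouraging message when the recent trend improves.

```python
# Hypothetical sketch of the "Revise" principle: log each trip, score how much
# of the route was navigated on landmark cues alone, and nudge the driver when
# the trend improves. The record and scoring rule are illustrative assumptions.
from dataclasses import dataclass
from typing import List


@dataclass
class Trip:
    landmark_cues_followed: int   # turns completed from landmark cues
    turn_by_turn_fallbacks: int   # turns where the driver toggled back


def unassisted_ratio(trip: Trip) -> float:
    """Share of turns the driver handled without turn-by-turn help."""
    total = trip.landmark_cues_followed + trip.turn_by_turn_fallbacks
    return trip.landmark_cues_followed / total if total else 0.0


def progress_message(history: List[Trip], window: int = 5) -> str:
    """Compare the latest trips with earlier ones and encourage the driver."""
    if len(history) < 2 * window:
        return "Keep driving with landmark cues to build your progress report."
    earlier = sum(unassisted_ratio(t) for t in history[-2 * window:-window]) / window
    recent = sum(unassisted_ratio(t) for t in history[-window:]) / window
    if recent > earlier:
        return f"Nice! Unassisted navigation is up from {earlier:.0%} to {recent:.0%}."
    return f"Unassisted navigation is at {recent:.0%}. Try a landmark-only trip this week."
```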

Conclusion

GPS 2.0 might be a simple example, but it clearly shows how business and government leaders can use the MindxAI strategy to combat the negative impact of AI and IT on our cognitive capacities.

The strategy is also timely. As AI for Humanity, the French Strategy for Artificial Intelligence, points out, we must start

“…looking into the complementarity between humans and artificial intelligence… [and] it is vital to find a complementarity set-up that does not alienate [people] but instead allows for the development of truly human capabilities.” [9]

The MindxAI strategy does so by compelling business and government leaders to first decide what human cognitive capacities matter to us and should be protected. Only after that do we develop the AI solutions that strengthen those capabilities while reaping the benefits of technological efficiency and productivity for companies and cities.

MindxAI ensures that even as AI continues to replace, relegate, and even devour many human tasks, our ultimate destination is NOT one where all human tasks are taken away, but one where only those that should be taken away, are taken away. As Antoine de Saint-Exupéry wrote in Wind, Sand, and Stars,

“In anything at all, perfection is finally attained not when there is no longer anything to add, but when there is no longer anything to take away…”

Seen through that lens, MindxAI then is much more than a strategy to protect our human cognitive capacities. It is also a profound hope — that by strengthening our human cognitive capacities, we are also strengthening what makes us human.

It is about becoming more human in the Age of AI.

POON King Wang and Hyowon LEE

About the authors

POON King Wang is the Director of the Lee Kuan Yew Centre for Innovative Cities at the Singapore University of Technology and Design (SUTD). Hyowon LEE is Assistant Professor at SUTD. This essay builds on ideas found in their recently published paper, co-authored with Gayathri BALASUBRAMANIAN, LIM Wee Kiat, and Aaron YONG (see footnote 7).

About Live With AI:

Live with AI is a non-profit foundation based in Singapore. The foundation gathers thought leaders, decision-makers and French, Singaporean and international researchers to lead working groups and research projects on the positive impacts of artificial intelligence on our society. The Live with AI community draws on its presence at the heart of the South-East Asia region and its access to several research laboratories to issue recommendations that can be immediately applied and tested among very diverse communities pursuing technological disruption. Live with AI is an independent initiative created on the occasion of the France-Singapore Year of Innovation 2018.

[1] As highlighted by Pierre Robinet and Arno Pons in their Les Echos commentary “Live (better?) with Artificial Intelligence” published on 9th March 2018. See https://www.lesechos.fr/intelligence-artificielle/cercle-ia/0301377784302-lia-permettra-de-nous-adonner-aux-plaisirs-artistiques-2159834.php (English translation at: https://medium.com/@livewithai/live-better-with-artificial-intelligence-8a2fd88c341d).

[2] See https://www.aiforhumanity.fr/pdfs/MissionVillani_WhatisAI_ENG(1)VF.pdf.

[3] Ibid.

[4] Carr, N. (2014). The glass cage: Where automation is taking us. Random House.

[5] Table 1 summarizes findings from accounting, financial trading, way-finding/GPS/navigation, architectural practice, game playing, programming, reading, memory (and photography and online search), and spelling. References for these are found at the end of the article.

[6] As above.

[7] Balasubramanian, G., Lee, H., Poon, K. W., Lim, W. K., & Yong, W. K. (2017, July). Towards Establishing Design Principles for Balancing Usability and Maintaining Cognitive Abilities. In Design, User Experience, and Usability: Theory, Methodology, and Management (19th International Conference on Human-Computer Interaction, 2017), LNCS Vol. 10288, pp. 3–18.

[8] See https://livingdigital2040.com/2017/11/02/wip-feeling-the-future/

[9] https://www.aiforhumanity.fr/en/; https://www.aiforhumanity.fr/pdfs/MissionVillani_Summary_ENG.pdf

References (for Table 1, and Footnotes 5 and 6)

[1] Baxter, G., & Cartlidge, J. (2013, May). Flying by the seat of their pants: What can High Frequency Trading learn from aviation? In Proceedings of the 3rd International Conference on Application and Theory of Automation in Command and Control Systems (pp. 56–65). ACM.

[2] Burgos, D., & van Nimwegen, C. (2009). Games-based learning, destination feedback and adaptation: A case study of an educational planning simulation. In T. Connolly, M. Stansfield, & L. Boyle (Eds.), Games-based learning advancements for multi-sensory human computer interfaces: Techniques and effective practices. IGI Global.

[3] Burnett, G. E., & Lee, K. (2005). The effect of vehicle navigation systems on the formation of cognitive maps. In International Conference of Traffic and Transport Psychology.

[4] Caicco, G. (2007). Architecture, ethics, and the personhood of place. UPNE.

[5] Carr, N. (2008). Is Google making us stupid? Yearbook of the National Society for the Study of Education, 107(2), 89–94.

[6] Carr, N. (2014). The glass cage: Where automation is taking us. Random House.

[7] Dowling, C., Leech, S. A., & Moroney, R. (2008). Audit support system design and the declarative knowledge of long-term users. Journal of Emerging Technologies in Accounting, 5(1), 99–108.

[8] Fenech, E. P., Drews, F. A., & Bakdash, J. Z. (2010, September). The effects of acoustic turn-by-turn navigation on wayfinding. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 54, No. 23, pp. 1926–1930). SAGE Publications.

[9] Froehlich, J., Findlater, L., Ostergren, M., Ramanathan, S., Peterson, J., Wragg, I., … & Landay, J. A. (2012, May). The design and evaluation of prototype eco-feedback displays for fixture-level water usage data. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 2367–2376). ACM.

[10] Galletta, D. F., Durcikova, A., Everard, A., & Jones, B. M. (2005). Does spell-checking software need a warning label? Communications of the ACM, 48(7), 82–86.

[11] Grimes, A., Bednar, M., Bolter, J. D., & Grinter, R. E. (2008, November). EatWell: sharing nutrition-related memories in a low-income community. In Proceedings of the 2008 ACM conference on Computer supported cooperative work (pp. 87–96). ACM.

[12] Haldar, V. (2013, November 10). Sharp tools, dull minds. Retrieved October 19, 2016, from http://blog.vivekhaldar.com/post/66660163006/sharp-tools-dull-minds

[13] Henkel, L. A. (2014). Point-and-shoot memories: The influence of taking photos on memory for a museum tour. Psychological Science, 25(2), 396–402.

[14] Maguire, E. A., Gadian, D. G., Johnsrude, I. S., Good, C. D., Ashburner, J., Frackowiak, R. S., & Frith, C. D. (2000). Navigation-related structural change in the hippocampi of taxi drivers. Proceedings of the National Academy of Sciences, 97(8), 4398–4403.

[15] Nielsen, J. (1994). Usability engineering. Elsevier.

[16] Oinas-Kukkonen, H., & Harjumaa, M. (2009). Persuasive systems design: Key issues, process model, and system features. Communications of the Association for Information Systems, 24(1), 28.

[17] Shneiderman, B. (2010). Designing the user interface: strategies for effective human-computer interaction. Pearson Education India.

[18] Sohn, M., & Lee, J. (2007, April). UP health: ubiquitously persuasive health promotion with an instant messaging system. In CHI’07 Extended Abstracts on Human Factors in Computing Systems (pp. 2663–2668). ACM.

[19] Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333(6043), 776–778.

[20] van Nimwegen, C. (2008). The paradox of the guided user: assistance can be counter-effective. Utrecht University.

[21] van Nimwegen, C., & van Oostendorp, H. (2009). The questionable impact of an assisting interface on performance in transfer situations. International Journal of Industrial Ergonomics, 39(3), 501–508.
