Future Life — article 3/4
This series of essays is about the impact of technology on future life, in particular in the areas of employment, parenting, education and our social interactions. I discuss proposed adaptations to cope with the increased pace of living, the underlying (yet often hidden) complexities of our connected world, and the implications for our choices regarding skill development, lifelong learning and digital identity.
- 1 The Place of Information Security in the Age of Accelerations
- 2 A Brief Guide to Parenting your Children’s Digital Identity (and Future)
- 3 Computational Thinking, a Core Topic Starting Elementary School
- 4 The Illusion and Opportunity of Information Parity (to appear)
A Skill for the Future
In a world where most repetitive tasks are bound to be automated, it is useful to understand which core learning topics are important for the future. Computational Thinking is one of them.
There is an intense debate today about which skills are necessary for future workers, in response to humans’ need to adapt to the increasing pace of technological evolution. In a previous article of this series I discussed the various catalysts for societal change; here I focus on one of the measures to cope with them.
Because the trajectory of humanity’s future is prophesied to lead to a few scenarios hinging between utopias (automation leads to abundance and we bask in the leisure afforded by free time) and dystopias (humans fight for relevance in a world overtaken by superintelligent robots), figuring out the right skills likely increases our chances of reaching a positive outcome between these two extremes, for example one that allows us to build on top of automation and add value with what makes us distinctly human.
A series of value-adding skills for a future society has been identified by both the World Economic Forum and the Institute for the Future, apparently in two distinct efforts, and laid out in the table below (Institute for the Future).
A skill identified in both cases is Computational Thinking, which is defined by the Institute for the Future as the ability to translate vast amounts of data into abstract concepts and to understand data-based reasoning.
Interestingly, this definition differs significantly from others that you can find online and in the literature. For example, Wikipedia presents it as:
a set of problem-solving methods that involve expressing problems and their solutions in ways that a computer could also execute.
Matti Tedre and Peter J. Denning cover the subject over 240 pages in their 2019 book Computational Thinking. Their approach is thorough and contrasts with a much-cited 2006 article by Jeannette Wing that can easily be misunderstood as a plea to convert everyone into a computer specialist. Throughout the book, Tedre and Denning converge on a definition that can, in my opinion, be summarized as
the set of skills necessary to understand the nature of a problem so that it can be addressed using a computational approach.
I understand that such a definition may sound abstract at first, but it has the advantage of being simple enough that the concept of computational thinking can be introduced early in education, in particular because it does not immediately relate to machines or require a computer science education to be understood.
So I first explain it in a context devoid of the familiar machines that surround us every day.
Start Without a Computer
It is important in my opinion not to immediately conflate “computational approach” and the use of automatic computers, i.e. all the laptops, tablets and smartphones around us. This is because the approach should be open with respect to the realization of the “computation” in the first sense of the word: an act or the process of calculating something. This is likely the purpose of the word “also” in Wikipedia’s definition above.
Many definitions promptly suggest the use of a computer (Wikipedia, BBC Bitesize). This has led proponents of this core topic to be mistaken for computer chauvinists. In their book, Tedre and Denning explain that computational approaches long predate silicon-based computers. Take for example the method proposed by Euclid to find the greatest common divisor, or the sieve of Eratosthenes to identify prime numbers.
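A minimal sketch of these two classical methods in Python (the function names are mine, chosen for illustration):

```python
def euclid_gcd(a, b):
    # Euclid's method: repeatedly replace the pair (a, b) by
    # (b, a mod b) until the remainder is zero; the survivor
    # is the greatest common divisor.
    while b:
        a, b = b, a % b
    return a

def sieve_of_eratosthenes(limit):
    # Eratosthenes' sieve: cross out the multiples of each number
    # still standing; whatever is never crossed out is prime.
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False
    return [n for n in range(limit + 1) if is_prime[n]]
```

Both procedures can be carried out entirely with pencil and paper; the code is just one possible transcription of them.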
The execution of these methods eventually led to the concept of algorithms, which are used as a functioning blueprint for computers. Starting without a computer allows us to introduce algorithms without the burden of introducing a computer language (the dialect used to express an algorithm) or formal notations such as pseudo-code. This makes the concept of computation (and automation) more digestible to young children.
Another risk when introducing Computational Thinking is to provide a description that is all-encompassing with computers and forces readers into thinking like computer scientists. This is the feeling I get when I read Jeannette Wing’s 2006 essay on the topic, which I mentioned earlier. The essay was (appropriately) published in the magazine of the leading society of computer scientists. This feeling is shared by other scientists; see for example the blog post Lorena Barba wrote when she visited the Berkeley Institute for Data Science.
In all, some descriptions blur the boundaries of Computational Thinking and make it difficult for most of us to “put a finger on the topic.” In effect, this has led critics to call the topic vague, which in turn has likely slowed its appearance in school curriculums so far. Related to this issue, and likely delayed as well, are key questions such as: how do we measure students’ computational abilities? And is computational thinking good for everyone?
The discussion about finding the pithiest definition for computational thinking therefore still seems to be raging on. My opinion, however, is that a holistic approach such as the one taken by Tedre and Denning is required to correctly include the topic in a curriculum that spans education grades.
Make Fast Thinking More Computational
A legitimate question is: why do we need to elevate Computational Thinking to a new core topic? The seeding reason is that the use of machines is now central to the development and support of society. Computational approaches have permeated all academic disciplines, and machines bring an essential mechanism to their evolution. This is due to the increasing amount of data available from digitalization and the human drive to make sense of it.
For example in the book Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are by Seth Stephens-Davidowitz, historical texts are analyzed using computational approaches to determine when the concept of the United States as a nation started to appear in people’s consciousness.
More recently, a computational approach to behavioral economics has been leveraging information posted on social media platforms to decipher human purchase decisions, political choices and other mechanisms critical to a functioning society. Human behavior is in turn exploited to funnel disinformation, which is likely the cause of many woes such as societal polarization and the erosion of trust.
In general, computers have taken a significant place in most of our professional and personal lives. In Homo Deus, historian Yuval Harari explains that the human race is on the path to blend our biology with non-organic parts which will bring us closer to machines than ever before.
As a result, we need a core topic that enables us to think in this new setting such that we can still be the puppeteers in a world where computational power and data at our disposal both increase at an exponential rate.
Many authors are discussing the role of humans in decision making in the future of society. Analyses described in the book Machine, Platform, Crowd: Harnessing Our Digital Future by MIT economists Andrew McAfee and Erik Brynjolfsson show that human intuition, e.g. based on domain experience and expertise, is repeatedly beaten by data-driven AI algorithms based on historical data (read chapter 2: The Hardest Thing to Accept About Ourselves).
This means that our fast thinking system (as defined by Kahneman and referred to as System 1), which mostly relies on knowledge acquired over years of experience, is unlikely to play a significant role in making important business decisions in the future. In other words, people’s judgment is not reliable! This is the counterintuitive and unpopular finding explained by McAfee and Brynjolfsson in Chapter 2 of their book.
In contrast, our slow thinking system (System 2), based on analytical reflection, e.g. the skills learned at school, can play an important role despite being much less performant than its silicon counterparts, because it brings an aspect that machines still lack: common sense. We can make sense of the context and judge whether a decision, possibly based on the analysis of large amounts of data, makes sense in a given situation.
An oft-cited example is Uber’s algorithms setting a price surge in Paris during the 2015 crisis triggered by the terrorist attacks (a decision rapidly overruled by human operators in the company). Other companies have experienced similar issues with automated decisions, so many have put in place mechanisms for humans to override them in some circumstances.
Hence the key reason to teach Computational Thinking is to prepare our System 2 for the coming decades of exponentially improving machine performance. In other words, it gives us the tools to maintain human relevance in the future of decision-making.
Building Blocks for Children’s Curriculum
The teaching of Computational Thinking from a young age should naturally adapt to the child’s understanding of the world. Many curriculums have been proposed by authors such as Seymour Papert, Alan Perlis, Marvin Minsky, Jeannette Wing and others.
What stems from many curriculum proposals is an emphasis on basic building blocks of skills such as the capabilities for abstraction, pattern recognition, problem decomposition and algorithmic expression.
Abstraction is necessary to focus on the key aspects of a problem so that it can be clearly identified. This is perhaps one of the most difficult things to teach to children who, by nature, have limited experience. This lack can be mitigated by emphasizing a learning-by-many-examples approach as opposed to suggesting a methodology (more on that below).
Pattern recognition is necessary to identify how a problem can be solved using a known approach or model. For example, knowing how to decompose numbers into prime factors makes it easy to find the greatest common divisor or the least common multiple of two numbers.
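As a hypothetical sketch of this reuse of a known method, the Python fragment below (names are my own) derives both quantities from the prime factorizations of the two numbers:

```python
from collections import Counter

def prime_factors(n):
    # Decompose n into prime factors by trial division,
    # e.g. 12 -> {2: 2, 3: 1}.
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def gcd_lcm(a, b):
    # The GCD takes the smaller exponent of each prime,
    # the LCM takes the larger exponent of every prime involved.
    fa, fb = prime_factors(a), prime_factors(b)
    gcd = lcm = 1
    for p in set(fa) | set(fb):
        gcd *= p ** min(fa[p], fb[p])
        lcm *= p ** max(fa[p], fb[p])
    return gcd, lcm
```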
Problem decomposition is necessary to identify how an arsenal of computational methods can be used to solve a problem by breaking it into subproblems. In the sieve method, large sets of numbers can be eliminated at once because they share the same characteristic (even numbers, multiples, etc.); this is a simple form of problem decomposition.
Algorithmic expression, i.e. the recipe to solve the problem, is perhaps the trickiest, because it shouldn’t necessarily be conflated with computer algorithms, as I suggested earlier. In the sieve method again, an approach is to use a table to sort out the data (the numbers). This is an intuitive representation for children since it does not require the formal expressions computers need in order to order and repeat computational steps.
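As a hypothetical illustration of this table-based view, the Python sketch below (names mine) renders the sieve the way a child might fill it in on paper, crossing out multiples:

```python
def sieve_table(limit, row_width=10):
    # Write the numbers from 2 to `limit` in rows, then cross out
    # the multiples of every number still standing; crossed-out
    # cells are shown as '.' and the survivors are the primes.
    crossed = set()
    for n in range(2, limit + 1):
        if n not in crossed:
            crossed.update(range(n * n, limit + 1, n))
    lines = []
    for start in range(2, limit + 1, row_width):
        cells = ['  .' if n in crossed else f'{n:3d}'
                 for n in range(start, min(start + row_width, limit + 1))]
        lines.append(' '.join(cells))
    return '\n'.join(lines)
```

Printing `sieve_table(30)` yields three rows in which only the primes remain visible, with no loops or variables exposed to the child.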
In addition to the above four concepts, which mainly deal with establishing a process, it is essential to add the notion of data, and particularly how data exists in nature and how we capture it in digital form.
This teaching is useful to make children cognizant that actions, in particular online ones, often equate to data creation. Furthermore, attached to the notion of data are the concepts of openness and closedness.
Personally identifiable data is the unit in which digital identity is measured. Hence it is vital that data generation and capture settings, i.e. open and closed, are explained in order to bring to light the notion of data privacy at an early age.
Since the importance of data in society is quite recent, this topic is not explored thoroughly enough in the current literature in my opinion.
Learning Without Rules
In some of the Computational Thinking definitions that I have read, it is mentioned (or could be misunderstood) that the algorithmic expression building block is a means to find rules that solve a problem. Finding rules is not necessarily the way to tackle a problem, because knowledge often cannot be easily explained or captured as rules. This is referred to as the Polanyi paradox, in honour of the British-Hungarian philosopher Michael Polanyi.
The paradox is that we cannot explain everything that we know: it is not possible to express clear rules for everything. The algorithmic approach of Machine Learning actually mimics the way children mostly learn, i.e. by reasoning (e.g. by correlation) over a large set of examples without the explicit input of rules. In other words, decision rules are deduced from the provided examples.
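As a toy illustration (not a real Machine Learning library; the function and the fruit-weight data are my own assumptions), the sketch below deduces a decision rule purely from labeled examples, without anyone stating the rule in advance:

```python
def learn_threshold_rule(examples):
    # `examples` is a list of (value, label) pairs. We try every
    # observed value as a candidate threshold and keep the one whose
    # rule "value >= threshold" classifies the most examples
    # correctly: the rule is inferred from the data, not hand-written.
    best_threshold, best_correct = None, -1
    for threshold in sorted({v for v, _ in examples}):
        correct = sum((v >= threshold) == label for v, label in examples)
        if correct > best_correct:
            best_threshold, best_correct = threshold, correct
    return best_threshold

# Fruit weights in grams, labeled True when the fruit is a melon.
data = [(120, False), (150, False), (900, True), (1100, True), (80, False)]
rule = learn_threshold_rule(data)  # the deduced decision boundary
```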
Now the fact that it is often difficult to interpret how Machine Learning algorithms return the results that they do (referred to as ML Interpretability) is quite an ironic twist (we would otherwise solve the Polanyi paradox). In short, attempting to understand fully how Machine Learning mechanics work is somehow akin to poking into our own consciousness.
We therefore need to be careful that Computational Thinking also includes methods based on learning, in addition to classical rule-based approaches. This is why data, in part as the mechanism from which rules are deduced, is an essential part of teaching the topic: data is not only consumed and transformed by the process but also used to shape its functioning.
This brings us back to the definition of Computational Thinking that I proposed earlier, i.e. the set of skills necessary to understand the nature of a problem so that it can be addressed using a computational approach. In it, assessing “the nature of a problem” means understanding how the data attached to the problem plays a role in finding its solution. In that respect, the definition now becomes much closer to the one provided by the Institute for the Future, i.e. the ability to translate vast amounts of data into abstract concepts and to understand data-based reasoning.
The good news in all of this is that, because learning by example is children’s modus operandi, grasping algorithmic expression as a component of Computational Thinking should be second nature to them when it relates to Machine Learning. More generally, this aspect of children’s learning should be leveraged when teaching the other components, i.e. the concepts of abstraction, pattern recognition and problem decomposition that I briefly introduced earlier.
Finding a good learning dataset for Computational Thinking to feed to young brains eager to deduce their own rules is likely to be a challenge for every institution willing to include this essential topic in its curriculum. But it is an approach worth experimenting with.
Thanks for reading. You can find many more articles on cybersecurity, innovation and other topics on this page.
Comments, Feedback: Laurent Balmelli (Twitter Laurent Balmelli)