How to think about the future of AI, data science — and humans
Envisioning the future of any domain in this era of AI poses a formidable challenge, all the more so when considering the specific implications for data science. As it stands, AI systems have advanced to a stage where they can analyze and understand relationships within tremendous volumes of data with impressive speed — and then use these learned relationships to become adept at playing chess, identifying business solutions, and even composing music. However, deciding which games to play, which solutions to search for, or which music is beautiful will most likely remain a human prerogative for the foreseeable future.
In the face of complex problems and uncertain futures, I often resort to models to make sense of things. As a lifelong musician, I find this quote from famed saxophonist Branford Marsalis useful in shaping my perspective on AI:
Let’s say you have an English professor who knows every word in the English dictionary and then you put him on a stage to do Shakespeare, or put him in the pulpit and everybody’s staring at him waiting for him to give a stirring sermon. How’s he gonna fare? Not very good, because it ain’t enough to know the words. You have to know how to deliver the words with a sound that’s relative to the period. You have preachers who don’t have a great vocabulary and they got people losing their minds because of the delivery. Sound creates the delivery, the thing that moves people. The thing that makes music — instrumental music — great to me is that the sound of an instrument can have an emotional effect on the listener…
The beauty of meaningful music lies in its ability to connect with the audience on a profound level. To be a great musician, you don’t need to master every note, possess a thorough understanding of music theory, or even have an unparalleled ability to perform. A musician’s real skill lies in their ability to transform their knowledge and technique into something that resonates with other people — all of us — and the lives we lead.
Similarly, it’s important to identify “useful” data from the nearly infinite gobs of existing data, which in turn spotlights the role of data scientists. Like musicians who organize sounds into structured and consumable songs, data scientists have a critical role to play in separating the signal from the noise and transforming data into solutions that can solve problems relevant to people’s lives. In this new era, data scientists will be able to explore more questions and devise more solutions than ever before. When this happens, the balancing act between “Is this challenge even possible to solve?” and “Is this challenge worth trying to solve now?” will shift. AI will simplify the threshold question of whether a problem can be solved at all, freeing us to concentrate on prioritization: whether we should try to solve a particular challenge now or at a different time.
Before we delve into the specific implications this has for data science, we must foster a deeper appreciation for the future roles of both AI and humanity. The best way to do this is by understanding the constraints of each. It is, after all, our limitations that give rise to our strengths.
The limitations of AI
We’ve witnessed AI’s progress, and its potential seems boundless. It has revolutionized various domains, including image recognition and natural language processing. But AI isn’t flawless, and there remain areas where it may struggle to match human capabilities.
Take, for example, the case of guitar synthesis. Having played the guitar all my life and experimented with synthetic guitar sounds, I’ve noticed a conspicuous difference between the sound of a real and a synthetic guitar. The sound of a guitar is dictated not just by the strings and the instrument, but also by the musician’s unique playing style — the pressure on the strings, the strum’s speed, the fingers’ placement, and the emotional intent behind the play. Presently, AI struggles to fully replicate this level of depth and subtlety.
The discrepancy becomes even more pronounced when complex instruments and intricate audio mixes come into play. There’s a unique quality to live, human-produced music that AI has yet to successfully emulate. This underlines a critical limitation of AI — its inability to fully reproduce the complexity and artistry of human creativity and expression.
But that’s not all. AI solutions may not fully reflect the importance of simplicity and intuitiveness in designing for human beings. As an analogy, consider the example of car manufacturers reintroducing physical buttons, after many years of relying on touch screens. Physical buttons offer humans simplicity, reliability, and immediacy. They minimize human errors, are easier to use, and allow drivers to keep their eyes on the road. This isn’t a critique of technological progress, but a recognition of human needs and behavior.
Regardless of an AI solution’s sophistication, if it’s cumbersome to comprehend and use, its effectiveness diminishes. AI should not only resolve complex issues but also present its solutions in a user-friendly, intuitive way.
Here, the notion of “appropriate complexity” is relevant. For example, as data scientists, we should contemplate whether an intricate model, such as deep learning, is essential for a particular task. Could a simpler and more explainable model, such as linear regression, yield the same result? This consideration not only streamlines the process but also conserves resources and provides more reliable, reproducible results. Once again, this requires recognition and understanding of human needs and behaviors — a feat that AI may never truly accomplish.
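To make this concrete, here is a minimal sketch (not from the original article, and assuming scikit-learn with a synthetic dataset) of how a data scientist might run the “appropriate complexity” check: fit a simple, explainable baseline alongside a heavier model and compare them before committing to the extra complexity.

```python
# Sketch: compare a simple, explainable baseline against a heavier model
# before committing to the extra complexity. Uses scikit-learn and a
# synthetic dataset purely for illustration.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

simple_model = LinearRegression()
complex_model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)

simple_score = cross_val_score(simple_model, X, y, cv=5, scoring="r2").mean()
complex_score = cross_val_score(complex_model, X, y, cv=5, scoring="r2").mean()

print(f"Linear regression R^2: {simple_score:.3f}")
print(f"Neural network R^2:    {complex_score:.3f}")

# If the gap is negligible, the simpler, more explainable model is
# usually the better engineering choice.
```

The point is not the specific models or numbers, which are invented here, but the habit: quantify what the added complexity actually buys before paying for it in opacity, compute, and maintenance.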
The limitations of humans
At first glance, human limitations may seem like disadvantages, but within the complex interplay between humans and AI, they often transmute into unexpected strengths.
The human brain is the result of millions of years of evolution, finely honed for survival in a complex and uncertain world. Its strength lies more in its streamlined information processing than in raw computational power. For example, humans can’t remember everything. We could read the entirety of Wikipedia, but we wouldn’t be able to retain and recall all of it. This isn’t a deficiency; it’s an evolutionary adaptation. Being selective about the information we hold on to is essential for navigating the world effectively.
One way our brain prioritizes information is by associating emotions with our thoughts. Emotions act as labels, signifying the importance and relevance of various thoughts and ideas. When we undergo a positive or negative experience, our brain classifies it as something worth remembering or something disposable. Consequently, emotionally charged events tend to linger in our memories, while commonplace experiences often recede into oblivion. This emotional tagging system, while frequently burdensome, isn’t a defect; instead, it’s an advanced mechanism for rapidly sorting and prioritizing information based on its perceived significance. This system is vital not only for efficient information processing but also for shaping our perceptions, beliefs, and behaviors.
And this system evolved over time to put humans in the best position possible to survive (and ideally thrive) in the unique environment where we exist. Whatever idiosyncrasies we may have as humans, the constraints of our mind are precisely what equip us to be effective decision-makers. These limitations guide us to focus on the most relevant or significant matters, evaluate different options, and make decisions amid uncertainty.
Human thinking + machines that think
The attention mechanism, showcased in the landmark paper “Attention Is All You Need,” offers a fitting engineering analogy to our brain’s method of prioritizing information. However, a critical distinction exists between human attention and AI attention. While our attention emerged organically as a survival tool over millions of years, AI attention is strategically designed and implemented. These represent two fundamentally distinct approaches to understanding our world.
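For readers curious what that designed form of attention looks like in code, here is a minimal NumPy sketch of scaled dot-product attention, the core operation from that paper. The shapes and random values are illustrative only, not drawn from any real model.

```python
# Sketch: scaled dot-product attention, the core operation from
# "Attention Is All You Need". Shapes and values are illustrative only.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    # Similarity of each query to every key, scaled by sqrt(d_k)
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns scores into weights: how much "attention" each input gets
    weights = softmax(scores, axis=-1)
    # Output is a weighted mix of the values, i.e. a prioritized summary
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query tokens, dimension 8
K = rng.normal(size=(6, 8))   # 6 key tokens
V = rng.normal(size=(6, 8))   # 6 value tokens

output, weights = scaled_dot_product_attention(Q, K, V)
print(weights.round(2))  # each row sums to 1: a learned-style "priority" over the inputs
```

Every weight in that matrix is the product of an explicit, engineered formula. That is precisely the contrast with human attention, which was never designed at all.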
This divergence becomes evident when we encounter something unfamiliar in a text, struggle to comprehend it, but eventually decipher it through practical application. Learning rooted in firsthand experience contrasts sharply with learning derived from quantitative measurement. Our design limitations as humans facilitate a different, and often more nuanced, approach to learning and understanding than that of artificially intelligent systems.
Despite its computational prowess, AI does not possess our human system for prioritizing and assessing information. It is adept at processing vast quantities of data swiftly, but it lacks our emotional tagging system, our ability to navigate complex social and cultural contexts, and our capacity for intuitive judgment — elements that confer meaning on our existence.
In this regard, our human limitations serve not only as assets in and of themselves but also as essential supplements to AI. Our finite memory capacity compels us to prioritize information, thereby directing our attention to what is most vital. Our emotions, despite their subjective and chaotic nature, offer an efficient means of evaluating the significance and relevance of information. They expedite decision-making in scenarios where the “correct” answer is ambiguous or involves balancing competing trade-offs.
Moreover, our limitations spur creativity and innovation. They motivate us to devise new methods, to think innovatively, and to concoct original solutions to intricate problems. They are the fountains of our ingenuity and adaptability — attributes still unrivaled by AI.
And it’s important to remember one fundamental principle: AI, in its inception, was largely developed to help solve challenges affecting human lives. Its purpose isn’t to pursue some abstract or platonic solutions unrelated to the human experience. Though seemingly narcissistic and self-serving, this notion can ground our expectations for AI. It reminds us that while AI can greatly augment our capabilities, it remains a tool designed to serve, complement, and enhance the human experience, rather than replace it. The intersection of human limitation and AI’s potential is where we can find a harmonious collaboration, serving as a beacon for our ongoing journey in data science.
What does this mean for data science?
“All truth is comprised in music and mathematics.” (Margaret Fuller)
In the grand orchestration of data science, AI and humans contribute different, yet harmonious elements. AI embodies the mathematical component, processing numbers with increasing accuracy and on a scale unfathomable to the human brain. We, the humans, represent music, infusing emotion, intuition, and creativity — elements that animate the raw data, metamorphosing it into knowledge, insight, and ultimately, wisdom.
The human-AI collaboration ideally amalgamates the best of both worlds: the computational strength of AI and the emotional intelligence of humans. It is through this partnership that we can fully realize the transformative potential of AI in data science.
As AI becomes increasingly incorporated into our work, it is likely to alter not only the tasks we execute but also our problem-solving approaches and even our perception of data. With AI assuming responsibilities such as data cleaning, sorting, and basic analysis, data scientists can dedicate more time to high-level tasks like interpreting results, strategizing, and brainstorming how to apply insights to business or scientific contexts.
AI can aid data scientists in discerning patterns or relationships in data that might otherwise be missed, fostering new lines of inquiry, or unearthing surprising insights. This encourages data scientists to perceive their data in interesting new ways.
In addition, collaborating with AI can inspire data scientists to reflect more profoundly on the assumptions and biases that can influence data analysis. While training AI models, data scientists must consider the possibility of bias in the data or the model, which could yield distorted or erroneous results. This necessitates a more critical and self-reflective approach to data analysis.
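As one small, hypothetical example of that self-reflective habit (the columns, groups, and data below are invented for illustration, not a real audit), a data scientist might break a model’s accuracy out by subgroup before trusting its aggregate score:

```python
# Sketch: a basic bias check, comparing a model's accuracy across subgroups.
# The "group" column and data are hypothetical; real audits go much deeper.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "feature_1": rng.normal(size=n),
    "feature_2": rng.normal(size=n),
    "group": rng.choice(["A", "B"], size=n, p=[0.8, 0.2]),  # imbalanced subgroups
})
# Synthetic target whose relationship to the features differs slightly by group
signal = df["feature_1"] + np.where(df["group"] == "B", 0.5 * df["feature_2"], 0.0)
df["target"] = (signal + rng.normal(scale=0.5, size=n) > 0).astype(int)

train, test = train_test_split(df, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(
    train[["feature_1", "feature_2"]], train["target"]
)

# Accuracy broken out by subgroup: a large gap is a signal to investigate
for group_name, part in test.groupby("group"):
    acc = model.score(part[["feature_1", "feature_2"]], part["target"])
    print(f"group {group_name}: n={len(part)}, accuracy={acc:.3f}")
```

A gap between subgroups does not by itself prove bias, but it is exactly the kind of question a purely aggregate metric never prompts anyone to ask.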
Given AI’s limitations, it is vital to discern when to employ AI and when to rely on traditional data analysis techniques. Understanding that AI is not always the optimal solution can conserve resources, avert unnecessary complexity, and optimize efficiency. For instance, when the data set is small or the task straightforward, traditional methods might be more efficient and effective. Conversely, when dealing with large, complex data sets, utilizing AI might be beneficial. Striking a balance between using AI’s capabilities and harnessing our human faculties is key.
Lastly, the inherent human ability to make intuitive judgment calls — as products of our evolution — provides us an advantage over AI. Our brains have evolved over millennia to make instant decisions in complex, dynamic environments, often in the absence of complete information. These heuristics or mental shortcuts, which we recognize as intuition, enable us to make decisions rapidly and efficiently.
And business leaders agree. In fact, Microsoft’s most recent Work Trend Index report finds that “analytical judgment,” “flexibility,” and “emotional intelligence” top the list of skills leaders believe will be essential for employees in an AI-powered future — all of which have deep implications for data scientists in particular.
Conclusion
In the evolving narrative of data science, both AI and humans have crucial roles to play. Each has unique strengths and limitations that the other can complement. As we advance further into this symbiotic relationship, we must keep in mind that technology and human intelligence are not in opposition, but rather, they are in dialogue. The computational ability of AI provides us with extraordinary tools for managing and interpreting vast amounts of data. Still, it’s the distinctly human faculties — emotion, intuition, creativity, and nuanced understanding — that enable the effective use of this data with context and deeper significance.
However, history serves as a potent reminder of our human fallibility. It underscores the fact that we, as a species, routinely make disastrous decisions influenced by the very human traits we value — our emotions, our prejudices, and the cognitive shortcuts that evolved to aid swift, though frequently suboptimal, judgments. It’s a historical truth that technology, while a tool for immense good, has also been a catalyst for unimaginable harm.
This is where AI distinguishes itself from previous technologies. Unlike destructive technologies of the past, which were incapable of dissent, AI can “talk back.” With this ability, AI also possesses a theoretical potential for devastation beyond anything we’ve seen — a sentiment recently shared in this 22-word statement by a group of AI researchers, engineers, and CEOs including Kevin Scott (CTO, Microsoft) and Eric Horvitz (Chief Scientific Officer, Microsoft). This intriguing dynamic calls for a relationship in which technology is governed by humans, yet simultaneously has the potential to advise us against acting too impulsively or succumbing to our human proclivity for error.
AI’s role will continue to grow, enabling us to undertake tasks more efficiently and discover patterns and insights that we might otherwise overlook. At the same time, it’s our human limitations that will keep us focused on what’s important, inspire creativity, and drive innovative problem-solving. While humans need to guide technology’s evolution and application, there’s a fascinating prospect that AI, armed with the ability to reason, could start to inform — and advise — humans more effectively, mitigating some of our innate biases and errors.
Embracing the partnership between humans and AI allows us to fully exploit the strengths of both. This collaboration represents not merely the future of data science but the present reality that we must navigate with wisdom and foresight. As data scientists, it is our responsibility to harmonize these two components of the grand symphony of data science: the math and the music. Only then can we convert the raw data into meaningful narratives that guide our decisions and shape the world around us. In this intricate dance between humans and AI, we are choreographing the future of data science. As the dance evolves, so does our understanding, pushing us forward into a future filled with untold potential and yet-to-be-discovered insights.
Alex Farach is on LinkedIn.