“To boldly go where no one has gone before.”
The famous line from Star Trek’s opening monologue. The series followed the voyages of the starship USS Enterprise on her mission to explore “strange new worlds, to seek out new life and new civilizations.”
I think we are all on a similar voyage right now. And we don’t have to engage a “warp drive” to experience new life and new civilizations. Life — here on Earth — is changing rapidly due to technological developments.
A “new civilization” is already emerging around us. And, for sure, this new world is, indeed, very strange.
New Life, New Civilizations
Of course, humans have always had to deal with technological change. But in the past, we had long product lifecycles and a relatively steady industrial environment. We had time to adjust to the emergence, deployment, and dissemination of new technologies.
Today, however, innovation cycles are much shorter. We are experiencing an exponential growth in technology. And the future promises more of the same — it is now impossible for anyone to predict accurately where we are going to be in even the short to medium term.
Perhaps most significantly, we are witnessing massive developments in the field of “artificial intelligence.” And we are only just at the beginning. The “Internet of Things” (IoT) will accelerate the developments and significantly improve AI applications.
Think about it. IoT with improved 5G connectivity and the proliferation of sensors will lead to significantly more high-quality data. More and better data will produce better AI. In turn, better AI will make the IoT even more prolific, useful, and accessible.
Soon, we will interact with machines without even realizing it. IoT and AI are helping companies move quickly from merely making products to offering algorithmically driven and personalized services to consumers.
And, in general, we love these services. Even those who claim to hate the connected world don’t wish to be completely disconnected. We increasingly want all our devices to be smart and integrated. Such smart connectivity offers convenience, speed, and a better experience. Of course, we also want the applications to be safe and secure, but most people seem willing to make a trade-off to access these new services.
But, Are We Ready for the Revolution?
Last week, I attended a conference on AI and regulation in Japan. Most of the participants were regulators, lawyers, or law professors.
Listening to the speakers, I heard a definite answer: “No, not yet.”
Of course, everyone recognized the advantages of AI in areas such as health, energy, the environment, and financial inclusion. But the main focus in the room was on the challenges AI creates — challenges such as cybersecurity, hacking, and the misuse of AI (by governments and big tech companies).
AI bias was the most discussed issue. References were constantly made to biases in data sets and biases in the design of the algorithms.
Perhaps unsurprisingly, given the participants, everyone was focused on the risks that are being created.
And in thinking about how to respond to these risks, the room was full of the usual “legal” solutions. To protect vital public interests, AI must be explainable, transparent, accountable, trustworthy, and human-centric.
Most participants agreed it isn’t necessary to reinvent the wheel. We may not realize it, but there are already a lot of rules and regulations that are applicable to AI. We should focus on gaps in the existing framework. And, of course, the question of improving the “enforceability” of AI-related regulation also came up.
There seemed to be a clear consensus that we need more centralized oversight in the new world of AI. This kind of “solution” can be found in the many different guidelines and discussion papers that are being produced by governments and international organizations all over the world.
Problems . . .
This all sounds wonderful in theory. But the fact is that we are stuck. The more honest participants made it very clear that we have been talking about the same old “solutions” for years. And yet, we don’t seem to get anywhere. There is no progress in the discussion — just the same old ideas and general proposals that are constantly recycled at events of this kind.
Having sat through two days of discussion, I see multiple interconnected problems with the current debate. Here are five:
Old World Solutions Don’t Cut It
We tend to use “old world” concepts, models, paradigms, and government-led principles to explain and regulate the “new” world of AI. Too often, these old world solutions are inappropriate to the very different realities of today.
For instance, the transparency of AI algorithms and the responsible use of data is much easier said than done. It will be relatively costly (and — increasingly — even impossible) to fully understand, monitor, and challenge algorithms. And even if transparency is feasible, there may be severe downsides in demanding such openness. Consider the risk of the manipulation or hacking of “transparent algorithms.”
In short, we need to come up with faster, smarter, and more creative ways of regulating AI applications. And the regulation of the future will need to be designed and embedded in the technology itself.
General Solutions Are Not the Answer
Too often, we talk about and try to regulate AI technology “in general.” This isn’t going to work either. AI has different applications in different contexts (e.g., health, agriculture, fraud detection, etc.). What is acceptable in one area might be hugely problematic in another.
Also, it doesn’t make sense to continue to talk about AI in isolation. AI will work in tandem with other technologies, such as IoT, blockchain, robotics, etc.
Every use case is different. And every use case requires a distinct solution or, at least, consideration of a distinct solution.
Nobody Seems to Know Enough (Anything?) About the Underlying Technologies
We need more digitally savvy people to participate in the “regulation” discussion. And here I don’t refer only to AI specialists, computer scientists, and mathematicians. We also need people who understand how AI — in combination with other emerging technologies — will impact society in the next decade.
How will AI impact consumer behavior? What are companies already doing to make AI fairer and more transparent?
At the moment, the lawyers and regulators are talking about issues that they don’t fully understand. I am not saying that they need to become technologists. But they do need a lot more technologists in the room.
Nobody Appreciates the Power of Co-Creation
We need more “co-creation.” More industry-specific input is required in the design of regulatory solutions. Conferences and other meetings are still too restricted to people with a particular disciplinary background.
Multidisciplinary discussions, involving diverse stakeholders, are a must. Too often, however, the business perspective is ignored or treated with suspicion. The discussion is characterized by a pervasive anti-corporate mood.
The Focus is on Government, Not Business or Society
There is a tendency to focus on “centralized regulation.” Perhaps decentralization is a better way of “regulating” the potential issues with AI.
There are several arguments in favor of a more decentralized approach. Given the complexity of the underlying technologies, it makes sense to involve the stakeholders with the best knowledge of those technologies — the companies (large and small) that are driving the AI revolution.
Moreover, we can already see that AI researchers who work at Big Tech companies are becoming more actively involved in AI ethics and regulatory discussions. They are particularly interested in the objectives of the users of AI (research, commercial, military).
We could even go a step further and argue that we need a more radical decentralization of business and society. Such a disintermediation approach would prevent too much data from being concentrated in only a few hands (governments/big companies).
What’s Needed Now?
The key takeaway from my trip to Japan is that the current discussion around artificial intelligence is just too narrow. IoT, AI, and other emerging technologies will lead to a vast economic, social, and cultural transformation in our society. We are on an Enterprise-like voyage to a new world (whether we like it or not). It is a new world with unknown opportunities and perils. And we have to be much smarter about these perils and continually ask whether we need protection and what kind of protection is most appropriate.
And, for me, any such solutions will need to be new, specific, technological, co-created, and business-driven.
Crucially, however, we also have to be careful not to interfere too much with the current development of emerging technologies if we want to solve the most pressing global problems in today’s world (e.g., the environment, inequality, poverty, etc.).
So, are we ready for the AI revolution? Not yet. But, if we think and act smart, we can be.