New Worlds, Old Jobs and the Future of AI

The jury is still out on the impact that AI will have on our world. Will the benefits outweigh the risks? Our recent conversation with Andrew Gardner and Keith Rayle (heads of AI at Symantec and Fortinet, respectively) concluded with an interesting discussion about what we might see in the future.

CLX Forum · Aug 27, 2019 · 5 min read


Creating New Worlds

According to Andrew Gardner, AI is rapidly surpassing humans at creating and altering digital content, ranging from text to photos to videos to machine data and more. “Imagine whole new ways of interacting with devices and your environment, for example, saying just a few words and ending up with a complete blog post or document, far faster than you could write it yourself. Or imagine digital studios and creators being able to write out a short ‘script’ that would be rendered as a commercial-quality movie, game or interactive experience. These are new vistas for human-computer interaction that are happening now as a result of AI. They grant us unprecedented expressive freedom and abilities,” Dr. Gardner shares enthusiastically. “In the long run, AI-driven or assisted content development will undoubtedly make us more productive and creative, but there will be hurdles in the journey,” he cautions. “One clear downside, of course, is that not all new content and expression is benign. We’re experiencing this acutely today, as fake content casts uncertainty and taint on our political and journalistic processes, and exposes us more directly to the fundamental security weakness of social engineering.”

Keith Rayle agrees: “The amount of information that we’ll have access to, thanks to AI capabilities, is going to be mind-boggling. But there’s a worry about what we’re going to do now, as people, when existing creative processes change. This is going to become a profound topic.”

Is my job at risk?

Considerable attention has been given to the impact that AI will have on the workforce. “The most straightforward and disruptive impact of AI on the workforce will be through job displacement as a result of AI-enabled automation and scaling,” notes Andrew Gardner. “AI can learn to reproduce many job tasks, even those requiring perceptual or motor skills, or physical dexterity, and AI-physical systems like robotics are advancing rapidly. A good question to ask is: which jobs are safe?” he explains. “I’d bet on jobs that leverage the value of the human element, those with characteristic requirements like deep, subtle or specialized expertise; uniquely creative aspects; de novo problem-solving skills; subtle or sophisticated interaction with people; and critical decision-making.” Keith Rayle questions what these changes will mean for people whose jobs involve manual labor: “Now that we’ve got a system that can operate railroad machinery and repair the tracks, all by itself, without anyone watching it, what happens to all those jobs? This is what frightens people. What do we do next?”

It isn’t only physical jobs that might be affected by AI. Rayle thinks that “many roles will become digitally commoditized. Think about education, and the textbooks that are out of date as soon as they’re printed. Now imagine having an AI that collects the latest knowledge and methods. It’s not just about algorithms, but about geopolitical landscapes, and the shifts and changes in how societies are interacting on a local versus global level. Imagine international interactions being determined by AI systems leveraging all historical and current information. As AI becomes more interesting in terms of its capabilities, it’s probably inevitable that we will hand over more decision-making power to these systems.”

Who should have control?

While there is a growing consensus that governments should do more to regulate the development of AI, Andrew Gardner also argues against leaving AI in the hands of the elite. “You certainly don’t want technocrats and billionaires making all the decisions! And that’s one of the problems now, because few people outside of those elite groups actually understand the technology and its usage, or have the capital to drive the big AI changes. The barriers to understanding the nuances of these decisions make the complexities of climate change look profoundly basic.”

The power to change the world

Keith Rayle believes that we may be at a tipping point between greed and our species’ self-actualization. He asks, “Will we develop AIs to the point where we don’t have to worry any more about food, because crop growth and distribution are managed with complete efficiency, eliminating global hunger and malnutrition? Or will greed and corruption take over, and limit our access to the beneficial uses of AI so that the entity controlling it gains more power, or more resources?” Rayle’s view of our future with AI is hopeful, but not optimistic: “There’s a real chance that the path we’re headed down may be focused on the hoarding of resources. The next few decades will be critical. If we can survive these next 30 or 40 years as a species, moving closer and closer to the point that AI can support the entire human race, we will win. But I don’t know if, as a species, we’re capable of that.”

The Dawn or Doom conference, held annually at Purdue University, examines some of these larger questions about whether technology is enabling society or hastening our demise. Rayle, who spoke at the 2018 conference, believes the next few years will show us which direction we’re moving in. Andrew Gardner points out that people on both sides of the debate should recognize that the technological impact of a small, local bad decision has grown over time: “We have so much more capability, now, to act unwisely or terribly. Want to change a government? Destroy an energy grid? Ruin an ecosystem? Thwart an economy? These are all possible consequences of local decision-making. It’s not all doom-and-gloom, however,” he reminds us. “We also have increased awareness. And surprisingly (in a good way), it seems that our tolerance for grossly negligent, poor or malicious human decision-making is decreasing. We’re tackling the hard questions. Can our society evolve to make AI more responsible — fair, unbiased, ethical, interpretable, privacy-preserving, etc.? What’s required for the control, regulation, oversight, development and incentivization of AI systems? What forethought will help us address risk, especially from unanticipated catastrophic consequences of AI?”

Finally, as Keith Rayle reminds us, the growing number of radicalized groups in our world means that just a few individuals could do a lot of damage. Rayle asks us to consider the Chinese blessing/curse, “May you live in interesting times.” Whatever AI ultimately provides, as determined by those making decisions about its use, it will definitely be an interesting ride for us all.

Download your FREE copy of Canadian Cybersecurity 2020. Available on October 9th: https://secure.e-ventcentral.com/event.registry/CanadianCybersecurity2020/

Check out the CLX Forum blog and follow the CLX Forum on LinkedIn and Facebook to keep up to date with the latest happenings in the world of cybersecurity.

Interested in becoming a contributor? If you’ve got a topic which you feel is important to your peers, we want to hear from you! Get involved today by visiting: https://www.clxforum.org/get-involved/


The Cybersecurity Leadership Exchange Forum (CLX Forum) is a thought leadership community created by Symantec.