AI on the Frontline: Navigating the Emerging Threat Landscape and Building a Secure Digital Future

Published in ReadyAI.org · 4 min read · Jul 5, 2023

Building next-generation training for AI/Cyber professionals

By: Rooz Aliabadi, Ph.D.

The rising influence of artificial intelligence (AI), particularly generative AI, and the potential threats it poses are swiftly coming to surpass even the significant geopolitical risks posed by U.S. adversaries. This shift marks a pivotal moment in an already volatile and constantly evolving digital security landscape.

In the Intelligence Community’s 2023 Annual Threat Assessment, the Chinese Communist Party was identified as the most formidable threat to U.S. national security, primarily due to its relentless pursuit of advanced technologies, specifically in cyber warfare and quantum computing. However, the ensuing months witnessed a surge in threats linked to AI and generative AI, an apparent deviation from the primary focus on China. This alarming trend prompted numerous former U.S. leaders, currently engaged in the private sector, to voice their concerns about the widening threat landscape due to AI proliferation and misuse.

While China’s tech advancements warrant attention, the risks posed by the rapid development and deployment of AI are increasingly becoming a significant challenge in their own right.

We’re witnessing an alarming trend of organizations integrating AI into their daily operations at an unprecedented pace. This includes adopting sophisticated AI-driven tools such as the chatbots ChatGPT and Google Bard. What’s particularly disconcerting about this trend is how these AI tools leverage data. Corporations are in a fierce race to integrate large language models (LLMs) that rely on neural networks into their systems. This integration serves many practical purposes, from helping clients book hotels to summarizing meeting notes. However, the inherent risk of this deepening AI-human interaction cannot be ignored. As these LLMs process immense amounts of data to optimize their networks, even seemingly innocuous queries can expose vulnerabilities.

Frontline workers are adopting public LLMs like ChatGPT to work more efficiently, but the prompts they submit often contain proprietary, sensitive, or confidential information. If that data is misused, the fallout could be catastrophic. A startling report by Cyberhaven found that over 10% of employees evaluated had used ChatGPT at work, and nearly 9% had entered their company’s confidential data into chatbots.

This reality opens the floodgates for adversaries and criminal organizations to exploit these technologies to gain vital information on the country’s critical infrastructure, potentially leading to devastating cyber-attacks.

While companies are making concerted efforts to prevent mishandling of confidential data through AI tools, current protective measures are insufficient. Numerous corporations have restricted access to such technologies, while others have cautioned their employees against entering company data into these AI systems.
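As one illustration of what such a safeguard might look like in practice, the sketch below (in Python, with entirely hypothetical pattern names and rules) shows a naive pre-submission filter that scans an outgoing prompt for a few sensitive-looking patterns and redacts them before the text is ever sent to a public chatbot. It is a minimal sketch of the idea under those assumptions, not any vendor’s actual tooling; real data-loss-prevention systems are far more sophisticated.

```python
import re

# Illustrative patterns an organization might treat as "do not send to a public LLM".
# These names and rules are hypothetical examples, not an actual product or standard.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace likely-sensitive substrings with placeholders before the prompt
    leaves the corporate network; return the cleaned text and what was flagged."""
    findings = []
    cleaned = prompt
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(cleaned):
            findings.append(label)
            cleaned = pattern.sub(f"[REDACTED {label.upper()}]", cleaned)
    return cleaned, findings

if __name__ == "__main__":
    text = "Summarize this note and email the result to jane.doe@example.com (key sk-abc123def456ghi789)."
    safe_text, flags = redact_prompt(text)
    print(safe_text)  # placeholders substituted for the flagged substrings
    print(flags)      # ['email', 'api_key']
```

Even a simple filter like this illustrates the trade-off: it catches obvious patterns such as email addresses or key-like strings, but it cannot recognize context-dependent secrets, which is why policy and workforce training remain essential.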

Nevertheless, the use of AI-driven search is skyrocketing: ChatGPT, developed by OpenAI, attracted over 100 million monthly active users shortly after its launch, and over 300 applications currently use the technology.

The public sector, too, has heavily adopted chatbots, with cities like Los Angeles planning to use AI to streamline bureaucratic tasks. Despite the many benefits of AI, there are grave concerns regarding transparency, accuracy, and vulnerability to hacking.

The risk of data manipulation escalates as LLMs become accessible to a broader population, raising concerns about their misuse in propagating false information and disinformation. These concerns are shared by many who warn about the destabilizing potential of AI and other technological advancements on society and democratic systems.

CCBC / ReadyAI Cyber/AI Partnership

These challenges underline the pressing need to adapt our cybersecurity training to reflect these emerging threats. That’s why our team at ReadyAI is partnering with the Community College of Beaver County to develop one of the most advanced AI/Cyber programs in the U.S. and train the next generation of AI/Cyber professionals as part of the Build Back Better initiative in Pennsylvania. Our collective efforts will ensure a secure digital future for all.

This article was written by Rooz Aliabadi, Ph.D. (rooz@readyai.org). Rooz is the CEO (Chief Troublemaker) at ReadyAI.org.

To learn more about the ReadyAI / CCBC collaboration on the AI/Cyber program, visit www.readyai.org or email us at info@readyai.org.


ReadyAI is the first comprehensive K-12 AI education company to create a complete program to teach AI and empower students to use AI to change the world.