Kayne McGladrey Of Hyperproof On Cybersecurity Compliance in the Age of AI Threats
An Interview With David Leichner
Align on the Risks: Companies benefit from agreeing on and documenting their business risks associated with AI. A shared understanding improves risk management, enhances collaboration across different teams, and helps in mitigating potential risks more effectively. To agree on the risks associated with AI, executives should use a cross-functional approach that involves key stakeholders from business operations, legal, IT, and compliance functions. Companies should also establish processes for ongoing monitoring, assessment, and enhancement of AI risk management strategies to adapt to changing business processes and emerging regulations.
Compliance with regulatory standards and industry-specific guidelines for product security is an indispensable part of cybersecurity. In an age where malicious AI poses a significant threat, how do organizations ensure their product security strategies are not just effective, but also fully compliant? As a part of this series, I had the pleasure of interviewing Kayne McGladrey.
Kayne McGladrey, CISSP, is the field CISO for Hyperproof, a senior member of the IEEE, and the author of The GRC Maturity Model. He has nearly thirty years of experience in cybersecurity, has served as a CISO and advisory board member, and focuses on the policy, social, and economic effects of cybersecurity lapses on individuals, companies, and the nation.
Thank you so much for joining us in this interview series! Before we dig in, our readers would like to get to know you. Can you tell us a bit about how you grew up?
Throughout my childhood, I sincerely believed that I was going to be in musical theatre productions on Broadway. This was a sharp contrast from what the high school guidance counselor recommended, which is that I should be a lumberjack. It’s still unclear how playing D&D, playing supporting roles in musicals, and working at a McDonald’s in high school led the counselor to recommend I take a job outdoors. In my first semester of college, I realized that Rent wasn’t just the name of a musical, but something that I’d need to pay monthly. Thankfully, the improv skills that I’d learned helped after I dropped out at the end of the first semester, and I got a consulting job as a systems administrator with an employer that took a chance on me.
Is there a particular story that inspired you to pursue a career in cybersecurity? We’d love to hear it.
One of my first contracts was with a government agency that handled people’s pensions and retirement funds. They’d moved into a new facility and data center, and that data center was behind several high-security doors. However, they also wanted to show off their commitment to technology, so they kept one of their new SPARC servers, about the size of a dishwasher and on caster wheels, in their lobby, connected to the data center. The lobby itself had an ADA-compliant ramp leading to it, no concrete bollards, and regular glass doors that weren’t shatterproof, because they hadn’t adequately considered physical threats. They had a terminal on top of the server that didn’t require a username or a password; anyone could walk up and theoretically change their monthly pension payment, or someone else’s. This lax approach to the long-term financial security and stability of their former employees worried me, as I realized the potential for both intentional and accidental harm. It led me to take a more security-conscious approach, although we didn’t call it ‘cybersecurity’ in the 1990s.
Can you share the most interesting story that happened to you since you began this fascinating career?
I was part of an international team that was called in, post-breach, at a financial services organization overseas. Prior to flying out, I was on several calls with the client, and one question they insisted they’d answer only after we arrived was about who had access to the servers. I arrived at their office building after an overnight flight and was struck by how their lobby had a full bar next to the reception desk. They later told me they had a disco in the basement of their building, and their adjoining building had a Michelin-starred restaurant which they also owned.
The team assembled in a large meeting room, and after pleasantries we set to work, and I asked the same question about who had access to the servers. The client’s lead system administrator guiltily said that they had used the root account to log in. I wasn’t clear on what he meant; I assumed the attacker had compromised the root account’s password, so I pushed for clarity on who else had access to the servers. “No, we all use the root account to log in,” was his reply. I still thought he must be talking about the systems administrators, so I asked him to show me the /etc/passwd file for one server, and unfortunately, root was the only interactive login user listed. Disbelieving, I asked to see a production server, because I hoped he had been showing me a test or development server.
No. All the employees of the company logged into the servers as root, and all the servers shared the same password. And because this was financial services, that single password protected their trading desk and the source code that ran their global trading desk software, and it was posted on yellow sticky notes in case a trader forgot. The root password had also never been changed.
What surprises me most about this story is that none of this is why they brought us in. The company didn’t bother with keys or keycards, and an enterprising attacker had walked onto one of their trading floors, executed several trades, pocketed a million dollars in profit, and walked away. The attacker then repeated this twice more at two other global locations without anyone noticing. We’d been called in to recommend how to prevent the incident from happening again, and when we delivered those recommendations, they spanned three binders of organizational, policy, and security changes. Unsurprisingly, this included having employees use named accounts instead of logging in as root.
You are a successful leader. Which three character traits do you think were most instrumental to your success? Can you please share a story or example for each?
Communication skills are probably the basis of where I am today. Most cybersecurity professionals have solid technical skills and enjoy talking to other professionals about those technologies. However, they struggle when asked to explain what they do to a new manager, or to a director, or when thinking of how to create a presentation for a non-technical audience. As I’ve moved through roles, I’ve had the challenge of learning how to present legal, regulatory, and technical data to executives who have MBAs and a minimal understanding of cybersecurity. Communication skills are important, whether someone wants to pursue a management track, or just become a distinguished engineer.
I’m a naturally curious person. This helps because I’m entirely self-educated, and I’ve frequently had to work very hard to learn a lot of information in a short amount of time. Early in my career, I didn’t know that software engineering was a discipline with metrics that could reduce vulnerabilities, because that wasn’t a topic covered in my Introduction to Theatre college class. Once I learned about it, though, I learned to teach it, because I realized not all the developers around me could measure code quality. Similarly, I knew little about corporate finance until I had to put together my first departmental budget for my team. Today, I spend a substantial amount of time following legal, regulatory, and contractual trends so that I can interpret and explain how they potentially affect corporate strategies for cybersecurity and privacy.
Finally, I’m a naturally social person. One of the easiest recommendations I give to new CISOs is to go have coffee with their company’s general counsel, and then their CFO, and to just work through the executive team. Knowing people’s voices, habits, expressions, and traits has helped me in the highest-pressure situations I’ve faced. For example, knowing what your attorney sounds like when they join a 2:00 AM incident response briefing call helps. Knowing how people want you to communicate with them matters. I worked with a VP who only wanted the facts, not opinions, so meetings rarely went over five minutes. Another VP at the same firm wanted to socialize, catch up, and would often only leave two minutes for actual facts in an hour-long meeting. Knowing these character traits helped me prepare for meetings with them so that I’d meet their needs.
Are you working on any exciting new projects now? How do you think that will help people?
I wrote a book this year, “The GRC Maturity Model”, in response to the increasingly difficult regulatory landscape, but also to help CISOs who are facing unprecedented civil liabilities for just doing their jobs. For nearly thirty years I watched cybersecurity be treated as a ‘one and done’ project rather than a process or a core business competency, and I’m seeing the same pattern with GRC. Since the book was published, I’ve been giving talks about how companies can invest in GRC as a key business competency to de-conflict the relationship between the internal audit team and the cybersecurity team. This also helps companies make trust in cybersecurity a competitive, strategic advantage, not just a cost center.
Ok super. Thank you for all that. Let’s now shift to the main focus of our interview. How does the emergence of malicious AI threats impact compliance requirements for organizations? Are there specific regulations or standards that address the unique challenges posed by AI-related security threats?
The emergence of malicious AI threats affects compliance requirements by pushing organizations to improve data protection, privacy, and security. According to The Impact of Technology in 2025 and Beyond: An IEEE Global Study, 48% of global technology leaders said a top potential use for AI applications would be real-time cybersecurity vulnerability identification and attack prevention, while 41% expect their company to start implementing robotics cybersecurity into operations. Organizations must enhance their existing risk management and cybersecurity practices to counter these threats, which may include regular audits, updating existing controls, and enhancing collaboration for threat information sharing. For example, a malicious AI threat that requires more robust risk management is the use of AI to generate deepfake audio or video content. This technology can impersonate trusted executives or colleagues to manipulate or deceive targets into revealing sensitive information or approving fraudulent transactions. This type of threat requires organizations to implement advanced verification processes and educate employees on recognizing and responding to such tactics. It also continues to disrupt previously settled password reset processes, as help desks struggle to verify the identities of unknown callers who’ve “forgotten their password.”
From a policy perspective, the EU AI Act attempts to address AI-related security threats by prohibiting the creation or operation of AI systems that present an “unacceptable” risk. This includes systems used for social scoring or those employing deceptive techniques that can materially distort behavior and cause harm. By banning these high-risk applications, the Act aims to protect fundamental rights and prevent the deployment of AI systems that could pose significant security threats. However, this ignores the fact that adversarial nation states and criminals have no interest in following the EU AI Act. In the US, various states have laws that address some of the risks associated with the use of AI. At the Federal level, the FCC has applied existing laws, like the Telephone Consumer Protection Act, to AI technologies that generate human voices. I expect we’ll see litigation leading to the retroactive adoption of legislation in this space.
Can you provide an example of a compliance framework or approach that organizations can adopt to effectively address security concerns arising from malicious AI? How does this framework help organizations mitigate risks and stay compliant?
In terms of frameworks, ISO 42001 helps organizations mitigate risks by providing guidelines for establishing a robust AI management system. It emphasizes risk assessment, ethical AI practices, and continuous monitoring, ensuring that AI systems are secure and aligned with regulatory requirements. By following this framework, organizations can identify potential threats, implement necessary controls, and maintain transparency, which aids in compliance with emerging laws and standards. By comparison, the NIST AI Risk Management Framework (RMF) addresses inbound threats caused by the external, malicious, intentional use of AI by threat actors. It provides guidelines for managing AI risks, including those posed by external threats, by promoting trustworthy AI development and use.
OWASP has also developed resources for AI risks, including the OWASP AI Security and Privacy Guide. This guide provides a framework for identifying and mitigating security and privacy risks associated with AI systems. It helps organizations understand potential vulnerabilities and implement best practices to safeguard AI applications throughout their lifecycle.
There is also an AI Risk Repository, which serves as a comprehensive database and taxonomy of risks from artificial intelligence. The AI Risk Repository is a collaborative resource for identifying, categorizing, and managing AI-related risks. It includes a living database of nearly 800 risks extracted from various taxonomies and provides a structured approach to defining, auditing, and managing AI risks. This repository helps organizations understand and address potential AI-related threats, including those from external, malicious actors.
Finally, there are also sector-specific resources. For example, the New York Department of Financial Services (NYDFS) released a letter on October 16, 2024, providing guidance on cybersecurity risks associated with artificial intelligence. The letter addresses risks such as AI-enabled social engineering, AI-enhanced cybersecurity attacks, exposure of nonpublic information, and vulnerabilities from third-party dependencies. It also outlines strategies for financial services firms to mitigate these risks.
In the context of compliance and regulatory requirements, what are the key considerations for organizations when deploying AI systems? How can organizations ensure that their AI deployments align with relevant compliance standards and guidelines?
When deploying AI systems, organizations must consider several key factors to ensure compliance with regulatory requirements. While 58% of technology leaders expect AI to be the most important area of technology in 2025, 91% agree that there will be an AI reckoning focusing on what the technology can and should do. They need to ensure that AI operates within ethical and legal standards, particularly concerning privacy, data protection, and non-discrimination. Organizations must also prevent potential harmful outcomes, such as biased decision-making, misuse of personal data, or unintended consequences. Establishing accountability for AI decisions and actions is essential, especially in highly regulated areas like healthcare, finance, and law enforcement. Regular audits and evaluations of AI systems are necessary to mitigate risks and ensure reliability and integrity. Similarly, identifying and rectifying biases in AI systems is important to prevent discriminatory outcomes. And maintaining comprehensive documentation of AI risk assessment processes and mitigation strategies is vital to show compliance and accountability to regulators.
Multiple state-level laws affect companies deploying AI systems. For instance, the California Consumer Privacy Act, as amended, regulates automated decision-making and requires businesses to disclose if they share personal information with third parties, including AI platforms. The Illinois Biometric Information Privacy Act imposes strict requirements on the collection and use of biometric data, which can affect AI systems using such data. Colorado has enacted the Colorado AI Act, which will require disclosure to residents interacting with AI systems and impose additional requirements on high-risk AI systems. Virginia’s Consumer Data Protection Act gives residents the right to opt out of automated processing for profiling purposes. Connecticut has a similar law that calls for opt-in consent for secondary uses of personal information, which may include AI training sets.
Several key laws and regulations affect organizations deploying and using AI systems in the EU. The General Data Protection Regulation (GDPR) requires organizations to obtain consent before using personal data for automated decision-making, affecting AI systems that process personal data. The EU AI Act introduces a risk-based approach to AI regulation, categorizing AI applications into prohibited, high-risk, and low-risk categories, with specific obligations for each to ensure fundamental rights and safety. Finally, the updated EU Product Liability Directive (PLD) extends liability to digital products, including AI, holding producers accountable for damage caused by defective AI systems, regardless of fault.
Organizations can ensure their AI deployments align with relevant compliance standards and guidelines by implementing several strategies. First, they should incorporate AI risk assessments into their risk management methodology, inviting stakeholders from compliance, IT, legal, and HR to weigh in on AI risks such as privacy, security, fairness, and transparency. They should also maintain comprehensive documentation of AI risk assessment processes and mitigation strategies to demonstrate compliance and accountability to regulators. Organizations can adopt standards like ISO 42001, which provides a framework for aligning AI systems with global ethical, security, and transparency benchmarks. Regular audits and evaluations of AI systems are necessary to ensure reliability and integrity, and organizations should establish clear AI policies that define acceptable use and promote transparency and accountability.
Companies can further operationalize their AI compliance efforts by leaning into automated evidence collection and testing. Automation enhances efficiency within compliance tasks, such as tracking deadlines and managing compliance-related data, which are crucial for maintaining adherence to standards. By integrating advanced analytics and automation technologies, organizations can reduce reliance on manual processes, improving the continuous management of risk and compliance. This approach allows for more proactive risk identification and ensures that compliance activities are conducted efficiently and effectively.
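As a rough illustration of what automated evidence collection can look like in practice, the sketch below checks whether IAM users in an AWS account have MFA enabled and records a timestamped evidence record. It is a minimal example, assuming an AWS environment, the boto3 SDK, and credentials permitted to call iam:ListUsers and iam:ListMFADevices; the function name, control name, and output file are illustrative and not part of any specific GRC product.

```python
"""Minimal sketch of automated evidence collection (illustrative only)."""
import json
from datetime import datetime, timezone

import boto3  # assumes AWS credentials are configured in the environment


def collect_mfa_evidence() -> dict:
    """Record which IAM users have MFA enabled, as evidence for an access-control requirement."""
    iam = boto3.client("iam")
    findings = []
    # Paginate through all IAM users in the account.
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            devices = iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]
            findings.append({"user": user["UserName"], "mfa_enabled": len(devices) > 0})
    return {
        "control": "Require MFA for interactive user accounts",  # hypothetical control name
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "findings": findings,
        "compliant": all(f["mfa_enabled"] for f in findings),
    }


if __name__ == "__main__":
    evidence = collect_mfa_evidence()
    # Persist timestamped evidence so it can be attached to a control during an audit.
    with open("mfa_evidence.json", "w") as fh:
        json.dump(evidence, fh, indent=2)
    print("Compliant:", evidence["compliant"])
```

Run on a schedule, a script like this replaces a quarterly screenshot-gathering exercise with a repeatable, timestamped record that both the compliance and security teams can trust.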
Are there any specific compliance challenges that organizations commonly face when dealing with malicious AI threats? How can these challenges be overcome, and what steps can organizations take to enhance their compliance efforts in this area?
Organizations face several compliance challenges when dealing with malicious AI threats, including ensuring data privacy, managing algorithmic bias, and maintaining accountability. Large volumes of data are necessary for AI systems, leading to significant data privacy risks. Adversaries can trick AI models into executing malware or bypassing security controls, leading to potential breaches. AI models used to analyze user behavior may need access to personal information, potentially violating data privacy laws if shared with third parties. Finally, adversarial attacks can manipulate input data in subtle ways that cause AI models to make incorrect predictions or decisions, effectively bypassing existing safeguards.
These scenarios become compliance challenges because organizations must ensure that their AI systems adhere to existing data protection regulations and maintain robust security measures. These include the GDPR in the EU, which mandates strict data protection and privacy measures. In the U.S., while there is no comprehensive federal privacy law comparable to the GDPR, various state laws unrelated to AI require organizations to protect personal information and notify individuals of data breaches. These laws impose “reasonable” data security measures and allow enforcement by state attorneys general.
To address these compliance challenges, organizations can implement several key strategies, with guidance from the GRC maturity model. Developing structured compliance metrics is essential for effective monitoring and analysis of compliance performance. Comprehensive compliance training programs help to educate employees on requirements and can create a culture of ownership. Integrating compliance tools within specific projects or enterprise-wide enhances efficiency and consistency, while automated compliance assessments at key project milestones ensure adherence and identify areas for improvement. The GRC maturity model aids in these efforts by providing a framework for continuous improvement and strategic alignment of compliance processes, ensuring organizations can effectively manage their compliance challenges and adhere to legal obligations.
Collaboration between compliance teams and cybersecurity professionals is crucial in ensuring effective security measures against malicious AI. How can organizations foster collaboration between these two teams to address AI-related threats while maintaining compliance with relevant regulations?
Organizations can foster collaboration between compliance teams and cybersecurity professionals by creating cross-departmental strategies and coherent tactics to achieve enterprise risk management goals. This involves developing teams that work together to protect data and comply with regulations. Regular communication, joint training sessions, and shared responsibilities can enhance understanding and cooperation. Additionally, establishing a framework for using AI tools and involving both teams in designing and approving AI policies can ensure alignment with compliance and security objectives.
Automation can further enhance collaboration between internal audit and cybersecurity teams by reducing inefficient processes. Automated tools can help with continuous monitoring and reporting, reducing manual effort and allowing both teams to focus on strategic tasks. Automation can also help with data sharing and analysis, providing real-time insights into security and compliance status. This allows both teams to work from a unified platform, ensuring consistency and accuracy in audits and security measures.
Finally, using the GRC maturity model can help improve collaboration between internal audit and cybersecurity teams by providing a structured framework to assess and enhance an organization’s existing governance, risk management, and compliance processes. The model encourages breaking down silos, fostering communication, and aligning strategies across departments, ultimately leading to more efficient and effective collaboration.
What are your “5 Things We Must Do To Protect From AI-Powered Cyberattacks” and why?
1. Align on the Risks: Companies benefit from agreeing on and documenting their business risks associated with AI. A shared understanding improves risk management, enhances collaboration across different teams, and helps in mitigating potential risks more effectively. To agree on the risks associated with AI, executives should use a cross-functional approach that involves key stakeholders from business operations, legal, IT, and compliance functions. Companies should also establish processes for ongoing monitoring, assessment, and enhancement of AI risk management strategies to adapt to changing business processes and emerging regulations.
2. Leverage the GRC Maturity Model: Organizations should use the Model to align their risk management with business objectives. This improves IT governance and ensures robust cybersecurity measures to better protect against AI-driven threats based on agreed-upon business risks. Additionally, a mature GRC framework streamlines processes, reduces inefficiencies, and ensures compliance with regulations. This can minimize delays and obstacles when bringing products to market.
3. Automate Control Testing: Organizations should implement automated control testing to continuously assess and validate the effectiveness of deployed security controls (see the sketch after this list). This helps in identifying weaknesses and ensuring that defenses are up to date against AI-powered cyberattacks. Automation also helps with scalability and ensures controls are consistently applied and documented, which supports compliance with regulatory requirements. This evidence can further be repurposed in investigations with market regulators or insurance providers to show that a company was responsibly managing its agreed-upon business risks.
4. Implement AI-Powered Threat Detection: Companies should use automation and AI technologies for advanced threat detection and response capabilities. Although this is an emerging market, there are reliable solutions for SOAR and for the detection of potential financial fraud. These tools can automate portions of the threat investigation process, recommend actions to speed response, and help identify high-risk transactions with minimal user friction. This proactive approach helps in identifying and mitigating cyber threats.
5. Prioritize Regular Security Training and Awareness: Finally, companies should conduct regular, role-appropriate training sessions to educate employees about AI-related threats and modern practices for cybersecurity. Training employees to identify AI-enhanced phishing attempts and social engineering tactics is important, as these may be more convincing because of AI. Companies should also provide guidelines on the secure use of AI tools within the organization to prevent accidental data leaks or security breaches. This helps in fostering a culture of security awareness and vigilance.
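To make the automated control testing point in item 3 more concrete, here is a minimal sketch of a recurring control test: it verifies that a server can negotiate TLS 1.2 or newer, a common encryption-in-transit control. The hostname and control description are placeholders, and a real program would run on a schedule and feed its results into whatever evidence store the organization already uses.

```python
"""Minimal sketch of an automated control test (illustrative only)."""
import socket
import ssl
from datetime import datetime, timezone

HOST = "example.com"  # placeholder endpoint; substitute a system in scope
PORT = 443


def test_minimum_tls_version(host: str = HOST, port: int = PORT) -> dict:
    """Return a pass/fail result for a 'TLS 1.2 or newer' control."""
    context = ssl.create_default_context()
    # Refuse to negotiate anything older than TLS 1.2 on the client side;
    # the handshake fails if the server cannot meet that minimum.
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    try:
        with socket.create_connection((host, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                negotiated = tls.version()
        passed, detail = True, f"negotiated {negotiated}"
    except (ssl.SSLError, OSError) as exc:
        passed, detail = False, str(exc)
    return {
        "control": "Encrypt data in transit with TLS 1.2 or newer",  # hypothetical control name
        "target": f"{host}:{port}",
        "tested_at": datetime.now(timezone.utc).isoformat(),
        "passed": passed,
        "detail": detail,
    }


if __name__ == "__main__":
    print(test_minimum_tls_version())
```

The design point is less about this particular check and more about the pattern: each control test produces a small, structured, timestamped result that can be reviewed by auditors, tracked over time, and escalated automatically when it fails.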
You are a person of enormous influence. If you could inspire a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. :-)
For several years, I’ve been advocating for cybersecurity professionals to speak at middle schools across the United States. While we often focus on high-school seniors, giving younger students exposure to cybersecurity will probably diversify the overall talent pool of students who will consider a cybersecurity career. This also will help professionals improve their presentation skills, as middle schools are one of the hardest but most rewarding audiences.
How can our readers further follow your work online?
I’m active on LinkedIn https://www.linkedin.com/in/kaynemcgladrey/, and the team at Hyperproof helps me to produce and distribute three podcasts at https://www.youtube.com/@hyperproof/videos.
This was very inspiring and informative. Thank you so much for the time you spent with this interview!