Emily Burrows Of Bass, Berry & Sims On Cybersecurity Compliance in the Age of AI Threats

An Interview With David Leichner

David Leichner, CMO at Cybellum
Authority Magazine
10 min read · Oct 20, 2024


Governance Program: Be proactive. As mentioned above, organizations need to know what AI is being used internally in order to protect against threats associated with such use. A robust governance program can help identify potential risk areas and ensure that appropriate members of the organization are looped in. This helps the organization focus its efforts on AI that may result in potential vulnerabilities.

Compliance with regulatory standards and industry-specific guidelines for product security is an indispensable part of cybersecurity. In an age where malicious AI poses a significant threat, how do organizations ensure their product security strategies are not just effective, but also fully compliant? As a part of this series, I had the pleasure of interviewing Emily Burrows.

Emily Burrows is a member of Bass, Berry & Sims, PLC, who advises companies on a wide range of business and technology-related transactions, including the intellectual property components of commercial transactions involving the development, transfer and licensing of technology assets. Emily leads the firm’s Artificial Intelligence team, with particular emphasis on negotiating agreements for the license and implementation of new AI tools and services, advising on the unique contractual and regulatory considerations at hand and crafting appropriate terms to allocate risk.

Thank you so much for joining us in this interview series! Before we dig in, our readers would like to get to know you. Can you tell us a bit about how you grew up?

I grew up in Birmingham, Alabama, and my parents are both scientists. They emphasized learning and reading and being curious about the world around me. I remember my dad bringing home a computer when I was in elementary school and it being the most amazing thing. I grew up on Oregon Trail and Mavis Beacon typing software. In high school, I used up the family’s free AOL internet CDs until we finally got cable internet, which was an exciting day! Thankfully, social media as we know it was not around until I was in college, but my alma mater, Vanderbilt, was one of the early adopters.

Is there a particular story that inspired you to pursue a career in cybersecurity? We’d love to hear it.

Not a particular story, but it did happen organically. I originally went to law school to become an entertainment attorney. I worked with wonderful solo practitioners while I was in law school but decided that the practice was not for me. But I loved intellectual property law and knew I wanted to be a transactional attorney. After moving back to Nashville from Austin, Texas, to join my current firm, Bass, Berry & Sims, as an associate in the IP & Technology practice group, I worked on several data breach responses. A data breach is obviously an incredibly stressful situation for our clients, so it was humbling to work with people to mitigate the damage from the breach and assess potential liability. Then, in 2023, I was approached by firm leadership to lead the firm’s AI subgroup, which is a cross-disciplinary group with members from nearly every practice group in our firm, including IP, privacy, healthcare, corporate, litigation, labor and employment, and tax. Because I had already been working with AI clients for years at that point, I was excited for the new challenge.

Can you share the most interesting story that happened to you since you began this fascinating career?

Working in law related to AI has given me a chance to hear so many interesting stories as we listen to clients, colleagues and leaders in the field talk about their experiences, concerns and successes. We are all figuring out how to best use AI safely, creatively and compliantly, so these conversations are invaluable.

You are a successful leader. Which three character traits do you think were most instrumental to your success? Can you please share a story or example for each?

Open-Minded: Throughout my life, but particularly as I have stepped into the AI space, I have been open-minded and willing to learn. AI is such a fast-paced, emerging area that I had to recognize early on that I did not know it all. In a field moving this quickly, it is crucial to keep learning from others, particularly those with expertise and experience in IT and AI development.

Flexible: Because I started law school in 2008, right when the recession began, finding a job after graduation was difficult, but I eventually ended up where I hoped to be. It just took a little longer than expected! Assuming the AI leadership role has changed my practice and priorities. Of course, the typical law firm emphasis on quality work and billing does not change, but rather than focusing on just the day-to-day work, I have shifted to spending numerous hours learning about and staying up-to-date on AI and how to help our clients as they adopt and develop AI.

Driven: Becoming a law firm partner is not easy, but I spent a lot of hours, lost a lot of sleep, and cancelled a lot of plans (apologies to my friends and family) to get where I am. I do not regret it and am happy where I ended up, but I definitely should have slept more!

Are you working on any exciting new projects now? How do you think that will help people?

I have been focusing on AI-specific laws that take effect over the coming months and years. Each law applies to particular types of AI, industries and use cases. We are helping our clients plan now so that they can stay ahead of the game in leveraging the technology while avoiding some of its well-known risks.

Ok super. Thank you for all that. Let’s now shift to the main focus of our interview. How does the emergence of malicious AI threats impact compliance requirements for organizations? Are there specific regulations or standards that address the unique challenges posed by AI-related security threats?

The emergence of malicious AI threats puts additional pressure on compliance requirements because organizations are expected to be aware of and take reasonable steps designed to protect against these threats. As the threats increase, so do the obligations.

Data privacy laws in particular address AI-related security threats, even if AI is not expressly written into the law. These laws can apply not only to personal data collected from customers, but also to personal data regarding employees. Certain privacy laws, such as the GDPR, include security requirements that should be implemented to protect personal data.

Can you provide an example of a compliance framework or approach that organizations can adopt to effectively address security concerns arising from malicious AI? How does this framework help organizations mitigate risks and stay compliant?

The National Institute of Standards and Technology (NIST) AI Risk Management Framework (the AI RMF) is a useful resource to help organizations that use and deploy AI manage and mitigate risk. The AI RMF is voluntary, but it serves as a helpful guide for organizations by first framing the risks associated with AI and then describing functions to help organizations address those risks in practice. NIST categorizes those functions as Govern, Map, Measure and Manage. NIST has also provided an AI RMF Playbook, which includes actionable policies and procedures for implementing the framework described in the AI RMF.
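To make that structure concrete, here is a minimal sketch of a risk register organized around the four AI RMF functions. The function names come from the AI RMF itself, but the record layout, fields and example entries are illustrative assumptions, not NIST’s schema.

```python
# Illustrative only: a tiny risk-register skeleton organized around the
# four NIST AI RMF functions (Govern, Map, Measure, Manage). The field
# names and example entries are assumptions, not an official format.
from dataclasses import dataclass


@dataclass
class RiskItem:
    system: str        # the AI system or use case
    description: str   # what could go wrong
    function: str      # which AI RMF function addresses it
    owner: str         # who is accountable
    status: str = "open"


register = [
    RiskItem("contract-review copilot",
             "confidential client data sent to a third-party model",
             function="Map", owner="IT/InfoSec"),
    RiskItem("contract-review copilot",
             "no executive accountability for AI deployments",
             function="Govern", owner="General Counsel"),
]

# Group open items by RMF function so each workstream sees its own queue.
by_function: dict[str, list[RiskItem]] = {}
for item in register:
    by_function.setdefault(item.function, []).append(item)

for fn in ("Govern", "Map", "Measure", "Manage"):
    print(fn, "->", [i.description for i in by_function.get(fn, [])])
```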

NIST also recently released a generative AI-specific profile of the framework (NIST AI 600-1).

For people in the InfoSec field, the AI RMF will feel familiar: it resembles NIST’s Cybersecurity Framework in structure and implementation, which tends to help with adoption.

In the context of compliance and regulatory requirements, what are the key considerations for organizations when deploying AI systems? How can organizations ensure that their AI deployments align with relevant compliance standards and guidelines?

I recommend implementing AI governance processes. The complexity can vary depending on whether the organization is using AI only for lower-risk endeavors or is developing and deploying higher-risk AI. Governance helps organizations enforce their priorities and operationalize standard requirements relating to the development and use of AI solutions, including policies on compliance, industry best practices, privacy, security, intellectual property protection, and data ownership.

Many organizations form AI-focused committees that include a cross-functional team, including legal, risk/compliance, IT/InfoSec, communications and marketing, operations and finance. Having these different voices helps to identify blind spots and use cases into which compliance and legal teams may not have visibility. However, there should also be senior leadership oversight and accountability to ensure that someone is looking at the big picture strategy and is ultimately responsible for the organization’s AI deployment.

Organizations should map what AI is currently being used internally, for what purpose it is being used, and what data is being processed through the AI tool. They should also build impact assessments into the procurement process to ensure that the risks of particular use cases are thought through and mitigation strategies are documented. The AI committee can use these impact assessments to establish a risk rating based on potential concerns with security, data usage, liability, compliance, and other factors and, ultimately, decide whether the use is approved. For higher-risk AI, the committee should revisit the use periodically and monitor the model for new risks, bias, validity, effectiveness, and safety.
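As a rough illustration of how an impact assessment can feed a risk rating, here is a minimal Python sketch. The factors mirror those mentioned above, but the scoring scale, thresholds and intake fields are assumptions for illustration only.

```python
# Illustrative only: a simplified AI impact-assessment intake record and
# risk rating of the kind an AI committee might apply during procurement.
# The factors, 1-5 scale, and thresholds below are assumptions.
FACTORS = ("security", "data_usage", "liability", "compliance")


def risk_rating(scores: dict[str, int]) -> str:
    """Map per-factor scores (1 = low concern, 5 = high) to a rating."""
    worst = max(scores[f] for f in FACTORS)
    if worst >= 4:
        return "high"    # committee approval plus periodic re-review
    if worst >= 3:
        return "medium"  # approved with documented mitigations
    return "low"         # approved for general use


assessment = {
    "tool": "vendor chatbot",
    "purpose": "customer support triage",
    "data_processed": ["names", "order history"],
    "scores": {"security": 2, "data_usage": 3,
               "liability": 2, "compliance": 2},
}

print(assessment["tool"], "->", risk_rating(assessment["scores"]))  # medium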

I also suggest developing an AI usage policy or updating current policies to address AI deployment. Employees are using AI, so it’s important to communicate what usage is acceptable and what guardrails should be in place. Policies can be written enterprise-wide or by business unit; marketing and IT have very different use cases and areas of risk, so a one-size-fits-all policy may not be effective for a particular organization.

Are there any specific compliance challenges that organizations commonly face when dealing with malicious AI threats? How can these challenges be overcome, and what steps can organizations take to enhance their compliance efforts in this area?

Data privacy compliance is of course a big concern when dealing with malicious AI threats. If an organization has not complied with its obligations under privacy laws applicable to its jurisdiction or business, a security incident caused by a malicious AI attack could lead to governmental fines and penalties, claims from affected individuals, reputational losses and other liabilities.

Organizations should build out their data privacy programs, which work hand-in-hand with cybersecurity and compliance efforts. A compliant privacy program may not be able to prevent every security incident, but it puts an organization in a much better place to, first, discover and remediate the incident and, second, defend itself against resulting claims. Regulators also tend to look more kindly on organizations that had appropriate processes in place than on organizations that willfully or unwittingly failed to comply with the privacy laws applicable to them.

Collaboration between compliance teams and cybersecurity professionals is crucial in ensuring effective security measures against malicious AI. How can organizations foster collaboration between these two teams to address AI-related threats while maintaining compliance with relevant regulations?

This goes back to the need for a cross-functional AI committee. The compliance and cybersecurity teams should be involved in any decision regarding the use of AI and should be part of continuous monitoring of AI, particularly in high-risk cases.

What are your “5 Things We Must Do To Protect From AI-Powered Cyberattacks” and why?

1. Governance Program: Be proactive. As mentioned above, organizations need to know what AI is being used internally in order to protect against threats associated with such use. A robust governance program can help identify potential risk areas and ensure that appropriate members of the organization are looped in. This helps the organization focus its efforts on AI that may result in potential vulnerabilities.

2. Vendor Management: Most organizations have a formal process for vendor management. AI should be added to these processes, for example by incorporating AI questions into the existing vendor security and diligence questionnaire. Adding to existing forms, rather than creating a separate AI form, tends to streamline the process and reduce pushback from vendors. Many cyberattacks come through vendors, so it is critical to conduct proper diligence on them.

3. Use AI to Stop AI: AI can be used to help stop AI cyberattacks through real-time threat detection, fraud detection, incident prediction, and 24/7 monitoring (a minimal detection sketch follows this list). Humans should stay in the loop, but AI can handle many of these processes more quickly and effectively than a person.

4. Follow Privacy Laws: As mentioned above, a strong data privacy program is a major first line of defense against AI-powered cyberattacks. For example, privacy laws emphasize data minimization, which can reduce the impact of an AI cyberattack: if the organization has less personal data on hand, there is less liability risk if an incident does occur.

5. Training: Training is important so people know what they can and cannot do. This also helps people use the tools more effectively. Depending on the product, periodic training may be appropriate. As cybersecurity experts know, some people will still click on a phishing email no matter what, but appropriate training will help reduce some of the risk.
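To illustrate point 3 above, here is a minimal sketch of “AI to stop AI” using anomaly detection over synthetic login telemetry with scikit-learn’s IsolationForest, one common building block behind real-time threat detection. The features, data and alerting logic are assumptions; a production system would be far richer and would route flagged events to a human analyst.

```python
# Illustrative only: flag anomalous login events with an IsolationForest.
# Features and thresholds are assumptions; keep a human in the loop.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Features per event: [login hour, failed attempts, MB downloaded]
normal = np.column_stack([
    rng.normal(13, 2, 500),   # business-hours logins
    rng.poisson(1, 500),      # occasional failed attempts
    rng.normal(50, 10, 500),  # typical download volume
])
suspicious = np.array([[3, 12, 900]])  # 3 a.m., many failures, bulk export

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns -1 for anomalies, 1 for inliers.
events = np.vstack([normal[:3], suspicious])
for event, label in zip(events, model.predict(events)):
    flag = "ALERT: route to analyst" if label == -1 else "ok"
    print(event, flag)
```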

You are a person of enormous influence. If you could inspire a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. :-)

Everyone I have spoken to or heard from since I started focusing on AI seems to have people’s best interests at heart, whether it is protecting employees from job displacement, ensuring the privacy of identifiable information, avoiding bias, or shielding against bad actors, national security issues, and other risks associated with AI. On the flip side of these threat-based discussions, I love hearing people focus on the positives of AI, whether it’s the potential for early diagnosis and prevention of disease or simply taking away mundane tasks that no one wants to deal with. This has been heartening given some of the doom and gloom associated with AI.

I would just encourage people not to lose sight of that as AI becomes more commonplace and more lucrative. We have a responsibility as experts in the field to do our best to shepherd the use of AI in a safe, productive and beneficial way.

How can our readers further follow your work online?

You can follow me on LinkedIn. We also post AI-related updates, news and insights on the Bass, Berry & Sims Artificial Intelligence page.

This was very inspiring and informative. Thank you so much for the time you spent with this interview!
