How to Supervise Your AI with the Red Team

Anand Tamboli®
Mar 19 · 9 min read

How do you stop an attacker? By thinking and planning like one!

We have supervised machine learning as a concept, but supervising an AI is not as deeply thought out. Let us focus on ongoing controls over an AI solution, see what supervising an AI involves, and elaborate mainly on the concept of a red team in the context of AI solutions.

What is a red team?

In military parlance, the “Red Team” is the team that uses its skills to imitate the attacks and techniques that potential enemies might use. The other group, which acts as the defense, is called the “Blue Team.”

Cybersecurity has adopted these terms, where they signify the same roles.

A red team is an independent group of people that challenges an organization to improve its defenses and effectiveness by acting as an adversary or attacker. In layman’s terms, it is a form of ethical hacking, a way to test how well an organization would do in the face of a real attack.

If effectively implemented, red teams can expose vulnerabilities and risks in your technology infrastructure, your people, and your physical assets.

You will often hear the saying that “the best defense is a good offense.” Having a red team is the right step in mounting that offense to build a good defense. Whether it is a complex AI solution or merely a basic technology solution, red teaming can give you a competitive edge.

A comprehensive red team exercise may involve penetration testing (also known as pen-testing), social engineering, and physical intrusion. All of these could be carried out, or whatever combination of them the red team sees fit to expose vulnerabilities.

If there exists a red team, then there must also exist a blue team. This assumption stems from the premise that system development is done in-house only. However, this can change with AI systems, and your actual blue team may be your technology vendor.

Building your red team

Ideally, a red team needs at least two people to be effective, though many range from two to five. If you are a large company, you might need 15+ members in the team to work on several fronts.

Depending upon the type of AI you are deploying, your red team’s composition and skill sets might change. Getting this structure right is necessary to maximize the team’s effectiveness.

Typically, you would need physical security experts, such as the ones who understand and can deal with physical locks, door codes, and other aspects. You would also need a social engineer to phish out information through emails, phone calls, social media, and other channels. Most importantly, you would need a technology expert, preferably a full-stack one, to exploit the hardware and software aspects of your system. These skill set requirements are the minimum. If your application and operations are more complicated, it will make sense to hire specialists for individual elements and have a mixed team of experts.

Top 5 red team skills

The most important skill any red team member can have is the ability to think as negatively as possible and remain as creative as possible when executing the job.

· Creative negative thinking: The core goal is to continually find new tools and techniques to invade systems and, eventually, protect their security. Moreover, exposing a system’s flaws needs a level of negative thinking to counter the inbuilt optimism in an individual.

· Profound systemwide knowledge: Red team members must have a deep understanding of computer systems, hardware, and software alike. They should also be aware of typical design patterns in use.

Additionally, this knowledge shouldn’t be limited to computer systems alone; it must span the many heterogeneous systems the AI touches.

· Penetration testing (aka pen-testing): This is a common and fundamental requirement in the cybersecurity world. For red teams, it is an essential and more or less standard part of the procedure.

· Software and hardware development: Having this skill means that, as a red team member, you can envisage and develop the tools required to execute your tasks.

Moreover, knowing how AI systems are designed in the first place means you are likely to know their failure points. One of the critical risks AI systems always pose is “logical errors.” These errors do not break the system but make it behave in a way that is not intended or may cause more damage. If a red team member has experience in software and hardware development, they have likely seen all the typical logical errors. Therefore, they can exploit them to accomplish their job (a small illustration of such an error follows this list).

· Social engineering: It goes without saying that manipulating people into doing something that leads the red team to its goal is essential. The people aspect is also one of the failure vectors that an actual attacker would use. Human errors and mistakes are among the most frequent causes of cyber problems.
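To illustrate what a logical error looks like, here is a hypothetical snippet (not taken from any real system): nothing ever crashes, yet the riskiest cases quietly slip through.

```python
# A hypothetical illustration of a "logical error": the code runs without
# crashing, but the behavior is not what was intended.

def should_flag_transaction(fraud_score: float, threshold: float = 0.8) -> bool:
    """Intended behavior: flag a transaction when its fraud score exceeds the threshold."""
    # Bug: the comparison is inverted, so the riskiest transactions
    # (high fraud_score) quietly pass while harmless ones get flagged.
    return fraud_score < threshold


print(should_flag_transaction(0.95))  # False -- a risky transaction slips through
print(should_flag_transaction(0.10))  # True  -- a harmless transaction is flagged
```

No exception is ever raised, so routine monitoring sees nothing wrong; it takes someone deliberately probing the behavior, like a red team member, to notice it.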

In-house or outsourced

The next key question is — should you hire your team members and employ in-house or outsource the red team activity?

We all know that security is one of those aspects where funding is mostly dry. It is tough because the ROI of security initiatives cannot be proven easily: unless something goes wrong and you prevent it, the value is nearly impossible to visualize or imagine. This limitation makes it difficult to convince stakeholders that investing in security is money spent wisely.

A quick answer to the in-house or outsourced question is that it depends on company size and budget.

If you have deployed AI systems for long-term objectives, then the in-house red team would be the right choice as they would be engaged continuously. However, that comes with an additional ongoing budget overhead.

On the contrary, if you are unsure about your overall outlook, outsourcing is a better way to start. This way, you can test your business case for in-house hiring in the long run.

From a privacy and stricter-controls perspective, an in-house red team is highly justifiable. Red team and blue team activities are much like a cat-and-mouse game. When done correctly, each round can improve and add to both teams’ skill sets, enhancing the organization’s security.

You can use the outsourcing option if you are planning to run a more extensive simulation. It also makes sense if you need specialized help or are looking for a specific skill set to execute a particular strategy.

Objectives of a red team

Primarily, the red team exists to break the AI system and its attached processes by assuming the role of a malicious actor. The red team should go beyond just the technology aspect and work on the entire chain that involves the AI system. Doing this makes their efforts more effective, as it ensures that upstream and downstream processes and systems are tested.

A red team should consider the full ecosystem and figure out how a determined threat actor might break it. Instead of just working towards breaking a web app or a particular technology application, it should combine several attack vectors. These attack vectors could lie outside the technology domain, such as social engineering or physical access, if needed. This is necessary because, although your ultimate goal is to reduce the AI system’s risks, those risks can come from many places and in many forms.

To maximize a red team’s value, you should consider a scenario and goal-based exercise.

The red team should get into motion as soon as your primary machine training is complete, which applies if you develop the model in-house. If you are sourcing trained models from outside vendors, then the red team must be activated as soon as the sourcing is complete.

A red team's primary goal is to produce a plausible scenario in which the current AI system's behavior would be unacceptable and, if possible, catastrophically unacceptable. If the red team succeeds, you can give their scenarios to the machine training team to retrain the model. However, if the red team does not succeed, you can be reasonably confident that the trained model will behave reliably in real-world scenarios too.

Carefully staging potentially problematic scenarios and exposing the whole AI system to those situations should be one of the critical objectives. Also, this activity need not be entirely digital in format. The red team can generate these scenarios by any means available and in any form that seems plausible in real-life situations.

One way the red team can attempt to fail the AI system is by giving it garbage inputs in the primary or feedback loop and seeing how it responds. If the system is smart, it will detect the garbage and act accordingly. However, if the system magnifies or operates on the garbage input, you will know that you have work to do. These garbage inputs can later take the form of training inputs for machine retraining.
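Here is a minimal sketch of such a garbage-input probe. It assumes a hypothetical predict() function that takes a text input and returns a label with a confidence score; the names and thresholds are illustrative, not from any specific system.

```python
import random
import string

def random_garbage(length: int = 64) -> str:
    """Produce a nonsense string of random letters, digits, and punctuation."""
    pool = string.ascii_letters + string.digits + string.punctuation + " "
    return "".join(random.choice(pool) for _ in range(length))

def probe_with_garbage(predict, trials: int = 100, confidence_limit: float = 0.5):
    """Feed garbage to the model and collect cases where it answers with high confidence."""
    suspicious = []
    for _ in range(trials):
        junk = random_garbage()
        label, confidence = predict(junk)
        # A well-behaved system should refuse or answer with low confidence;
        # a confident answer on pure noise means there is work to do.
        if confidence > confidence_limit:
            suspicious.append((junk, label, confidence))
    return suspicious
```

Whatever lands in the suspicious list becomes a candidate for the retraining set mentioned above.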

Red teams can also work on creating synthetic inputs and see how the system responds. They can then use the output to examine the AI system’s internal mechanics. Based on further understanding, the synthetic data could be made more authentic to test the system’s limits, responses, and overall behavior. Once you identify failure situations, they are easier to fix.
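One way to make synthetic inputs progressively more authentic is to start from a real record and nudge it until the output changes. The sketch below assumes tabular records (dictionaries of feature values) and the same kind of hypothetical predict() interface as above.

```python
import copy

def probe_feature(predict, real_record: dict, feature: str, step: float, max_steps: int = 50):
    """Nudge one feature of a real record until the model's output flips."""
    baseline = predict(real_record)
    synthetic = copy.deepcopy(real_record)
    for i in range(1, max_steps + 1):
        synthetic[feature] = real_record[feature] + i * step
        if predict(synthetic) != baseline:
            # The output changed: this synthetic record sits near a decision
            # boundary and is worth a closer look (and possibly retraining).
            return synthetic
    return None  # no change in output within the tested range
```

Each record this returns shows roughly how far a single feature must move before the system's behavior changes, which is a crude but useful picture of its limits.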

Red teams may not necessarily try to break the systems. Sometimes, they may merely cause a drift in the system's continuous learning by feeding the wrong inputs or modifying parameters, thereby causing it to fail much later. Refer to the concept drift phenomenon discussed in the earlier section. While concept drift is mostly natural and normal, it can also be deliberate and manufactured.
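A simple way to keep watch for such drift, whether natural or manufactured, is to compare the recent mix of the system's outputs against a reference window. The sketch below assumes predictions are logged as class labels; the window size and tolerance are arbitrary assumptions.

```python
from collections import Counter, deque

class DriftWatch:
    """Compare the label mix in a recent window against a fixed reference window."""

    def __init__(self, reference_labels, window: int = 500, tolerance: float = 0.15):
        self.reference = Counter(reference_labels)
        self.ref_total = sum(self.reference.values())
        self.recent = deque(maxlen=window)
        self.tolerance = tolerance

    def observe(self, label) -> bool:
        """Record a new prediction; return True if the label mix has shifted noticeably."""
        self.recent.append(label)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough recent data yet
        recent_counts = Counter(self.recent)
        for name in set(self.reference) | set(recent_counts):
            ref_share = self.reference[name] / self.ref_total
            new_share = recent_counts[name] / len(self.recent)
            if abs(ref_share - new_share) > self.tolerance:
                return True  # the output mix has drifted beyond tolerance
        return False
```

If the red team can push the system past a check like this without the alert firing, that in itself is a finding worth reporting.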

A point where your AI system takes input from another piece of software or a human could be a weak link. A point where an AI system's output forms an input to another API or ERP system could also be a weak link. By nature, these junctions are highly vulnerable spots in the whole value chain.

Every link that joins two heterogeneous systems is a weak link!

Red teams should target and identify all such weak links in the system. These weak links may exist between two software systems or at the boundary of software-hardware or software-human interaction.
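To make the idea concrete, here is what guarding one such link might look like: the point where an AI system's decision is handed to a downstream system. The field names, allowed actions, and thresholds are illustrative assumptions; a red team would look for links where checks like these are missing or incomplete.

```python
# Guarding the link where an AI system's output becomes another system's input.

ALLOWED_ACTIONS = {"approve", "reject", "review"}

def validate_ai_output(payload: dict) -> dict:
    """Check the AI's decision before passing it to the downstream system."""
    action = payload.get("action")
    confidence = payload.get("confidence")

    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"unexpected action from model: {action!r}")
    if not isinstance(confidence, (int, float)) or not 0.0 <= confidence <= 1.0:
        raise ValueError(f"confidence out of range: {confidence!r}")

    # Low-confidence decisions are routed to a human instead of being executed.
    if confidence < 0.6:
        return {"action": "review", "confidence": confidence}
    return payload
```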

A red team is not for testing defenses

The core objective of the red team is not to test your defense capabilities. It is to do anything and everything to break the functional AI system, in as many ways as possible and by thinking outside the box. Ultimately, this strengthens the whole organization in the process.

Having this broader remit enables red teams to follow intuitive methodologies and develop a reliable, ongoing learning system for the organization. It is a promising approach to many severe problems in the control of AI systems.

However, remember that red teaming is not equivalent to a testing team that generates test cases. Test cases usually follow well-defined failure conditions, whereas for the red team the objective is much broader, the methods are undefined, and the possibilities are often limitless.

In a nutshell, your red team should evaluate your AI system on three key parameters (a simple scoring sketch follows the list):

· The severity of the consequences of a failure vector

· The probability of that failure occurring

· The likelihood of early detection of failure.
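These three parameters echo classic FMEA-style risk scoring, where each finding gets a rating on each dimension and the product becomes its risk score. The sketch below uses a 1-10 scale and made-up findings purely for illustration.

```python
# FMEA-style risk scoring for red team findings. The 1-10 scales and the
# findings themselves are illustrative assumptions, not real results.
# severity:   how bad the consequences are (10 = catastrophic)
# occurrence: how likely the failure is (10 = almost certain)
# detection:  how hard it is to detect early (10 = almost impossible to catch)

findings = [
    {"name": "garbage input accepted in feedback loop", "severity": 8, "occurrence": 6, "detection": 7},
    {"name": "vendor API output passed on unvalidated", "severity": 9, "occurrence": 4, "detection": 8},
    {"name": "operator can be phished for model access", "severity": 7, "occurrence": 5, "detection": 6},
]

for finding in findings:
    # Higher risk score means fix sooner.
    finding["risk_score"] = finding["severity"] * finding["occurrence"] * finding["detection"]

for finding in sorted(findings, key=lambda f: f["risk_score"], reverse=True):
    print(f"{finding['risk_score']:4d}  {finding['name']}")
```

Tracking this score for each item after every red team round is what lets you drive it towards the near-zero target discussed below.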

The red team is functional. What next?

A functional red team is not just about finding the holes in your AI system. It is also about providing complete guidance and a playbook to improve those weak points, plug those holes, and continuously strengthen the system along the way.

Moreover, an effective red team operation doesn’t end after finding a weakness in the system. That is just the beginning. The next role of the red team is to provide remediation assistance and re-testing, and, more importantly, to keep doing this for as long as necessary.

There may be significant work involved in comprehending the findings, their impact, likelihood, criticality, and detectability. Furthermore, there is the work of carrying out the suggested remediations, retraining the machine with new data, and a whole lot more before your blue team says it is ready for the next round of testing.

The cycle of the red team finding weaknesses and the blue team fixing them has to be an ongoing process with regular checks and balances. Avoid the temptation to do it once for the sake of it. Instead, make sure that you do it regularly and consistently. Doing so will help you maintain a watch on each aspect's risk score and monitor how you are progressing with the already established risk mitigation plan. Your target for each risk item in the list should be to reduce its risk score to near zero.

Stress testing your AI system for vulnerabilities yourself is better than having someone else exploit them.

Note: This article is part 3 of a 12-article series on AI. The series was first published by EFY magazine last year and is now also available on my website at https://www.anandtamboli.com/insights.
