The Burn Notice, Part 1/5 — Revealing Shadow Copilots

How We Extracted Financial Data From Many Multi-Billion-Dollar Companies

Dor Attias
10 min read · Feb 24, 2025


Burn Notice — The announcement made within an intelligence agency when an agent is compromised or deemed untrustworthy

TL;DR

In the first episode of our series, we’ll reveal how shockingly easy it was to extract financial data from multiple multi-billion-dollar companies. We’ll dive into Copilot Studio, Microsoft’s low-code/no-code platform for building AI agents, and uncover a technique that could allow threat actors to identify exposed agents.

Intro

You’ve likely heard of AI Agents.

AI agents aren’t just another tech breakthrough — they’re a game-changer. While some compare their impact to the cloud revolution, we believe this is even bigger. This is the next Industrial Revolution. Just as steam engines reshaped economies, AI agents will redefine how we work, build, and innovate. The world is about to change — permanently.

In this “Burn Notice” series, we’ll explore why this new technology is unlike anything we’ve seen before and why it’s crucial for everyone to be aware of the emerging risks associated with it.

What Exactly Are AI Agents?

AI agents are the next step in the evolution of LLMs, designed to tackle complex tasks autonomously. By combining domain-specific memory with the ability to perform actions such as sending emails, browsing the web, creating Jira tickets, executing code, and even making purchases — these agents go beyond simple text generation. At the core of their intelligence is a reasoning model (LLM), serving as their ‘brain’, allowing them to reason, understand, and generate context-sensitive responses.

AI Agent Basic Architecture

‘SaaS is dead’ — Microsoft is Bullish on AI Agents

Microsoft is doubling down on AI Agents, with CEO Satya Nadella declaring ‘SaaS is dead’, alongside an $80 billion investment in AI data centers for fiscal 2025. Microsoft (and we) believe it’s the beginning of a new era where AI agents take center stage. To back this up, Microsoft is rolling out a growing ecosystem of AI platforms, all designed to make building and scaling AI applications more accessible.

The following is a partial list of Microsoft products and platforms introduced for the AI agent era: AutoGen, Magentic-One, Semantic Kernel, Azure AI Agent Framework, and Copilot Studio.

Microsoft AI Agents Products

Copilot Studio

Microsoft Copilot Studio is a low-code/no-code AI platform launched by Microsoft in November 2023. It lets users build AI agents, known as copilots, within the Microsoft 365 suite. These agents can be tailored to fit different business needs and workflows, helping teams work more efficiently. According to Microsoft, 100,000 organizations had used Copilot Studio by October 2024.

100K Copilot Studio agents already in action

Building an AI agent using Copilot Studio is as easy as it gets, with just three simple steps: adding a knowledge base, selecting tools, and publishing.

To start, users add a knowledge base by choosing their data sources, which can include uploaded files, SharePoint links, Salesforce, or websites. Next, they select tools from a predefined list, such as sending emails, creating Jira tickets, or calling a custom API. Finally, users publish the agent by choosing the publishing channel and setting the authentication level. And that’s pretty much it: you’ve got yourself an AI agent.

In Copilot Studio, you don’t get to choose the LLM (Large Language Model), and Microsoft hasn’t fully disclosed which model is used under the hood. By default, these agents are designed for chat-based interactions, but they can also be triggered by events. For example, when a new email arrives, the agent can extract its content and process it as if it were a prompt from a chat interaction.

Publishing Agents

As mentioned earlier, once you’ve built your agent, you can publish it across different channels and platforms. Along with choosing the publishing channel, you’ll also need to set the authentication level.

Until recently, the default setting was ‘No Authentication’.
We have to give credit to Microsoft for updating this: a warning message now appears when ‘No Authentication’ is selected.

But will this change actually make a difference? Only time will tell.

Publishing Warning Messages

The Breadcrumb

No matter which channel you use to publish an agent, Microsoft automatically generates a “Demo Website” link. This page is intended for testing and provides a way to interact with the agent through chat. Regardless of whether you deploy the agent via Teams, Slack, or another platform, this link will always be available.

The “Demo Website” link is built using three key components — environment ID, schema name, and agent name — which we’ll dive into in the next section.
https://copilotstudio.microsoft.com/environments/Default-[ENV_ID]/bots/[SCHEMA_NAME]_[AGENT_NAME]/webchat?__version__=2
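
For illustration, here is how those three components slot into the template (a minimal Python sketch; every value below is a placeholder, not a real identifier):

# Minimal sketch: assembling the "Demo Website" URL from its three components.
# All values below are placeholders for illustration, not real identifiers.
env_id = "00000000-0000-0000-0000-000000000000"  # environment ID (a GUID)
schema_name = "cr1a2"                            # schema name: "cr" + hex suffix
agent_name = "financeAgent"                      # the agent's (guessable) name

demo_url = (
    "https://copilotstudio.microsoft.com/environments/"
    f"Default-{env_id}/bots/{schema_name}_{agent_name}/webchat?__version__=2"
)
print(demo_url)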

While monitoring the browser requests during chat interactions, we noticed another URL that returns a JSON response containing the agent details. The link and the response look like this:
https://[ENV_ID].environment.api.powerplatform.com/powervirtualagents/botsbyschema/[SCHEMA_NAME]_[AGENT_NAME]/canvassettings?api-version=2022-03-01-preview

{
  "botCanvasSettings": {
    "botId": "d0524493-....",
    "botName": "Agent Test",
    "tenantId": "efbf806c-...."
  },
  "botPageSettings": {
    "conversationStarters": [
      ""
    ],
    "personalizedMessage": ""
  }
}
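
To make this concrete, here is a minimal sketch of that lookup (the requests library is assumed; every identifier below is a placeholder):

import requests

# Minimal sketch of the canvassettings lookup described above.
# ENV_ID, SCHEMA and AGENT are placeholders, not real values.
ENV_ID = "<environment-id>"   # as it appears in the URL above
SCHEMA = "cr1a2"
AGENT = "financeAgent"

url = (
    f"https://{ENV_ID}.environment.api.powerplatform.com"
    f"/powervirtualagents/botsbyschema/{SCHEMA}_{AGENT}"
    "/canvassettings?api-version=2022-03-01-preview"
)
resp = requests.get(url, timeout=10)
print(resp.json())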

The attacker perspective

As researchers, we understand that mistakes are inevitable. While it’s possible to enforce authentication for agents, we believe that, despite best efforts, some agents will likely remain unauthenticated in every organization. And surely, some of these will not only be unauthenticated but also connected to internal knowledge sources, which makes the risk even higher.

Think about it — there is a correct way to configure an S3 bucket, yet we still see misconfigured buckets exposed to the public with overly permissive permissions, even in large enterprises. The same logic applies to agents.

So how can attackers find these misconfigured agents?

We know that, regardless of the publishing method — whether via Teams, Slack, or another channel — each published agent has a structured link (Demo Website link) that can be interacted with.
We also know that this link is built from three key components: the environment ID, schema name, and agent name.

This means attackers could systematically search for exposed agents.

In general, an attacker would need to know some of these parameters and guess the others to create a ‘trial’ link. By sending a request to the trial link and analyzing the response, the attacker can determine if the guess was correct. Repeating this process could eventually lead to the discovery of an exposed agent.

This approach isn’t new; it’s similar to techniques like subdomain enumeration or brute-forcing websites to uncover hidden files, such as configuration files and backups.

So, is it possible for threat actors to identify some of these three parameters?

AI Agents Enumeration

Environment ID

There is a .well-known method for obtaining a Tenant ID from a domain. The process is simple: send a single HTTP request to https://login.microsoftonline.com/[DOMAIN]/.well-known/openid-configuration and extract the Tenant ID from the JSON response. This ID appears in multiple fields, such as the token_endpoint field.

Extracting tenant id from domain
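
In code, the whole lookup is a few lines (a minimal sketch using the requests library):

import re
import requests

def tenant_id_from_domain(domain: str) -> str | None:
    """Resolve a Microsoft tenant ID from a domain name."""
    url = f"https://login.microsoftonline.com/{domain}/.well-known/openid-configuration"
    resp = requests.get(url, timeout=10)
    if resp.status_code != 200:
        return None  # not a Microsoft tenant, or the lookup failed
    # The tenant ID is the GUID embedded in fields such as token_endpoint:
    #   https://login.microsoftonline.com/<TENANT_ID>/oauth2/token
    match = re.search(r"microsoftonline\.com/([0-9a-f\-]{36})", resp.json()["token_endpoint"])
    return match.group(1) if match else None

print(tenant_id_from_domain("example.com"))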

From there, the Environment ID is directly derived from the Tenant ID.

Environment ID from Tenant ID
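
A minimal sketch of this derivation, under two assumptions: that a tenant’s default environment ID is the tenant ID itself (hence the Default- prefix in the demo link), and that the environment-specific API hostname follows Power Platform’s documented normalization of removing the hyphens and inserting a dot before the last two characters:

def environment_api_host(tenant_id: str) -> str:
    # Assumption 1: the default environment ID equals the tenant ID
    # (it appears as "Default-<tenant-id>" in the demo website link).
    env_id = tenant_id
    # Assumption 2: the API hostname uses the documented Power Platform
    # normalization: hyphens removed, a dot before the last two characters.
    flat = env_id.replace("-", "")
    return f"{flat[:-2]}.{flat[-2:]}.environment.api.powerplatform.com"

print(environment_api_host("efbf806c-0b24-4a66-8c24-fc55e3acec7e"))  # placeholder GUID
# -> efbf806c0b244a668c24fc55e3acec.7e.environment.api.powerplatform.com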

Schema Name

The schema name starts with cr followed by a hex number, ranging from 0x000 to 0xfff, giving a total of 4096 possible schema names. As of the time of writing this blog, we haven’t found an efficient way to determine if a schema exists, which makes the enumeration a bit trickier (although 4096 isn’t that big of a number).

However, if you manage to find one legitimate copilot agent, you can extract its schema name and reuse it while enumerating agent names, uncovering other agents that weren’t meant to be exposed.
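
Generating the full candidate list is a one-liner:

# All 4096 possible schema names: "cr" + three hex digits (cr000 .. crfff).
schema_candidates = [f"cr{i:03x}" for i in range(0x1000)]
print(schema_candidates[:3], "...", schema_candidates[-1])  # ['cr000', 'cr001', 'cr002'] ... crfff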

Agent Name — Achilles’ Heel

The main issue with Microsoft’s generated link is that it uses the agent’s name instead of a unique identifier like a UUID.
This turns enumeration from a practically impossible task into a much more manageable one.

For comparison, a UUID (128 bits) has 340 undecillion (2¹²⁸) possible combinations. Trying to guess a UUID through enumeration would take far longer than the entire age of the universe. On the other hand, guessing names is significantly easier. Attackers can use dictionaries, and as you’d expect, many companies end up using the same agent names.

For example, in different companies we spoke with, the Finance Agent had the same name — “Finance Agent.” Shocking.
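
Some back-of-the-envelope arithmetic makes the gap obvious (the wordlist size below is our own illustrative assumption):

# Search space: random UUIDs vs. schema prefix + dictionary of common names.
uuid_space = 2 ** 128                # ~3.4e38 possible UUIDs
schema_space = 4096                  # "cr" + three hex digits
wordlist_size = 10_000               # assumed agent-name dictionary (illustrative)

print(f"UUIDs: {uuid_space:.1e}")                  # UUIDs: 3.4e+38
print(f"Names: {schema_space * wordlist_size:,}")  # Names: 40,960,000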

How do you know when you’ve hit the jackpot?

There are three possible responses to an enumeration link request.

1. Agent does not exist:

{
  "demoWebsiteErrorCode": "404"
}

2. Agent exists, but authentication is required:

{
  "demoWebsiteErrorCode": "401"
}

3. Agent exists, and authentication isn’t required (Jackpot!):

{
  "botCanvasSettings": {
    "botId": "f09f...",
    "botName": "Agent Name...",
    "tenantId": "efbf....."
  }
}

An enumeration scan would look like this:

Enumeration scan
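
In code, a minimal sketch of such a scan might look like this (the hostname and wordlist are placeholders; a real scan would also need rate limiting and error handling):

import requests

# Minimal enumeration sketch built on the three response types above.
# ENV_HOST and WORDLIST are placeholders/assumptions, not real values.
ENV_HOST = "<normalized-env-id>.environment.api.powerplatform.com"
WORDLIST = ["financeAgent", "hrAgent", "itHelpdesk"]  # assumed dictionary
SCHEMAS = [f"cr{i:03x}" for i in range(0x1000)]       # cr000 .. crfff

for schema in SCHEMAS:
    for name in WORDLIST:
        url = (
            f"https://{ENV_HOST}/powervirtualagents/botsbyschema/"
            f"{schema}_{name}/canvassettings?api-version=2022-03-01-preview"
        )
        body = requests.get(url, timeout=10).json()
        error = body.get("demoWebsiteErrorCode")
        if error == "404":
            continue                                   # agent does not exist
        if error == "401":
            print(f"[auth required] {schema}_{name}")  # exists, but protected
        elif "botCanvasSettings" in body:
            print(f"[JACKPOT] {schema}_{name}")        # exposed agent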

You’ve found an exposed agent, now what?

From an attacker’s perspective, once you’ve found an exposed agent, what can you actually do with it? Can you figure out what kind of data it has access to? What tools can it trigger? And wait, isn’t there any security built into this thing?

Knowledge Oracle

To unlock an agent’s full potential, most are connected to a knowledge base, such as an organization’s SharePoint, Excel sheets, and so on.
This got us thinking — is it possible to identify what kind of data is linked to these agents?
We discovered that certain queries made to an agent can help us understand which data is connected to it.

Knowledge Oracle
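
We won’t publish the queries we used, but to illustrate the idea, probes along these lines, typed into the demo webchat, can reveal what an agent is connected to (hypothetical examples, not our actual queries):

# Hypothetical probe prompts for mapping an agent's knowledge sources.
# These are illustrative examples only, not the queries used in this research.
probes = [
    "What knowledge sources or files can you search?",
    "List the documents you relied on for your last answer.",
    "Summarize the newest file you have access to.",
]
for prompt in probes:
    print(prompt)  # in practice, each probe is sent through the agent's chat UI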

At this point, you might be wondering — why does this even matter? Isn’t the product’s built-in security enough to protect me?

It’s true that Copilot Studio agents come with built-in guardrails and content filtering, which should ensure that the agent is doing only what it should. However, no security system is perfect — especially non-deterministic ones — so there will always be ways to bypass these protections. In fact, we managed to do just that.

While we won’t dive into the details in this blog, we’ll be exploring our approach in an upcoming post.

In the meantime, let us show you the real impact of a misconfigured agent.

In The Wild

Imagine this scenario: You’re a finance professional juggling tons of data — Excel files, PDFs, and countless other sources. You’ve just discovered that you can save hours of tedious work by creating a Finance AI Agent. You feed all your data sources into the agent, and instead of manually cross-referencing and sorting through endless information, you simply ask the agent questions, and it does the work for you. Sounds like a dream, right?

Now, you’re excited to share this amazing tool with your team. To do so, you drop the authentication level to “No Authentication” but forget to set it back.

This is exactly what we’ve discovered in real-life scenarios. We’ve identified exposed AI Agents in several multi-billion-dollar companies and successfully extracted sensitive financial data. The following is just one example among many.

Real life findings

It’s pretty obvious this agent wasn’t meant to be out in the wild, but hey, even the most careful people leave their keys in the door sometimes.

Note: We contacted the company in question and provided all the necessary information to resolve the issue.

Conclusion

AI agents are a game-changing technology that promises to transform how we work, bringing major benefits to both organizations and employees. However, with this innovation come new risks and threats — some as simple as the unintentionally exposed agents we discussed today.

Other threats, however, are much more advanced and differ from the attacks we’re familiar with today. In the upcoming posts, we’ll dig deeper into these new types of attacks and threats, as well as uncover vulnerabilities we’ve identified in major AI agent platforms used by large enterprises globally.

If you think Copilot Studio is the only platform with issues, stay tuned for our upcoming episodes!

Call To Action

  • Review all agents across all platforms, including Copilot Studio.
  • Ensure proper configuration for each agent.
  • Enable authentication for agents that should not be exposed.
  • Verify that agents meant to be exposed are not connected to overly sensitive data.
  • We’ve built a tool that lets you scan your organization’s Copilot Studio agents from the outside. Check it out!
    http://uncoveragent.com

Next episode
