Not all hacking is fun. A lot of repetitive manual work is usually required to map the target infrastructure and decide which assets deserve attention first.
Why we built BountyMachine
The easiest-to-automate findings can have a huge impact
According to a post on Snyk’s blog analyzing the top 50 data breaches of 2016, the top two vulnerability categories (causing a total of 44% of the breaches) were A9-Known Vulnerable Components and A5-Security Misconfiguration. Ranking at number 4 (responsible for 6% of the breaches) were A2-Weak Authentication and Session Management and A6-Sensitive Data Exposure. That is a total of 50% of breaches caused by vulnerabilities that could have been mitigated if the affected organizations had been monitoring their infrastructure.
Building everything from scratch is a bad idea
There are so many existing attack techniques, with new ones being discovered and shared all the time. In such an active industry, it can be hard to keep up with the pace of change. Imagine trying to build and maintain a tool on your own that encapsulates all of these techniques quickly and effectively, as soon as they come out. That’s pretty much the state of things today: there is so much fragmentation and ‘not invented here’ syndrome among open source tooling that we end up with a pile of duplicates that are only slightly different from one another, many of which fall by the wayside through code rot and abandonment.
By spreading our focus across this spectrum of clone-tools, we let more bugs slip through, and we end up with a pile of ‘decently useful’ projects instead of a few standouts that work amazingly well and have thriving communities behind them. Somewhere along the way, we forgot the Unix philosophy for our tooling: do one thing, do it well, and make it easy to connect into other tools and workflows.
We need to do security at scale
Most organizations today have a ton of assets, and more and more we find that they just aren’t able to keep track of them all. This is a serious problem for security, because what you don’t know exists, you can’t protect. One of our priorities when building BountyMachine (BM) was to handle the kind of scope this asset sprawl creates. We designed BM with the same architectures and technologies that let the biggest companies in the world keep their IT assets running at scale. Building on the lessons of these giants, and on the best the open source world has to offer, BM is resource friendly, yet able to scale up and meet the demands posed by large scopes and complex tests.
Monitoring is important
Times are changing. It’s now common for code to be pushed multiple times a day. New assets can pop up overnight as business needs change. When you don’t have an effective way to keep track of these assets, they become your organization’s weak link. Your security team becomes blind, and as each new security update, public exploit, or attack technique becomes available, your unknown assets become your point of compromise. Or to put it another way, bad things will happen.
Building a BountyMachine
Our main objectives were to:
1. Ensure it is quick and easy to integrate new tools
2. Handle the scale of modern organizations, no matter how many assets they have
To achieve the first objective we designed BM around a loosely coupled ‘modular’ architecture; new tools and techniques can be integrated in minutes, and without having to change AllTheThings(TM) to do so:
1. Provide BM with the tool’s Docker image and configure the arguments you want.
2. Implement a little logic to parse, diff and store the output¹.
3. Specify how you want to be notified of the results (optional).
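To make step 2 concrete, a module’s parse-and-diff logic can be as small as a couple of functions. This is only an illustrative sketch, not BM’s actual code; the output format and function names are assumptions:

```python
# Hypothetical integration module for a subdomain-enumeration tool.
# The output format and hook names are illustrative only.

def parse(raw_output: str) -> set:
    """Turn the tool's raw stdout into a set of subdomains."""
    return {line.strip().lower() for line in raw_output.splitlines() if line.strip()}

def diff(old: set, new: set) -> dict:
    """Compare the previous run's results with the current ones."""
    return {
        "added": sorted(new - old),
        "removed": sorted(old - new),
    }

# Example: two consecutive runs of the tool
previous = parse("api.example.com\nwww.example.com\n")
current = parse("api.example.com\ndev.example.com\n")
print(diff(previous, current))
# {'added': ['dev.example.com'], 'removed': ['www.example.com']}
```

With hooks this small, wiring a new tool into the pipeline really is a matter of minutes.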
For the second goal, we chose to build on the backbone of technologies that have been supporting the development world for a long (by internet standards) time, including:
1. As you might have guessed, we use Docker as the encapsulation layer for both the tools we use and the components that make up BM’s own architecture. Docker lets us bundle each tool’s dependencies and configuration together without impacting other tools. Many open source security tools already have ready-to-use Docker images, so integrating them is even quicker!
2. Kubernetes allows us to automate the deployment and scaling of the infrastructure, allowing the system to scale depending on the demands placed on it by the assets BM is handling at any point in time. This allows us to process things quickly and effectively when we need to but keeps costs down when we don’t need that power.
3. Sometimes just a single tool doesn’t give us the power or flexibility we want. When building a multi-step workflow (multiple tools running one after the other in a specific order), we make use of Argo, which allows us to model these workflows as YAML files. Argo is a great new project with a really good team maintaining and developing it.
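Conceptually, such a workflow is just an ordered chain of steps where each step’s output feeds the next. The sketch below is a plain-Python analogue of what an Argo workflow expresses, with hypothetical step names; in the real thing, each step runs as its own container and the chain is declared in YAML:

```python
# Plain-Python analogue of a two-step workflow. In Argo, each step would
# be a container and the chain would be declared in YAML. The step names
# and data formats are illustrative only.

def enumerate_subdomains(domain: str) -> list:
    # Stand-in for a subdomain-enumeration step.
    return [f"www.{domain}", f"api.{domain}"]

def probe_http(hosts: list) -> list:
    # Stand-in for a step that checks which hosts answer over HTTP.
    return [f"http://{h}" for h in hosts]

def run_workflow(target: str) -> list:
    output = target
    for step in (enumerate_subdomains, probe_http):
        output = step(output)  # each step consumes the previous output
    return output

print(run_workflow("example.com"))
# ['http://www.example.com', 'http://api.example.com']
```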
How BountyMachine works
1. You give BM a target. We have an API you can use to supply targets, so you can use any input method you like as long as it can issue simple HTTP requests. Our current personal favorite is Slack Slash Commands.
2. The API drops the target in a queue, where a workflow worker picks it up. Each data type has its own queue. This worker runs the Argo workflow (one tool or a chain of tools) that is designated to handle that specific data type and then drops the output into another queue.
3. A diff worker picks up the output generated by the workflow worker. It checks the database to see whether the workflow has been run on this data before. If it has, it diffs the current output against the old output, updates the database, and drops the new data in a diff queue.
4. A notification worker listens on the diff queue and converts the diff data into one or more user-friendly formats. It is useful to have more than one output format because each output channel works best with an appropriately formatted message. For example, if you decide to send SMS notifications, you don’t want to dump all 9342 subdomains you found into the message; it’d be more useful to send just some stats and direct the user to where they can get the full data. On the other hand, when you’re sending to Slack, it’s okay to send everything in one chunk.
5. A Kubernetes Cron Job starts the whole process again on schedule.
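The diff and notification steps above can be sketched end to end. This is a simplified, hypothetical model (an in-memory dict stands in for the database, and the formats are made up), not BM’s actual implementation:

```python
# Simplified sketch of the diff worker and notification worker.
# A dict stands in for the database; names and formats are illustrative.

database = {}  # target -> last known results

def diff_worker(target: str, results: set) -> dict:
    """Compare new results against the stored ones, then update the store."""
    old = database.get(target, set())
    database[target] = results
    return {
        "target": target,
        "added": sorted(results - old),
        "removed": sorted(old - results),
    }

def format_for_sms(diff: dict) -> str:
    # Small channels get stats only, plus a pointer to the full data.
    return (f"{diff['target']}: +{len(diff['added'])}/-{len(diff['removed'])} "
            "changes. See the dashboard for details.")

def format_for_slack(diff: dict) -> str:
    # Chat channels can take the whole diff in one chunk.
    return f"{diff['target']}\nadded: {diff['added']}\nremoved: {diff['removed']}"

# First run stores a baseline; the second run yields an actual diff.
diff_worker("example.com", {"www.example.com"})
d = diff_worker("example.com", {"www.example.com", "dev.example.com"})
print(format_for_sms(d))
# example.com: +1/-0 changes. See the dashboard for details.
```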
There are still a lot of things that we can do, and we are actively working to make this project even more useful. Stay tuned for more posts!
Note: We are giving a talk at Recon Village about BountyMachine. Make sure to check it out if you’re in Vegas, or watch the video if you don’t catch it.
¹ We have made this process easy by abstracting most of the needed code, but that’s a story for another time.