Building a Product Security Team

Ryan McGeehan
Starting Up Security
Nov 20, 2015


Your company writes software. When does it make sense to build a Product Security team? What types of “security person” do you hire first? What do they work on?

Let’s discuss the types of employees, recruiting, and programs that a Product Security team is built from, and, more importantly, the point at which each should exist at a startup or more established company.

Important: Security is more of a concept than an organization. Similarly, you wouldn’t build out a “scale” team either, since it’s everyone’s responsibility to build for scale. Don’t expect this to be the team that quietly eliminates the risks without involvement from all of engineering.


Here we will discuss the different archetypes you will hire into a Product Security program. These archetypes rarely fit perfectly, and the better the candidate, the more of these roles they’re likely to fill.


Leadership pushes security into the mainline culture, which is reflected in your codebase. Security bugs are not found unless actively sought out — it is hard work to institutionalize this type of scrutiny. A crucial aspect of leadership is to keep the shame game non-existent and keep product engineers coming to security for sage advice. Respect and reputation are crucial to staying connected with product teams.

Lastly, Leadership becomes the historian of the team: they understand the big mistakes made in the past and use those lessons to promote defenses in the future. When the team goes into incident response mode, this role should be the calm champion who keeps things cool.

Recruiting: Leadership skills are just as important in this role as in any other. Because a security team risks being excluded by engineers, you need to plan for someone who can bring disparate technical teams together well. Technical mentorship potential is good, but that can come from non-leadership roles too. A leader who acts as a hyper-connected, culture-establishing figure in the company is more valuable, especially if there are no well-established norms around security.


Holding impressive communication skills, this role is proactively invited into discussions by product teams. They can appear very similar to your leadership roles in skillset, except they’re useful on a project level as opposed to driving the whole culture. They’re able to give critical feedback that is well received and are well versed in explaining vulnerability nuances and approaches to a fix. Disparate engineering teams will include them early in their development process, and the Consultant can easily make their way into projects that forgot to involve security. They are a “face” for the team when security needs to make a good impression across the organization. This shouldn’t be undervalued, especially as companies become larger and more political.

Recruiting: Strong employees like this are common among those who have done short-term security consulting gigs for years on end, interacting with all types of difficult personalities over audit findings. The tradeoff is that they may lack the skillset to own follow-up projects over a long period, or have little experience implementing the solutions they’ll frequently recommend. A good consultant understands product goals, and can find a compromise that ultimately mitigates risk without going overboard. Do not fill out the team with this role; keep it a minority of the team.


The true software engineer with long-term plans to eliminate risk. The type who can build a Brakeman, create CSP, or build an immune system. They could fix one-off bugs, but they shouldn’t; they are better suited to work on systemic vulnerabilities. For instance, if product engineers are working with disparate authentication platforms (inconsistent or useless logs, password storage, protocols), the Builder unifies them underneath one platform. Big projects and big wins.

Recruiting: This sort of approach requires a very strategic thinker who can tackle the highest-value mitigations over time. If they’re easily distracted, the investment will never pay off. Seek out a track record of shipping complex code over the long term, where big problems are targeted. Security experience is actually pretty irrelevant here if they can lean on their team, though it is certainly helpful. Hire for this until it’s the majority role.


An adversarial mind who is optimized to violate your expectations of security. Can show you if something is resilient to an attack or not. These sorts of roles frequently trade off their efforts to develop long term defenses for a breadth of knowledge of attack techniques. You’ll ask “is this secure?” to this person frequently.

The most entry level breaker may not even be able to write code at all and could still surprise a seasoned software engineer with an exploit. This doesn’t mean you should hire them — sustainable ties with an engineering organization are better held by comparable engineers.

Note: Some companies explicitly punt all “breaking” to external consultants, in an attempt to maintain a stricter engineering culture.

Recruiting: You do not want to hire the magician who impresses you with blackhat tricks and a knowledge of a few exploit tools. You want to find that individual who can thoroughly understand a new and complex system and then scare you with its potential. They should wield their knowledge to discover unpredictable and risky outcomes. Bug bounty programs have yielded some amazing candidates in my experience when you find a culture fit.


This engineer lives for troubleshooting critical bugs, owning the commit, and shepherding the short term fix. They value their volume of commits. They’re all over the codebase and become that broad knowledge base of how each bit of infrastructure works with the next. Of the good engineers I’ve known in roles like this, they’re pretty unhappy with the state of code they work with and draw their motivation from this lack of satisfaction.

Rather than a dedicated hire, this role should be encouraged in every engineer on every team.

Recruiting: Hiring a dedicated ‘fixer’ role is not worth it unless the person is useful in other categories as well; otherwise they’ll burn out. It’s irresponsible to hire exclusively for this role, which is a desire (or symptom) of not wanting to institutionalize “fixing” as it relates to security and only responding to the frequent issues that come up. Usually this role emerges naturally in an engineering org; it’s rarely sought out.


You may have a few extra-significant risk areas. It may be privacy, from hosting significant amounts of user content. Maybe it’s crypto, if you’re a bitcoin company. Or malware research, marketplace abuse, or credit card fraud. Having a couple of people serve as in-house authorities on a specialized risk will be useful. I’m sure Uber was happy to get the Jeep hackers for this specific reason.

Recruiting: It may be tough to interview for a specialist role if you can’t call them out on bullshit. They’ll know more than you on the subject matter, by definition. Consider bringing on a consultant to sit in on an interview loop. They will still need to be a culture fit and be evaluated on potential output, despite their specialization. Unless there is an incredible volume of work for them, they still need to fulfill other roles. I am very cautious when hiring for a single specialist role in Product Security.


As the team grows you’ll discover the burdens of scheduling external consultant reviews for disparate products (and their follow-up mitigations), dealing with vendor checklists, sometimes talking to potential customers’ security teams, and managing the overall success of various programs. A rockstar PM will be able to shepherd the construction of high-impact frameworks or complex refactors with disparate engineers.

Recruiting: A very traditional PM role, with the exception that you’ll want someone who has mapped security lingo to urgency so they can understand the priority of issues that come up. The various programs Product Security runs will produce valuable metrics on their success, insight into where vulnerability is being created, and confidence in which investments have reduced the greatest risks. Find someone who is data driven and can shepherd engineering projects. Don’t hire for this until there are programs to manage.


You may have a disproportionate ratio of mainstream product engineers compared to folks working on Product Security, as far as the organization goes. However, you’ll find as you beef up security consciousness all around, champions around the company will come out of the woodwork to help out. They’ll work on other teams, and that feeds collaboration. You want these allies anywhere you can get them, and they’re an intangible that makes all the difference.

Recruiting: This isn’t really a hiring exercise, but more of creating an approachable group and pushing a mission. Then help will arrive!


There are various programs you will develop and maintain as ongoing security efforts. In the next section, we’ll discuss timing. All are described very categorically; most companies do not follow these models exactly. Much of this is shared with or borrowed from the Collin Greene article on Modern Product Security, but I’ll be discussing it more from a team-building point of view.


Most applications should have data-driven opportunities to discover attacks or suspicious behavior that could lead to security improvements. Zane Lackey has a lot to say on this subject. A mindset towards instrumentation, similar to how you’d monitor for performance issues, can reveal vulnerability. Example: a spike in successful password resets may reveal exploitation leading to account takeover. Or, if a certain sensitive API is suddenly used heavily, maybe it’s the result of a successful XSS. This is more typical at mature companies with larger user bases and a better understanding of what “normal” is, which makes anomalies easier to discover.
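As a minimal sketch of that kind of instrumentation (the 24-hour window and 3-sigma threshold here are illustrative assumptions, not recommendations), a trailing-baseline spike detector over hourly password-reset counts might look like:

```python
from statistics import mean, stdev

def spike_alerts(hourly_counts, window=24, threshold=3.0):
    """Flag hours where the count exceeds the trailing mean by
    `threshold` standard deviations: a crude spike detector."""
    alerts = []
    for i in range(window, len(hourly_counts)):
        baseline = hourly_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            sigma = 1.0  # avoid divide-by-zero on flat baselines
        if (hourly_counts[i] - mu) / sigma > threshold:
            alerts.append(i)
    return alerts

# A day of normal password-reset volume, then a sudden spike.
counts = [5, 6, 4, 5, 7, 5, 6, 4, 5, 6, 5, 4,
          6, 5, 7, 5, 4, 6, 5, 5, 6, 4, 5, 6, 90]
print(spike_alerts(counts))  # the spike at index 24 is flagged
```

Real pipelines would alert a human rather than auto-block, since legitimate events (a breach disclosure elsewhere, a mass email) also cause reset spikes.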


The main aspects here are working with external researchers, connecting them with a fix internally, and paying or recognizing them. My bias is towards using HackerOne. I write a lot about this, and those posts cover most of my thoughts. Disclosure programs carry rewards at all stages of a company, big or small. A strong Program Manager can take submission data and always have a data-driven roadmap for the team.


This program keeps an ongoing eye on repositories for suspicious commits or newly introduced vulnerabilities during active development. Git has ‘hooks’, GitHub has webhooks, and Phabricator has ‘Herald’, as examples of ways to insert security eyeballs into ongoing development, or to reject diffs entirely (with follow-up education for the engineer) via comprehensive linting. For example: if scary evaluated code is committed, have a workflow that automatically rejects the diff and mails the engineer instructions on working through it with Product Security.
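The linting step of such a hook could be sketched roughly like this; the pattern list and the simplified diff handling are illustrative assumptions, not a production linter:

```python
import re

# Patterns a hook might reject outright; this list is illustrative.
DANGEROUS = [
    (re.compile(r"\beval\s*\("), "eval() on dynamic input"),
    (re.compile(r"\bpickle\.loads\s*\("), "unpickling untrusted data"),
    (re.compile(r"subprocess\..*shell\s*=\s*True"), "shell=True command execution"),
]

def lint_diff(diff_text):
    """Return (line_no, reason) findings for lines a commit adds."""
    findings = []
    for line_no, line in enumerate(diff_text.splitlines(), 1):
        if not line.startswith("+"):
            continue  # only inspect added lines, not context/removals
        for pattern, reason in DANGEROUS:
            if pattern.search(line):
                findings.append((line_no, reason))
    return findings

diff = "+result = eval(user_input)\n-safe = int(user_input)"
for line_no, reason in lint_diff(diff):
    print(f"rejected line {line_no}: {reason}")  # flags only the added eval()
```

The rejection email with a pointer to Product Security matters as much as the rejection itself; a silent block just teaches engineers to obfuscate around the linter.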


It can be valuable to present to new engineers during their regular employee onboarding process, letting them know your secure coding practices and how to involve the product security team. It’s at this point you can teach the practice of good code review, which will set expectations when you enforce that all code have a reviewer. When discussion on a commit gets frustrating or confusing, Product Security will have to become involved and will lean on the relationships and culture they’ve built so far, and that starts here.


There will be various automations you can employ to discover vulnerabilities. As an example, Brakeman is a tool used internally at Twitter to find Ruby bugs. Google uses its extensive computing power to beat the hell out of their products, too. Microsoft does the same with their Office product. These will create issues that need to be verified, triaged, and fixed with their respective teams. This is operational and will go on for the life of the company.
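To make the verify-and-triage step concrete, a CI gate might parse a scanner’s report and surface only high-confidence findings for immediate filing. This sketch assumes the general shape of Brakeman’s JSON report (a list of warnings with a confidence field), heavily simplified:

```python
import json

def high_confidence_warnings(report_text):
    """Pull out the findings worth filing as issues right away."""
    report = json.loads(report_text)
    return [w for w in report.get("warnings", [])
            if w.get("confidence") == "High"]

# Example report shaped like a static-analysis JSON output.
report = json.dumps({"warnings": [
    {"warning_type": "SQL Injection", "confidence": "High"},
    {"warning_type": "Mass Assignment", "confidence": "Weak"},
]})
for w in high_confidence_warnings(report):
    print(w["warning_type"])  # only the SQL Injection finding
```

Lower-confidence findings still need eventual triage; routing them to a queue instead of failing the build keeps the signal-to-noise ratio high enough that engineers trust the gate.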


Consulting firms will gladly come in and review code for vulnerabilities. This is useful, though expensive, and never ending.

Note: Audits don’t secure anything, you do!

When I left Facebook, we had some sort of audit happening every working day of the year by one of a few third parties. It was sporadic and infrequent early on, but soon became large enough to require constant management. This involves giving third parties access to code, locking down time to review their results, and working on any remediations with them. It is mostly a logistics issue that PMs can handle the bulk of.


There should be a routine process to decide where your largest risk areas are in the product, and project manage efforts to build protective frameworks or refactor code to eliminate them. These would be the longer term efforts that “builders” would be working on. This is usually a straight up engineering team that operates like any product team would, except with goals around security.


At Facebook we called this the “Post Mortem”. It’s incredibly useful. On an infrequent basis, maybe bi-weekly, review the highest severity incidents in the company with executive leadership. This can be downtime, outages, embarrassments, high severity bugs, breaches, you name it. Develop a rating system and stick to only including the real bad stuff. In the meeting itself, focus on constructive, root-cause discussion without finger pointing or blame games, in an effort to get to the truth. Facebook uses the DERP model, which is written up with the important details on the cultural norms for a post mortem meeting.


Let’s discuss approximately when it makes sense to launch these sorts of programs within your company. These projections are very rough. Launching each program earlier is better for security, but risking a failed program is the worst thing for the security organization’s overall reputation. Here’s a skeleton:


Around now the team is prototyping or building v1 of a product. You don’t need any dedicated security team members at this point, and existing engineers should be pitching in. Code reviews should be enforced from the get-go; this will have huge payoffs later on. Commit hooks should start being written to look for dangerous evaluated functions, DIY database abstractions, and leaked tokens / passwords… Make it easy for a new engineer to write secure code. New engineers should be taught about the security-relevant areas of the codebase, how to review code, and when to involve peers. Longer-term solutions to categorical vulnerability should be kept in mind before you’re so far down the road that you realize it’s a debt.
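As a sketch of one such commit hook, a naive scanner for leaked tokens and passwords might look like the following; the patterns are illustrative assumptions and far from exhaustive (real hooks also use entropy checks and vendor-specific token formats):

```python
import re

# Illustrative credential patterns; not a complete list.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key id
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"""(?i)(password|secret|token)\s*=\s*['"][^'"]{8,}['"]"""),
]

def scan_file(text):
    """Return line numbers that appear to contain a credential."""
    hits = []
    for line_no, line in enumerate(text.splitlines(), 1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(line_no)
    return hits

sample = "db_host = 'localhost'\npassword = 'hunter2hunter2'"
print(scan_file(sample))  # -> [2]
```

Wired into a pre-commit hook, a nonzero hit count would block the commit and point the engineer at your secrets-management docs instead.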


At this point you should have had an external group come in and audit your product. It will parlay well into launching the product alongside a disclosure program (without bounties). This will help reduce the low-hanging fruit you’ll see reported externally and help you graduate later into a bountied program with more activity.

You should start considering any high value security contributors who are interested in joining.


You should be confident enough to reward bounties in your disclosure program. There should also be enough historical data to have comprehensive commit hooks that identify bad code. Hiring should focus on “builder” roles at this point, and you should find “fixers” who have emerged and reward them greatly. If you are frequently launching stuff that needs review and feel like you rely too heavily on consultant review, it would be good to hire someone who fits the “breaker” role well, and there may be a specialist on board by now for any high risk area. You should be nurturing a leader from whoever has stepped up, or starting to look for one for your next phases.


There should be programs worth bragging about, though you’ll probably have wide gaps. Bounties should be increasing, and third party auditing should be more frequent. Leadership should be bragging about the automation they’ve created that lets engineers move fast without any security burden. New engineers should know exactly how and when to involve the security team. Team members who identify with “consultant” roles should have awesome reputations and no problem approaching teams with scary projects going on. Product engineers should be developing with platforms that Product Security has developed and supports for them.


If you’ve been working on this from the beginning, you should just keep maturing these programs and measuring reduction in risk, higher values of bounties, and happier customers. The team should have all the required roles and skills at this point and have no problem speaking publicly about their constellation of security programs.


This is a highly idealistic guide. It will be a struggle to hit all of these points. Security is sometimes thankless, but keep at it, because it matters!


I’m a security guy, former Facebook, Coinbase, and currently an advisor and consultant for a handful of startups, including HackerOne. Incident Response and security team building is generally my thing, but I’m mostly all over the place.