Architect's Guide to Deploying Micro-Segmentation

Nathanael Iversen
19 min read · Jun 11, 2020


An architect or project manager for a micro-segmentation deployment often needs the greatest visibility into the desired benefits and the clearest perspective on the true impact to the organization. At the same time, delivering those results falls across a number of other people, some of whom may not even work in the same organization within IT. What does an architect or project manager need to know about micro-segmentation deployments to stand up a new project, keep it on track, and achieve optimal results? Read on! Having worked on hundreds of micro-segmentation deployments, we can share the key insights that will put your team in optimal alignment to deliver the results the business expects and needs.

This document explores the experiences and opportunities our large enterprise customers face when transitioning from traditional segmentation (such as firewalls) to micro-segmentation.

Implications of Altering the Security Model

Moving to a micro-segmentation solution alters the existing network/perimeter model in several important ways. Each of these modifications enables some of the benefits that made micro-segmentation attractive in the first place, and each of these modifications has implications for the enterprise.

The Enforcement Point Moves from Network to Host

Traditionally, security is placed at perimeter chokepoints, whether at the edge of a VLAN, the PROD environment, or the Internet. In this model, there is little direct interaction with the application, server ops, or automation teams. The change to host-based enforcement means that:

  • Operating system mix and agent support matter
  • Availability and consistency of automation and administration tools will determine how, and how quickly, agents are deployed
  • Application owners, system admins, and automation developers will interact in ways that are new to them and to the security team
  • Having security “inside the operating system” is new to the application and admin teams, and they will need to understand what it means for them

Security Policy Moves from a Mixed Denylist/Allowlist Model to a Pure Allowlist Model

Hardware firewalls use a mix of permit and deny statements, which means the order of rules in each device matters greatly. In the best micro-segmentation policies, there are only permit statements. This is the practical implementation of “zero trust” principles for segmentation, and it also removes rule-ordering concerns and allows for flexible multi-dimensional policies. It is a different way of specifying policy, and there will be a brief transition period as policy authors learn the new way to express their intent. The result is a much simpler policy that is easier to “read”, which makes audits and compliance verification far easier. Expect to spend time educating the audit and compliance teams on the new policy model.
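
To make the allowlist model concrete, here is a minimal sketch in Python. The Flow shape and rules are invented for illustration, not any vendor's actual policy format; the point is that evaluation is order-independent, and anything unmatched is denied by default.

```python
# Minimal sketch of pure-allowlist evaluation. The Flow shape and rules
# are illustrative, not a specific vendor's policy format.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src: str    # source workload identity
    dst: str    # destination workload identity
    port: int
    proto: str

# Every rule is a permit; there are no deny statements to order.
ALLOWLIST = {
    Flow("web", "app", 8080, "tcp"),
    Flow("app", "db", 5432, "tcp"),
}

def is_permitted(flow: Flow) -> bool:
    # Order-independent: permitted if ANY rule matches, denied otherwise.
    return flow in ALLOWLIST

assert is_permitted(Flow("web", "app", 8080, "tcp"))
assert not is_permitted(Flow("web", "db", 5432, "tcp"))  # default deny
```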

Security Policy Statements Move from Network/IP-Dependent Statements to Metadata-Driven Statements

Hardware firewalls depend on IP addresses, ports, and protocols for rule-writing. All micro-segmentation vendors provide some type of labels or metadata to express policy statements without any reference to network constructs. This means that the security policy will be understandable by more than just the network or network security teams. Graphical “point-and-click” mechanisms for rule-writing also provide an almost non-technical way to author security policy. When combined with powerful role-based access control (RBAC), it becomes possible to consider distributing rule-writing more broadly within the organization. Whether this is desirable will be specific to each organization.
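
As an illustration of the difference, the sketch below expresses one rule entirely in labels and resolves it against workload metadata at evaluation time. The label keys and rule shape are invented for this example, not a particular vendor's syntax:

```python
# Hypothetical label-driven policy: the rule references metadata, never IPs.
workloads = {
    "10.0.1.15": {"role": "web", "app": "billing", "env": "prod"},
    "10.0.2.40": {"role": "db",  "app": "billing", "env": "prod"},
    "10.0.9.7":  {"role": "db",  "app": "billing", "env": "dev"},
}

# "Allow the billing prod web tier to reach the billing prod db tier on 5432/tcp."
rule = {
    "src": {"role": "web", "app": "billing", "env": "prod"},
    "dst": {"role": "db",  "app": "billing", "env": "prod"},
    "port": 5432, "proto": "tcp",
}

def matches(labels: dict, selector: dict) -> bool:
    # A workload matches a selector when every selector label agrees.
    return all(labels.get(k) == v for k, v in selector.items())

# Resolve the label-based rule to concrete workload pairs.
pairs = [(s, d)
         for s, sl in workloads.items() if matches(sl, rule["src"])
         for d, dl in workloads.items() if matches(dl, rule["dst"])]
print(pairs)  # [('10.0.1.15', '10.0.2.40')] -- the dev db is excluded
```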

Although metadata has not historically been important to the security team, the parts of the broader organization concerned with automation make heavy use of it. The confluence of security and automation teams both generating and consuming metadata means that paying special attention to metadata design, storage, and modification is a necessary and worthwhile effort that can positively affect the agility of the entire organization. This broader conversation happens best when leadership reaches across silos and workgroups and assembles the full range of affected constituents to drive a common solution.

API-Driven Security Automation Is Available

While automation and orchestration are everyday words on the application and system sides of IT, they have not been as common among network and security teams. But a good micro-segmentation solution provides a fully API-driven workflow: every capability of the platform should be accessible through the API. This means that the ability to automate security is limited only by imagination, time, and attention. It will again require cross-functional teamwork for the organization to understand the possibilities, prioritize the automation desires, and implement the resulting plans in appropriate phases. Time spent on metadata cleanliness and organization will pay big dividends when automating micro-segmentation policy.
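
As a taste of what an API-driven workflow enables, here is a short Python sketch that adds a permit rule through a hypothetical policy-engine REST endpoint. The URL, token handling, and JSON payload are placeholders; consult your vendor's actual API reference:

```python
# Sketch: add a permit rule via a hypothetical policy-engine REST API.
# Endpoint, auth scheme, and payload shape are placeholders, not a real API.
import requests

PCE_URL = "https://policy-engine.example.com/api/v1"
TOKEN = "REDACTED"  # in practice, pull this from a secrets manager

def add_rule(src_labels: dict, dst_labels: dict, port: int, proto: str) -> dict:
    resp = requests.post(
        f"{PCE_URL}/rules",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"src": src_labels, "dst": dst_labels,
              "port": port, "proto": proto, "action": "permit"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    add_rule({"role": "web", "app": "billing"},
             {"role": "db", "app": "billing"}, 5432, "tcp")
```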

In each of these observations, the common thread is that this deployment will cross internal organizational lines. It will offer capabilities that have never existed, and it will generate and consume data that is new to the team. Put most simply, this is “change”, and not “more of the same”. Each organization will have its own attitude towards change and the single largest management task is to match this change with the ability of the organization to absorb it.

Building the Deployment Team

The best approach to deploying micro-segmentation involves assembling a cross-functional team. There are several core positions that will need to be sourced and filled. Each organization has its own conventions for naming, titles, and roles, so these descriptions are functional rather than organizational in nature. In smaller shops, one person may perform several of these roles; in larger organizations, whole teams may represent each function.

Executive Sponsor: Has overall responsibility for the success of the project. Ensures proper status reporting and project management are in place. Removes roadblocks and facilitates cross-functional work. Maintains exec connection to the vendor’s exec sponsor.

Project Architect/Fixer: Typically the trusted “right-hand” of the exec sponsor — the person who is technical enough to interact with a tech lead at any level, yet senior enough to reach across silos and get stuff done. Typically one of the “MVPs” on the team, and the one the team looks to when it “must be done and done right”. His or her help will be invaluable at several critical points to remove roadblocks, but not required continuously.

Tech Lead: Has first admin login to the micro-segmentation solution, develops initial policy, has overall technical project responsibility.

PM: Develops deployment plan with vendor and Tech Lead to desired milestone dates. Maintains status tracking and project coordination.

Security Policy Approver: Has responsibility to confirm the initial and subsequent security policies. The tech lead will implement what this person approves or specifies.

Agent Installer: Has responsibility to deploy agents to the target infrastructure. Typically has root access to the servers in question.

Automation Lead: Has responsibility to automate agent installation, management console installation or both. May also be responsible for metadata interface or maintenance.

Network Lead: Has responsibility for existing network security controls. Often manages internal firewalls and VLANs. The “Tech Lead” often comes from this team.

Active Directory/Windows Lead: Has responsibility for AD, can create and modify user groups, and is able to coordinate Windows deployments through GPO or SCCM.

Your vendor should provide a matching set of resources to complement the internal team.

Account Manager: Maintains overall responsibility for customer satisfaction and project completion.

Systems Engineer: Maintains overall responsibility for customer satisfaction and technical coordination for the vendor team.

VP of Customer Success: Responsible for project completion internal to the vendor. Coordinates necessary teams and resources. Maintains a reporting cadence with the customer's Exec Sponsor.

VP of Service & Support: Responsible for contracted professional service engineers, maintenance, and support contracts. Maintains a reporting cadence with the customer's Exec Sponsor.

Solutions Architects/Office of the CTO: Assist with design, architecture, and integration planning.

Professional Service Engineers: Deliver on-site assistance in all aspects of the deployment and implementation.

Project Managers: Maintain status, coordinate vendor resources to match the customer plan, and jointly own the project plan with the customer PM.

During the first set of project meetings, both teams will gather and sort out which of these functional roles will need to be filled and how the communication process will flow. We next consider the best practices for communicating project status.

Checkpoints & Managing the Process

In many ways, deploying micro-segmentation is just another IT project. There is technical work, political work, and coordination work just as with any project. This section will highlight the special considerations that make a micro-segmentation deployment as smooth as possible. These are the “best practices” that we have distilled from many large-scale enterprise deployments and can be adapted to any size deployment.

Ensure broad overview training for the entire project team

Get as many people as possible through the vendor's overview training class. This grounds attendees in the architecture, policy language, and core concepts of micro-segmentation. When the entire project team attends together, the whole design process becomes more robust. Each member of the cross-functional team listed above has a different view of IT and understands the constraints and existing architecture from a unique vantage point. When each person knows enough about deploying micro-segmentation to verbalize where complications may arise, what testing or certification must be obtained, and other such details, it is much easier to build a complete deployment plan. The sooner these perspectives are heard in the planning process, the better. If team members are brought in one by one as the project progresses, many “critical path” items will not be uncovered until weeks afterwards. Early notice gives the entire organization more time to plan and react.

Expect complete design deliverables

A micro-segmentation deployment introduces several new concepts, as we noted above. These need to be reflected in the project plan and timeline. If the plan has only technical milestones and misses the coordination, cross-team notifications, and other touchpoints, the schedule will slip by several weeks before the end. Every company organizes and tracks projects with its own local flavor, but each project should have:

  • A detailed Deployment Plan. A vendor should provide a template of the “work to be done” that contains all “lessons learned” and “best practices”. This should be integrated into an actionable deployment plan, with names and dates assigned as appropriate. The plan should be signed off by the teams executing it, and the exec sponsor should expect a reporting cadence on progress.
  • A labeling and metadata plan. Since micro-segmentation policy is based on names and labels rather than IP addresses, the team will need to assess current data sources and identify gaps. For ongoing operations, it may be necessary to create or enhance policy and workflow around how metadata is created and maintained as applications come and go.
  • A detailed initial security policy. Our fastest deployments have occurred where the deployment team had a clear policy objective at the outset of the project. Micro-segmentation is capable of more granularity than enterprises have had in the past. For most of our customers, micro-segmentation offers much more than is initially required. The wise project team will meet the project goals first and not get sucked into all that will eventually be possible with the platform. It is best to detail the initial policy and have the team run hard at that goal. After the workloads are all secured to the initial security policy, tightening can be done according to business needs on a risk-adjusted basis. Avoid “trying to boil the ocean”.
  • A list and description of all required automation and who is going to write it. Bulk deployment of agents, import of labels and metadata, and many other tasks will likely be done via automation. Depending on which team holds the tech lead position, this work may be done by other teams. This will require extra scheduling and coordination. Knowing early exactly what is needed and by when will help each team avoid surprises and stay on track.
  • A list and description of all required integration points and who will do the work. At a minimum, security and operational logs will probably be sent to a SIEM or a big-data analysis platform. Additional customization, dashboards, and other tools may be needed. Often this is an area that involves teams other than the project lead's team. Your vendor will have professional service engineers with specific expertise and will need to pull the right resources for your project. It is important to figure out who is doing what work early in the project so that schedules align.
  • A list of the internal processes and workflows that will be affected by micro-segmentation deployment. Micro-segmentation changes how your organization authors security policy. If someone alters labels or metadata, that will alter the security policy applied to a workload. In theory, this is no different than requesting a firewall rule change. However, most organizations find that there are key workflows that need to be applied or extended so that existing safeguards apply. Expect the team to identify these and raise them to appropriate levels for coordination and sign-off. This is key to smooth long-term ownership of a micro-segmentation solution, and work that can’t be done without management-level approval and coordination.

Reporting Cadence

Micro-segmentation deployments happen smoothly when there are three levels of reporting, each with its own cadence:

  1. Technical Project Status: The team lead, vendor professional service engineers, and others deeply engaged at the level of the work need to communicate regularly. We often see this scheduled anywhere from daily at the short end to a weekly status call at the long end. These calls are normally led by the PMs and tech lead and focus directly on the work. A wide range of people are invited to this call and participate as they are needed.
  2. Project Status: This is a call attended by the PMs, team lead, and interested managers and directors on both sides. Normally this is bi-weekly or monthly. It is designed to communicate overall red/green status, identify gaps, request escalations or additional resources, etc. It is not a technical call, but a project management call designed to keep both vendor and enterprise apprised of status.
  3. Executive Status: Enterprises deploy micro-segmentation to obtain specific business benefits. A monthly or quarterly cadence with the executive sponsor, vendor executives, and account manager ensures that the project stays aligned with overall business goals and priorities. It provides a standing forum for both teams to interact and prioritize appropriately for project success. This is typically a 30-minute meeting but may stretch longer before key milestones like entering enforcement for the first time.

Five Places to “Lean In”

Every organization has both unofficial and official guardrails. Each member of the team has internalized what is acceptable for them to do, and when they need some sign-off, “air cover”, or permission to take action. If the team doesn't have it, progress will stall, and it is all too easy to lose a week or more if travel schedules delay the internal meetings needed to resolve issues. In my experience, there are five places where the executive sponsor or a trusted delegate can “lean in” and accelerate the project with a decision. Each of these is also a point where something can go wrong with real consequences. The team will naturally be risk-averse at these moments, and they will be grateful when, after presenting their preparations and precautions, they are told to go ahead and execute the plan. These are also the points in the deployment plan where the management team is likely to receive phone calls, emails, or visits from the project team asking for help, approval, or guidance.

  1. Permission to install micro-segmentation agents in bulk. The server/OPS team will be sensitive at this juncture, even after full testing and validation. It is often helpful to have a “push” or executive inquiry to make sure that this happens and that all the right notifications and processes have been followed.
  2. Approving (and demanding) a move out of monitor-only modes and dealing with breakage. When agents are moved into policy building mode, they take full control of the operating system firewall or install their own. While extremely unlikely given the testing and validation that will have taken place, there is still the remote possibility of affecting a workload. At some point, the team is going to need leadership to make the call to move forward and fix whatever goes wrong vs. analyze indefinitely. Expect your vendor to have guidance on how to make this as easy as possible.
  3. Approving the initial policy guidelines and resulting rules. As discussed above, the initial policy definition is an important goal for the team. They need to know that it is correct and acceptable so that everyone can run toward it. It will also be a rallying point for the technical lead to avoid scope-creep in the project. Psychologically, most security administrators are used to deploying rulesets that have been thoroughly vetted by an existing process. Writing the first set of rules will likely not use that process, and having a defined, approved initial policy provides the air-cover and comfort for everyone to operate.
  4. Ensuring that internal workflow and process are created correctly. Before entering the PROD environment, it is important to ensure that the team has coordinated with all the right people, processes, approvers, and stakeholders. Surprise is very bad when the stakes are high. It is good to inspect the process carefully and make sure that the team hasn’t missed anyone, or any executive peers that need to know.
  5. Approving (and demanding) a move to policy enforcement and dealing with breakage. Weeks of careful implementation and testing of the initial micro-segmentation policy will have the team feeling confident about a move to policy enforcement. In enforcement, micro-segmentation will block all traffic that is not expressly permitted. This is “a big step”. It is why many organizations purchased micro-segmentation, and often auditors are going to inspect the results. In a large project, it is almost certain that something will be missed, unknown, or otherwise unaccounted for. In the days or weeks that follow, something will be blocked, and phone calls will be made. This is true with hardware firewalls, and it is true with micro-segmentation. At some point in the project, the executive sponsor will know that all reasonable preparations have been made. At that point, the team will need and want approval to proceed. Without clear leadership and communication, the organization will tend to bias towards the safety of moving sideways instead of the controlled risk of moving forward. The right executive decision will motivate the team and clearly communicate the priority of moving forward.

Managing the Vendor Relationship

Your chosen vendor wants to see your micro-segmentation project succeed. On the vendor side, we regularly communicate internally about each deployment to make sure that features, resources, and code are all available when they are needed. Treat your vendor like a strategic partner to get our best performance. When we know not only “what you need” but “why you need it” and “why you need it by when”, it is much easier to move our extended team. If you hold your vendor at arm's length, and we can only see the very next step in the project plan, we are often unable to see the big picture and bring our expertise and lessons learned until it is too late. Any project may be delayed, required to finish early, or take any number of other turns. Communicate big changes early, and your vendor will be in the best position to absorb them and help you adjust the plan and execution.

There are several key partnerships that need to form in the early weeks of the project:

  1. Solutions Architect-Professional Service Engineer-PM-Tech Lead. This is the core technical working team. Together they will do most of the technical work to make the project succeed. It is important that there is a free, open, and respectful dialog.
  2. Customer Success Architect-Director-Project Architect. This is the strategic working team. They need to know what is happening technically and be looking ahead of the project team to remove or minimize obstacles. This relationship needs to be comfortable enough that both sides can speak transparently about problems and challenges. This is the first “escalation” point for both sides if something is not going well.
  3. Account Manager-Vendor VPs-Exec Sponsor. This is the business-level working team that is responsible for results. This team handles any escalations between companies that may arise. Each side will have execution risk that should be understood and expressed at this level. This team should discuss more than the project at hand to include roadmap and additional opportunities and leverage points for micro-segmentation.

The executive sponsor who ensures each of these teams functions well will rarely be surprised on the downside and will find that most of the inevitable issues are handled without coming to executive attention except as status reports. No project is “self-managing”, but when these three levels of relationship are well-tended, projects tend to run smoothly.

Managing Operational Integration

Most micro-segmentation solutions have a few components: a central policy engine and a host-based agent at a minimum. Complexity attached to a micro-segmentation deployment comes from the fact that these two components touch so many other things in the enterprise environment. There are several “best practices” that will assist in operational integration with existing systems.

Build a QA or Pre-Prod Test Environment

While both the internal and vendor teams will naturally focus on the PROD instances of the solution, ensure that the team sets up a small QA version of the micro-segmentation solution in a non-production environment. This platform will serve several purposes. Early on, it will be a place where internal developers and automation tooling teams can test and develop code. Operations teams can test logging integrations and event handling. Internal training classes can use the system for familiarization. After the deployment is complete, this capability should be retained. Ensure that this pre-prod system manages one of each of your main OS images. That way, new vendor code can be tested in the non-prod environment against the full set of PROD operating system images before new releases roll into production. Ideally, your vendor deployment team can stand this system up as a single lightweight VM.
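
One way to keep the pre-prod system routinely useful is a small smoke test that runs before any vendor release is promoted. This is a sketch under stated assumptions: the /health and /agents endpoints, their fields, and the OS image list are invented for illustration:

```python
# Sketch: pre-prod smoke test to run before promoting a new vendor release.
# The /health and /agents endpoints and their fields are hypothetical.
import sys
import requests

QA_PCE = "https://qa-policy-engine.example.com/api/v1"
TOKEN = "REDACTED"

def smoke_test() -> bool:
    headers = {"Authorization": f"Bearer {TOKEN}"}
    # 1. The QA policy engine answers and reports healthy.
    health = requests.get(f"{QA_PCE}/health", headers=headers, timeout=10).json()
    if health.get("status") != "ok":
        return False
    # 2. One agent per main PROD OS image is paired and in policy sync.
    agents = requests.get(f"{QA_PCE}/agents", headers=headers, timeout=10).json()
    required = {"rhel8", "ubuntu22", "windows2019"}  # example PROD image set
    synced = {a["os_image"] for a in agents if a.get("policy_synced")}
    return required <= synced

if __name__ == "__main__":
    sys.exit(0 if smoke_test() else 1)
```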

Set Up and Test Logging/Event Alerting Before PROD Deployment

Unsurprisingly, OPS teams have the highest confidence when operational integration is complete before production workloads are paired. It takes time and effort to stream logs, parse them, raise alerts, and build dashboards. This work, however, provides full visibility into the health of the policy engine, the agents, and the underlying systems. It is much easier for everyone to work in sensitive production environments knowing that all the necessary instrumentation is in place. Expect your vendor's professional service engineers to bring recommendations on key log messages and to recommend alerts that have been popular with other customers.

Three different viewpoints need to be captured in the log analysis/event handling mechanism:

  1. Security. The security team will be most focused on the firewall logs and the anti-tampering mechanisms of the agent. They are always interested in policy and policy violations.
  2. OPS. The OPS team will be most focused on workload and policy engine health, and will want to know how to correlate system events with other data center events.
  3. Dashboard. Management or NOC administrators will often need a consolidated view of the micro-segmentation deployment that contains highlights and the ability to drill down.

When each of these concerns is reflected in the log/event/alert handling mechanism, confidence builds across the organization as many diverse teams realize that the project provides full integration that follows existing practice.
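
As a minimal sketch of such routing, assuming an invented JSON event format (map the event types to the message catalog your vendor actually documents):

```python
# Sketch: route micro-segmentation events to the right audience.
# The event_type values are invented for illustration.
import json

SECURITY_EVENTS = {"policy_violation", "agent_tampering"}   # security team
OPS_EVENTS = {"agent_offline", "policy_engine_degraded"}    # OPS team

def route(raw_line: str) -> str:
    event = json.loads(raw_line)
    kind = event.get("event_type", "")
    if kind in SECURITY_EVENTS:
        return "siem"        # alert the security team
    if kind in OPS_EVENTS:
        return "ops_pager"   # correlate with other data center events
    return "dashboard"       # everything else rolls up to the NOC view

print(route('{"event_type": "agent_tampering", "host": "web01"}'))  # siem
```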

Invest in Automated Workflows

A micro-segmentation deployment will provide many opportunities to automate security processes that have long been purely manual efforts. In addition, the micro-segmentation labeling effort will review existing metadata sources, enhance them, and combine them in novel ways. The resulting metadata is itself valuable and can be preserved for use by other systems and automation tasks. With a modest effort, it is common for enterprises to have better metadata after a successful micro-segmentation deployment than before it. This effort pays huge dividends in the ongoing operation and expansion of the initial micro-segmentation deployment.

Agent Installation

Deploying a micro-segmentation agent onto hundreds or thousands of systems will involve some form of automation. In some cases this will be existing tooling; in others it will be built from scratch. But in many cases, the desire will be to integrate agent installation with automated build processes. Whether this is Chef, Puppet, Ansible, Salt, or another framework, there is an opportunity to build security into the standard automated lifecycle of the enterprise. Most enterprise data centers have a mix of full automation using orchestration frameworks and legacy environments without these tools. Taking the time to work through integration with the orchestration team where possible sets the project up for success. Older environments that will not be getting the orchestration framework can be handled separately with custom scripting.
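
For those legacy environments, the custom scripting can stay very simple. The sketch below pushes an installer over SSH; the hostnames, package names, and install commands are placeholders, and your vendor's documented installer should be substituted:

```python
# Sketch: bulk agent install over SSH for hosts outside the orchestration
# framework. Hostnames, package names, and commands are placeholders.
import subprocess

PACKAGES = {"rhel": "mseg-agent.rpm", "ubuntu": "mseg-agent.deb"}
INSTALL = {"rhel": "sudo rpm -i /tmp/mseg-agent.rpm",
           "ubuntu": "sudo dpkg -i /tmp/mseg-agent.deb"}

def install(host: str, os_family: str) -> bool:
    pkg = PACKAGES[os_family]
    # Copy the installer, then run it; a failed host is reported, not fatal.
    if subprocess.run(["scp", f"packages/{pkg}", f"{host}:/tmp/{pkg}"]).returncode != 0:
        return False
    return subprocess.run(["ssh", host, INSTALL[os_family]]).returncode == 0

if __name__ == "__main__":
    legacy_hosts = [("app01.example.com", "rhel"), ("web07.example.com", "ubuntu")]
    failed = [h for h, fam in legacy_hosts if not install(h, fam)]
    print("failed:", failed or "none")
```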

Policy Engine Installation

Some of our customers also package the creation of the policy engine into their orchestration framework. If policy engine instantiation has been automated, recovery from a server crash can happen almost as quickly as the automation can build a new policy engine. Organizations with a strong DevOps motion will want to consider this.

Policy Engine Database Backup

All micro-segmentation policy engines have some kind of database behind them. If this database is corrupted or unavailable, the solution will likely not work at all or will produce undesirable results. Ensure that the OPS team has the necessary backups automated and is skilled in restoration and recovery according to your vendor's procedures.
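
A nightly backup job can be as simple as the sketch below. The pce-ctl command is a stand-in for whatever backup procedure your vendor documents, and restores should be drilled as regularly as backups are taken:

```python
# Sketch: nightly policy-engine database backup with simple rotation.
# "pce-ctl backup" is a placeholder for your vendor's documented procedure.
import datetime
import subprocess
from pathlib import Path

BACKUP_DIR = Path("/var/backups/policy-engine")
KEEP = 14  # number of backups to retain

def backup() -> Path:
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    target = BACKUP_DIR / f"pce-backup-{stamp}.tar.gz"
    subprocess.run(["pce-ctl", "backup", "--out", str(target)], check=True)
    return target

def rotate() -> None:
    # Zero-padded timestamps sort lexicographically; keep only the newest KEEP.
    for old in sorted(BACKUP_DIR.glob("pce-backup-*.tar.gz"))[:-KEEP]:
        old.unlink()

if __name__ == "__main__":
    print("wrote", backup())
    rotate()
```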

Label Assignment

The initial assignment of labels to workloads is commonly done through some kind of bulk upload into the policy engine. This will produce correct labels for the initial set of systems. Over time, labels will change: new systems will be added, and some will go away. The more this workflow is automated, the easier it will be for all involved. This will involve codifying the label assignment in internal design documentation and deciding how to store, update, and retrieve it. Your micro-segmentation solution will always use labels, but these labels may be best maintained through centralized metadata management. Your DevOps team will likely have a strong opinion about metadata management, and it is wise to include their voice.
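
For the initial bulk upload, something like the following is often enough. The CSV columns and the workload-labels endpoint are assumptions for illustration; match them to your vendor's actual bulk-import mechanism:

```python
# Sketch: bulk label assignment from a CSV export of the source of truth.
# The CSV columns and the /workloads/{id}/labels endpoint are hypothetical.
import csv
import requests

PCE_URL = "https://policy-engine.example.com/api/v1"
TOKEN = "REDACTED"

def upload_labels(csv_path: str) -> None:
    headers = {"Authorization": f"Bearer {TOKEN}"}
    with open(csv_path, newline="") as fh:
        for row in csv.DictReader(fh):  # columns: hostname,role,app,env
            labels = {k: row[k] for k in ("role", "app", "env")}
            resp = requests.put(f"{PCE_URL}/workloads/{row['hostname']}/labels",
                                headers=headers, json=labels, timeout=30)
            resp.raise_for_status()

if __name__ == "__main__":
    upload_labels("initial_labels.csv")
```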

Metadata Management

A micro-segmentation deployment builds security policy according to metadata assignments. This means that over time, your micro-segmentation solution will accumulate a set of labels and other metadata that describes how things should interact. These labels usually aren't created from scratch; they are re-used from an existing source of truth.

This provides an automation opportunity. A good micro-segmentation solution will always recompute policy when labels change. So, if the metadata is maintained outside your micro-segmentation solution, this separation of duties can be leveraged for automation. When the micro-segmentation solution references an external “source of truth”, any metadata change can notify your policy engine programmatically, and the rules will update automatically. With micro-segmentation, getting smarter about metadata management is the same as getting smarter about policy and policy management. Time spent thinking about where the metadata used to make labels is stored, and how it is updated, retrieved, and fed to a policy engine, is always a fruitful exercise. In other cases, information from the policy engine may be useful in updating existing CMDB systems. A micro-segmentation deployment provides an excellent reason to consider how metadata is used and leveraged in the organization, and it may provide the impetus to automate those improvements.
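
The pattern can be as simple as a periodic reconciliation job: diff the external source of truth against the policy engine's view and push only the changes. Both APIs in this sketch are hypothetical; it shows the shape of the automation, not a specific product's interface:

```python
# Sketch: reconcile policy-engine labels against an external source of truth
# (e.g., a CMDB). Both APIs are hypothetical; the pattern is the point.
import requests

CMDB_URL = "https://cmdb.example.com/api"
PCE_URL = "https://policy-engine.example.com/api/v1"
TOKEN = "REDACTED"

def sync_labels() -> int:
    headers = {"Authorization": f"Bearer {TOKEN}"}
    truth = requests.get(f"{CMDB_URL}/workload-labels", timeout=30).json()
    current = requests.get(f"{PCE_URL}/workloads", headers=headers, timeout=30).json()
    current_by_host = {w["hostname"]: w["labels"] for w in current}

    changed = 0
    for host, labels in truth.items():
        if current_by_host.get(host) != labels:
            # Updating labels is updating policy: the engine recomputes
            # every rule that references these labels.
            requests.put(f"{PCE_URL}/workloads/{host}/labels",
                         headers=headers, json=labels, timeout=30).raise_for_status()
            changed += 1
    return changed

if __name__ == "__main__":
    print("workloads updated:", sync_labels())
```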

Conclusion

A successful micro-segmentation deployment will improve the internal segmentation model, the policy dialog, and the level of security automation. Leading the team to that destination will involve new learning and new opportunities. Micro-segmentation will alter portions of the existing operating model and is best served by a cross-functional deployment team. As a leader, your input will be needed at several key junctures. By insisting on having the right conversations around metadata and policy development, you have the opportunity to make a lasting difference in the speed and agility of the business. You really can have fine-grained control and fast automation at the same time. I hope to hear about your success in deploying, operationalizing, and running your own micro-segmentation deployment.

Nathanael Iversen

Nathanael is the Chief Evangelist at Illumio. He helps infrastructure and security teams understand, deploy, and operationalize micro-segmentation.