DevOps in Salesforce: Considerations for your deployments

Aaron Allport
Slalom Technology
Published Sep 24, 2020 · 10 min read

Welcome to our fourth post in a series about building an effective Salesforce deployment pipeline that your whole team will love. Our first post introduced DevOps in Salesforce and our second covered the planning aspects to be considered. In our previous post, we looked at the implementation of a pipeline. This post describes some considerations for Salesforce deployments.

Written by Aaron Allport and Stuart Grieve


Effectively deploying profiles and permissions between environments

Profiles are “unique” in how changes are migrated between environments because Salesforce will typically take only the delta of profile changes that pertain to the artifacts being deployed. This makes it incredibly hard to keep profiles in line across environments without a dedicated tool to perform comparisons against profile extracts.

Profiles have always been a challenge. I’ve spent more time than I’d like to admit with two monitors trying to compare profile markups to validate changes as part of releases.

- Martin Gardner, Solution Principal

It can quickly become infuriating to manage a profile in the repository (the source of truth) and see an entirely different result than expected after a deployment. Additionally, profiles in the repository need to be incredibly large, as they must contain explicit true/false values for the various settings and object/field security. This is necessary because these permissions change throughout the build of the configuration and code in any Salesforce project. Since the release of SFDX, Salesforce has advocated the use of Permission Sets, as these are immutable and explicit, yet more granular and manageable for a specific set of permissions. Whilst profiles won’t go away, there is now a way of reliably managing user permissions across different environments.

Only use profiles for what cannot be contained within a permission set

Salesforce Profiles contain a lot of information. Some of this information such as login hours and allowed IP range access cannot be defined in a Permission Set, and therefore needs to remain in a Profile. This Profile-specific information is the ONLY information that needs to reside in a profile. Given the nature of Permission Sets granting additional access, the default state for a given profile is no access to anything. It would be prudent to create this “no access” profile at the earliest stages of a project from which additional profiles can be created for each business role. By combining Profiles and Permission Set Groups (explained below) that are in-line with one another, automation can be applied to the assigning of permissions. With the immutable permission-sets that are contained within Permission Set Groups and aligned to a Profile (and therefore job function), permissions can be reliably managed at the source in the repository.

From the outset there should be an agreed approach to profiles vs permission sets. Given the challenges of deploying profiles, permission sets are definitely the way to go. But they need to be documented, managed and maintained. Think about your admin setting up a new user: how do they know which permission sets or permission set groups to assign?

- Minesh Patel, Solution Principal

Permission set groups are the new Profile

With profiles now essentially “empty” of configuration, save for the items that cannot be controlled by Permission Sets, Permission Set Groups effectively represent the grouped permissions that would previously have been held in the profile. By creating Permission Sets (which by their nature are granular), these can be grouped together to form a Permission Set Group representing a business function. In practice, this means the repository will contain an equal number of non-administrator Profiles and Permission Set Groups, and a higher number of Permission Sets that represent the access required by users of Salesforce. These individual Permission Sets are re-used across different Permission Set Groups (such as a Permission Set to access and edit Accounts, Contacts, and Opportunities being re-used by the “Salesperson” and “Data Manager” Permission Set Groups).

Automate permission set group assignment

With a 1:1 ratio of Profile to Permission Set Group, it is possible to use automation to automatically assign a Permission Set Group to a user when they have a corresponding Profile assigned to them. The following describes the series of steps you’d implement in a User object trigger handler class:

  • Determine if the change was to the ProfileId field
  • Get the name of the profile being replaced (Profile object, using the old Profile ID)
  • Get the name of the profile being assigned (Profile object, using the new Profile ID)
  • Unassign the permission set group matching the name of the profile being replaced (PermissionSetGroup and PermissionSetAssignment objects)
  • Assign the permission set group matching the name of the profile being assigned (PermissionSetGroup and PermissionSetAssignment objects)

The logic could become more or less complex depending on the needs of your organization but can be a powerful tool in ensuring permission consistency between environments.
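As a minimal sketch of the steps above, the handler logic might look like the following. The class name and the convention that each Permission Set Group’s label matches its Profile’s name are assumptions for illustration; note also that `PermissionSetAssignment` is a setup object, so the DML must run in a separate transaction from the User update to avoid a mixed-DML error:

```apex
public with sharing class UserProfileSyncHandler {

    // Call from an after-update trigger on User, passing the Ids of users
    // whose ProfileId field changed. Runs @future so the setup-object DML
    // is in its own transaction (avoids MIXED_DML_OPERATION).
    @future
    public static void syncPermissionSetGroups(Set<Id> userIds) {
        Map<Id, User> users = new Map<Id, User>(
            [SELECT Id, Profile.Name FROM User WHERE Id IN :userIds]);

        Set<String> profileNames = new Set<String>();
        for (User u : users.values()) {
            profileNames.add(u.Profile.Name);
        }

        // Assumption: each Permission Set Group's label matches its profile's name
        Map<String, Id> groupIdByLabel = new Map<String, Id>();
        for (PermissionSetGroup psg : [SELECT Id, MasterLabel
                                       FROM PermissionSetGroup
                                       WHERE MasterLabel IN :profileNames]) {
            groupIdByLabel.put(psg.MasterLabel, psg.Id);
        }

        // Unassign groups matching the profile being replaced;
        // keep track of users who already hold the correct group
        List<PermissionSetAssignment> toDelete = new List<PermissionSetAssignment>();
        Set<Id> alreadyAssigned = new Set<Id>();
        for (PermissionSetAssignment psa : [SELECT Id, AssigneeId,
                                                   PermissionSetGroup.MasterLabel
                                            FROM PermissionSetAssignment
                                            WHERE AssigneeId IN :userIds
                                              AND PermissionSetGroupId != null]) {
            if (psa.PermissionSetGroup.MasterLabel ==
                    users.get(psa.AssigneeId).Profile.Name) {
                alreadyAssigned.add(psa.AssigneeId);
            } else {
                toDelete.add(psa);
            }
        }

        // Assign the group matching the newly assigned profile
        List<PermissionSetAssignment> toInsert = new List<PermissionSetAssignment>();
        for (User u : users.values()) {
            Id groupId = groupIdByLabel.get(u.Profile.Name);
            if (groupId != null && !alreadyAssigned.contains(u.Id)) {
                toInsert.add(new PermissionSetAssignment(
                    AssigneeId = u.Id, PermissionSetGroupId = groupId));
            }
        }
        delete toDelete;
        insert toInsert;
    }
}
```

A production version would also want error handling and bulk-safe guards against duplicate assignments, but the shape of the solution is the same.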

Merge Strategies for Processes and Flows

If you’re working on a greenfield project, or perhaps within a small team, chances are your project’s source control is relatively simple. It’s likely, however, this won’t always be the case. Salesforce projects tend to become vast — which is no surprise given what the platform has to offer.

If you’ve been involved in any Salesforce releases, you’re probably familiar with the most common cause of failed deployments: dependencies. In layman’s terms — when one piece of metadata references another, you’re not going to be able to deploy one without the other. Therefore, when you’re merging your changes into source control you’ll want to pay particular attention to ensuring you commit only your changes.

Now for some metadata types, this doesn’t take much effort. For example, record types. Here’s a scenario, Ron has created a new picklist field (New_Field_1__c) on the Account object which is ready to deploy for testing. He knows his new picklist will be included in the Account record types, so also retrieves them. Analyzing the diff, he sees Leslie is also working on the object. Let’s take a look at the metadata:
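The following is a trimmed, illustrative version of what that record type file could look like — the record type, field, and picklist value names are invented for this scenario:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<RecordType xmlns="http://soap.sforce.com/2006/04/metadata">
    <fullName>Customer</fullName>
    <active>true</active>
    <label>Customer</label>
    <picklistValues>
        <!-- Ron's new field, ready to deploy -->
        <picklist>New_Field_1__c</picklist>
        <values>
            <fullName>Bronze</fullName>
            <default>false</default>
        </values>
    </picklistValues>
    <picklistValues>
        <!-- Leslie's in-progress field, not yet deployable -->
        <picklist>New_Field_2__c</picklist>
        <values>
            <fullName>Pending</fullName>
            <default>false</default>
        </values>
    </picklistValues>
</RecordType>
```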

If Ron simply commits the whole record type, the CI process is going to fail because New_Field_2__c hasn’t been included. Instead, he’ll commit only the lines pertaining to New_Field_1__c — deploying his change alone and leaving Leslie to progress her feature. Unfortunately, not all metadata is this easy to dissect.

Processes and Flows are two types which, over time, can get extremely complex. Let’s take a large Service Cloud implementation, for example. The majority of the team is going to be working on the Case object one way or another. Since Salesforce recommends using one automation tool per object, if that tool is a Process — things are going to get complicated. Let’s look at a snippet of Process metadata:
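Processes are stored as Flow metadata, and a fabricated snippet shaped like the real thing (labels, variable, and field references invented here) shows how interwoven a single node already is:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Flow xmlns="http://soap.sforce.com/2006/04/metadata">
    <processType>Workflow</processType>
    <label>Case Handling</label>
    <decisions>
        <name>myDecision</name>
        <label>Is High Priority?</label>
        <rules>
            <name>High_Priority</name>
            <conditionLogic>and</conditionLogic>
            <conditions>
                <leftValueReference>myVariable_current.Priority</leftValueReference>
                <operator>EqualTo</operator>
                <rightValue>
                    <stringValue>High</stringValue>
                </rightValue>
            </conditions>
            <connector>
                <!-- Points at an action node defined elsewhere in the file -->
                <targetReference>myRule_1_A1</targetReference>
            </connector>
        </rules>
    </decisions>
    <!-- ...dozens more decisions, action calls, and formulas follow... -->
</Flow>
```

Every decision references variables, formulas, and action nodes defined elsewhere in the same file, so cherry-picking lines from a diff is far riskier than with a record type.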

Sure, a seasoned Salesforce professional could probably pinpoint their change and avoid committing any missing dependencies; but this is an admin-friendly declarative tool, and many would struggle to do this consistently.

To minimise these challenges, ensure the functional team are trained in how to deploy their own changes and the team has a consistent approach so that everyone is doing the same. But do plan in support time from your DevOps expertise, as it may take a few goes before the functional team are comfortable with the process. But once they are, it should work like clockwork.

- Minesh Patel, Solution Principal

Our advice? Outline and implement a strategy for merging complex metadata types that works for your team. For example, perhaps you have one of those seasoned wizards on your team who can take ownership. Another approach we have adopted is to carry out a single merge at the end of a sprint, or at intervals whereby the team has agreed to complete and merge any dependencies. This may, however, not fit all agile testing methodologies. Furthermore, consider splitting your processes and flows into different contexts. It is entirely feasible to have On Create and On Modify processes utilizing isNew() in your criteria. This way you can have two processes with 10 nodes each, instead of one giant with 20; all whilst being confident of what’s executing, and when.
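The split described above can hinge on each process’s criteria formula — ISNEW() is a standard formula function, though the rest of your criteria will of course depend on your org:

```
/* Criteria for the "On Create" process */
ISNEW()

/* Criteria for the "On Modify" process */
NOT(ISNEW())
```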

Working with Salesforce Shield

Salesforce Shield has its own challenges when deploying large amounts of encryption schema change, as it allows up to 100 encryption schema changes at any one time. Whilst this may not be a problem for day-to-day deployments, it does present a challenge when deploying for the first time to an environment or newly refreshed sandbox (without Shield enabled, or using a completely fresh schema). The way around this limitation is to derive one or more “pre-package.xml” files from the package.xml, each applying changes to the Salesforce object definitions only and covering no more than 100 encrypted fields at a time. The original package.xml file should remain intact. Essentially, you’re creating mini package.xml files that contain at most a few sections, concentrating on subsets of members of the CustomObject metadata type.
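One of these “pre-package.xml” files might look like the following sketch — the object names and API version are placeholders; the point is that each manifest lists only CustomObject members whose encrypted fields total under 100:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <!-- First batch: objects whose encrypted fields total under 100 -->
        <members>Account</members>
        <members>Contact</members>
        <name>CustomObject</name>
    </types>
    <version>49.0</version>
</Package>
```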


It will be necessary to manually run these “pre-builds” one or more times (for the different affected objects) to apply the encryption changes in sub-100-field batches before the full main deploy — a one-time activity whenever an environment is refreshed to a non-encrypted schema. These “pre-builds” should be limited purely to the object schema definition and therefore have no dependencies of their own. For example:

sfdx force:source:deploy --manifest manifest/pre-package-1.xml --loglevel error

What to Include in Your Build

There is absolutely no reason why everything cannot be put into the repository (or at least what the Metadata API supports). Your repository represents the single source of truth for your orgs and with a fully automated, scripted workflow, changes to the repository will make their way through the deployment pipeline. You may, or may not, choose to also include your entire repository in your releases.

Think of the package.xml as the definition of your build. The metadata referenced here will be included in your deployment, regardless of whether a change has been made or not. Consequently, every time a deployment is carried out, the content of your build is verified as Salesforce compiles the deployment. It’s easy to see why including your entire repository in the build is advantageous — since any issues introduced would cause a failure and provide instant visibility. This approach does, however, come with its own caveats. As your org grows, and more and more files are added to the repository, you can also expect your build times to increase. Sure, if you perform a Production release every few weeks it may not matter whether the deployment takes five minutes or twenty. But apply these build times to your CI process, where a large team is consolidating their work to one sandbox, and we’ve got a problem.
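To make the contrast concrete, a manifest can include everything for a given type via the wildcard member, or name components explicitly for a targeted build — the specific member names below are illustrative:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <!-- Wildcard: deploy every Apex class in the repository -->
        <members>*</members>
        <name>ApexClass</name>
    </types>
    <types>
        <!-- Explicit: deploy only a component a developer changed -->
        <members>Account.New_Field_1__c</members>
        <name>CustomField</name>
    </types>
    <version>49.0</version>
</Package>
```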

For this reason, we believe there is no ‘one size fits all’ solution for what to include in your build. Many projects will, over time, outgrow the state whereby including everything at every stage of the pipeline is feasible. The key is to assess each stage and tailor the approach by determining the priorities, such as those discussed above. For example, perhaps at the stage whereby features are validated against your development consolidation org, speed is the priority. The developer could define only the components they have changed in their build to gain visibility of a successful compile. When it comes to the stage where features are merged, perhaps introduce running unit tests. When a release climbs closer to a Production deployment, speed may no longer be the priority; instead, ensuring a level of rigor is applied to the lifecycle is likely paramount. At this stage, including everything in the build will again likely be beneficial.

‘DevOps lifecycles can be truly dynamic, and consequently it is critical that you have a strong definition of roles and responsibilities within a project team, along with an agreed structure for the frequency and nature of your deployments. Planning this out and relaying it clearly is a crucial step for a successful Salesforce implementation and the ongoing maintenance and support beyond.’

- Nathan Brunt, Consultant

In our experience, the more complex your build, the more involved your deployment automation, the more nuances and quirks of Salesforce will become apparent to you. Problem-solving is key, so knowing where to look on the dev forums and success boards is vital. Also, we’d advise keeping an eye on release notes. This post only scratches the surface in highlighting some issues you may encounter when deploying to Salesforce. We’d love to hear more on your experiences and tips/tricks, so please engage with us!

Thank you for reading along with us as we’ve taken the DevOps in Salesforce series from theory to implementation. Follow both Stuart and myself for future content updates or to get in touch.


Salesforce Solution Principal at https://www.slalom.com. Father, web developer, motorcycle rider.