Salesforce “Go with the Flow”: How many record-triggered Flows should you have?

Markus Fröhler
Capgemini Salesforce Architects
12 min read · Jun 15, 2022

Ever since Salesforce announced at Dreamforce ’21 that Workflow Rules and Process Builder will eventually be retired, the topic has been a matter of heated discussion in the ecosystem. The replacement is Flow, the new, shiny one-stop shop for all kinds of automation. Salesforce has made huge investments into tools like Flow Trigger Explorer, Flow Orchestrator, and Flow Tests, which have resulted in an outright flood of new features.

This article tries to shed some light on best practices for solution designs based on Flow. Specifically, it is about record-triggered Flows and the question of whether to restrict yourself to one Flow per object. We want to find out if and how this familiar Process Builder pattern translates to the brave new world of Flow.

Innovation with Flow

In the True to the Core session at Dreamforce ’21, the retirement of Workflow Rules and Process Builder was first announced. Although there is no hard end-of-life date yet (probably around 2025), Salesforce plans to end the ability to create new automation with Workflow Rules in Winter ’23 and new Processes in Spring ’23. This drawdown is accompanied by continued investments in new Flow features. Salesforce’s announcement has sparked lively discussions within the community on questions like how to tackle the upcoming migration or how to make use of the new features efficiently.

Salesforce has begun to share a growing stream of content on the latest innovations for record-triggered Flows under the topic “Process Automation” on the Salesforce Admin blog. One piece that is particularly relevant to this article is an episode in the “Automate This” blog/video series called Migrate Workflow Rules and Processes to Flow, in which Jennifer W. Lee and Salesforce MVP Aleks Radovanovic discuss best practices and patterns for the migration towards Flow.

Additional materials like the Decision Guide for “Record-Triggered Automation” are part of the Salesforce Architect site. I want to highlight the last section of that guide, as it explains in technical detail how the new runtime for record-triggered Flows works internally. This can be particularly valuable if performance, bulkification, or recursion plays a major role in your solution design.

It is simply not possible to cover all aspects of the migration and the new automation tools in a single blog post. Instead, I want to provide some references to start your own self-education journey and then focus on one particular question: How many record-triggered Flows should you have per object? Is the once agreed-upon guideline for Process Builder still applicable to record-triggered Flows?

How many record-triggered Flows per object?

Originally this paradigm comes from the Apex world, where a developer or admin has no way to control the order in which multiple triggers on the same object are executed. It is therefore considered best practice to wrap everything in one trigger, which then delegates to different handler or logic classes. Some advanced trigger frameworks like Table-Driven Trigger Management (TDTM) from the Nonprofit Success Pack even go one step further and introduce a metadata-driven way to orchestrate multiple triggers on the same object. But even then there may be triggers in a managed package that interfere with the triggers managed by TDTM. From my understanding, the main rationale behind the single-trigger paradigm is to gain control and predictability over the order in which things run.
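
To make this more tangible, here is a minimal Apex sketch of the single-trigger paradigm; the object, class, and method names are hypothetical and not taken from TDTM or any specific framework. The trigger itself contains no logic and only delegates to a handler class, which centralizes the order in which the individual pieces of logic run.

```
// One trigger per object: no logic lives here, it only delegates to a handler class.
trigger AccountTrigger on Account (before insert, before update, after insert, after update) {
    AccountTriggerHandler.handle(Trigger.operationType, (List<Account>) Trigger.new);
}

// Hypothetical handler class that controls the order in which the logic runs.
public with sharing class AccountTriggerHandler {
    public static void handle(System.TriggerOperation operation, List<Account> newRecords) {
        switch on operation {
            when BEFORE_INSERT, BEFORE_UPDATE {
                setDefaults(newRecords);         // same-record field updates, no extra DML
            }
            when AFTER_INSERT {
                createFollowUpTasks(newRecords); // related-record logic, bulkified
            }
            when else {
                // other events intentionally not handled yet
            }
        }
    }

    private static void setDefaults(List<Account> newRecords) { /* ... */ }
    private static void createFollowUpTasks(List<Account> newRecords) { /* ... */ }
}
```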

A similar paradigm has been suggested, and commonly agreed upon, by Salesforce for Process Builder. This seems logical, as there is no control over the order in which Processes execute if more than one is active for a given save event (like create or update). In addition, measurements have shown that the initialization of a Process causes significant performance overhead, which can be considered another reason to reduce the number of Processes per object.

Measurements to back our arguments

Speaking of measurements, an established methodology is needed first in order to produce results that can be discussed and reproduced. In that regard I want to highlight the VirtualDreamin 2020 session “The Return of The Dark Art of Benchmarking — This Time It’s Declarative!” by Dan Appleman. He picks up the topic from his earlier talk “The Dark Art Of CPU Benchmarking”, given together with Robert Watson at Dreamforce 2016, but now with a focus on declarative automation. The key takeaway is that all benchmark numbers are lies and must always be understood in the context of how they have been measured. Experienced architects understand that those numbers can depend on a multitude of factors, such as whether you measure in an empty scratch org or a full copy sandbox, and what other automations are in place that influence the results. On the other hand, if you are aware of the methodology and the setup, you can very well compare numbers for different scenarios and back your design decisions with reproducible facts — at least until the next Salesforce release is rolled out…

Inspired by this groundwork, Luke Freeland created a whole series of blog posts on “Salesforce Record Automation Benchmarking”, in which, in addition to his findings, he provides a basic benchmark package as a Git repository containing an elegant setup for benchmark measurements. Notably, this setup does not depend on debug logs, which, as Dan Appleman pointed out, have a huge impact on the results.

I adjusted the setup and re-ran the benchmarks on an empty Summer ’22 scratch org. The main change to the automation was to always perform 10 field updates instead of a single one in order to generate some additional load. I also adjusted the entry criteria to use a formula (a new feature) that contains the ISNEW function as well as ISBLANK checks for all 10 fields.
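
To illustrate the measurement approach (this is my own simplified sketch, not the code from the repository mentioned above, and the object name is a placeholder), elapsed and CPU time around a bulk insert of 200 records can be captured with a few lines of anonymous Apex, without relying on debug logs:

```
// Anonymous Apex sketch: measure elapsed and CPU time around a bulk insert of 200 records.
// The object name is a placeholder; the benchmark repository persists its results in
// records instead of relying on debug logs, which is omitted here for brevity.
List<Benchmark_Object__c> records = new List<Benchmark_Object__c>();
for (Integer i = 0; i < 200; i++) {
    records.add(new Benchmark_Object__c(Name = 'Benchmark ' + i));
}

Long startMillis = System.currentTimeMillis();
Integer startCpu = Limits.getCpuTime();

insert records; // fires the record-triggered automation under test

Long elapsedMillis = System.currentTimeMillis() - startMillis;
Integer cpuMillis = Limits.getCpuTime() - startCpu;
System.debug('Elapsed: ' + elapsedMillis + ' ms, CPU: ' + cpuMillis + ' ms');
```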

Benchmark Results: Comparison of Automation Tools

Just as in the blog series, and also with this modified setup, I could verify the key results originally presented by Dan Appleman:

- An Apex trigger is slightly faster than a Flow, but the difference is not at all significant and should not influence your decision between pro code and low code. This finding by Dan Appleman has since been confirmed multiple times and, from my point of view, can be taken for granted. In my measurements on an empty Summer ’22 scratch org I saw a difference of a mere 50–60 ms for 200 records.

- Process Builder is really slow compared to Apex and Flow, so you can expect significant performance gains from migrating from Process Builder to Flow. If you have the chance, go ahead and impress your stakeholders with some before/after benchmarks. In my setup, I measured 760 ms for Process Builder vs. 180 ms for a corresponding before-save Flow and still only 380 ms for an after-save Flow.

- Workflow Rules seem astonishingly performant. They sit right in the middle between before-save and after-save Flows, which is remarkable considering that they need to perform an extra DML. Still, there is a benefit in migrating Workflow Rules to Flow: if you can avoid the extra save cycle by using a before-save Flow, you also skip any additional logic that might run during that cycle. And with Flow there is much more flexibility when you design entry conditions, decisions, and actions for a group of related Workflow Rules, so the number of DMLs and flow interviews can be optimized.

Measurement: single vs. multiple Flow scenario

One of the additional scenarios that Luke Freeland analysed in the article “Salesforce Single VS Multi-Object Flow Field Update Benchmark V2” is the difference between a single and multiple record-triggered Flows per object. I adapted the scenarios and basically came to the same conclusions with the Summer ’22 Release. In my setup, the single Flow was built so that every decision node is evaluated, just as if the conditions were evaluated in a Process with “Continue” after each action.

Both outcomes of each decision are linked to the next decision.
Setup: Single Flow Multi Criteria Field Update

The corresponding setup for multiple Flows guards each Flow with strict entry conditions, so that no unnecessary flow interviews are created. In that scenario, no significant difference can be detected at all. Of course, this is only one example, and for your requirements a different setup might be more interesting. With the Git repository you can easily build and measure according to your specific needs. It would also be possible to deploy the framework in your sandbox and measure in a more production-like scenario — but be aware that other automations, validations, data volume, etc. will have an impact on the results.

Benchmark Results: Single vs. Multi Flow

Based on these findings, I conclude that performance overhead is no longer an argument to justify a restriction to one record-triggered Flow per object/save event. And with the new Flow Trigger Explorer and full control over the Flow trigger order, the second argument is also obsolete. So, from my point of view, there is no longer a need to dogmatically put every automation into one monolithic Flow, and I see no reason for a strict recommendation of one Flow per object, save event (create, update, delete), and trigger event (before-save, after-save).

But what now? Should we go to the other extreme and create Flows at random for each new requirement? On top of all those simple Flows that the handy migration tool from Salesforce produces for each of your Workflow Rules, this will most likely end in chaos. Even if you are able to control the trigger order, at some point the sheer number will go beyond what the human mind can handle — or what can be displayed on a single screen in the shiny new Flow Trigger Explorer.

One note on the migration tool: I can see the effort the teams at Salesforce have put into it, and I’m looking forward to its extension for Processes. From my perspective, it is a very helpful tool to get a quick translation of your existing automations into the “language” of Flow. In addition, the tool points you to any specialties that require extra consideration, such as shifting logic to before-save, or certain edge cases that might behave differently in Flow and thus require extra care during regression testing. But I don’t recommend the tool as a point-and-click solution to quickly finish off your migration project; treat what it produces as a starting point to understand and adapt rather than as the complete solution. And under no circumstances should the tool be used directly on Production, even though the handy “Switch automation” button might tempt you to.

Recommendation

Salesforce has reduced technical restrictions and given us more control, allowing for design decisions based on business requirements instead of technical limitations. This, however, does not mean that there are simple answers. Rather, it enables us to provide well-architected solutions to our customers and maximize their return on investment.

My recommendation to fellow Architects, Admins, and Developers is to apply common design principles. One that plays a big role here is tight versus loose coupling. Tightly coupled automations allow for more optimization but lead to increased complexity. For example, if you manage to combine similar business logic into a single Flow, you can re-use the evaluation of a common entry condition or optimize the number of queries and DMLs. This is very much like in Apex, although it does not yet have the same potential as Apex triggers, since there is no way to share context between before-save and after-save Flows. In Apex you can use static variables for that purpose, which opens a whole extra level of potential optimizations.
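
As a hypothetical illustration of the static-variable technique just mentioned (the object, class, and field names are made up for this sketch), an Apex handler can compute a value once in the before-save phase and re-use it in the after-save phase of the same transaction:

```
// Hypothetical sketch: a static variable shares context between the before-save and
// after-save phases of the same transaction, so an expensive evaluation runs only once.
public with sharing class OrderTriggerHandler {
    // Static state survives across trigger invocations within one transaction.
    private static Map<Id, Decimal> cachedDiscounts = new Map<Id, Decimal>();

    public static void beforeUpdate(List<Order> newOrders) {
        for (Order o : newOrders) {
            Decimal discount = calculateDiscount(o); // expensive evaluation, done once
            cachedDiscounts.put(o.Id, discount);
            o.Description = 'Discount applied: ' + discount; // same-record update, no extra DML
        }
    }

    public static void afterUpdate(List<Order> newOrders) {
        // Re-use the values computed in the before-save phase instead of recalculating.
        List<Task> followUps = new List<Task>();
        for (Order o : newOrders) {
            if (cachedDiscounts.get(o.Id) > 0.2) {
                followUps.add(new Task(WhatId = o.Id, Subject = 'Review high discount'));
            }
        }
        insert followUps;
    }

    private static Decimal calculateDiscount(Order o) {
        return 0.1; // placeholder for the actual business logic
    }
}
```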

But tight coupling always comes at the cost of increased complexity. If the resulting Flow is hard to understand, or subject to frequent conflicting changes by different teams and thus error-prone, it might be better to split it up. Everybody who has ever had to review a pull request or resolve a merge conflict in a big Process Builder XML file can attest to that. This argument is further strengthened by the lack of automated tests that you could simply re-run after such a merge to be at least somewhat confident about the result.

Loosely coupled automations, on the other hand, have fewer internal dependencies but may imply redundant work. This is the case not in terms of technical overhead, as our measurements showed, but in terms of the actual logic: separate Flows may need to execute the same queries to get their input data, or they will update the same records. The Flows tend to be smaller and easier to maintain, but if there are hidden dependencies between them, the benefit may only be superficial.

A domain-driven approach that identifies related business logic and combines it into a single Flow seems a natural first step to me. In contrast, if requirements come from different business units, it is probably not a good idea to place them in the same Flow. In general, every design decision is a tradeoff between conflicting goals. It definitely helps you and your colleagues if you document the rationale behind those decisions, which can range from using self-explanatory names and the description fields to maintaining extended documentation of the reasoning in a design document.

Remaining feature gaps

At the customer where I currently work, we rely heavily on platform events to break complex business logic into small, easy-to-handle transactions. At the end of one part, a platform event is published. A Process subscribes to this event and, depending on certain criteria, executes other automations, mostly in the form of Flows. This concept is similar to what Flow Orchestrator introduces, but it predates Orchestrator by quite some time. Flow Orchestrator additionally covers Screen Flows with interactions from different users.
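
As an illustration of this pattern (the platform event and its fields are hypothetical), publishing such an event from Apex is a single EventBus.publish call; a Flow can publish the same event with a Create Records element on the event object:

```
// Hypothetical example: publish a platform event at the end of one unit of work so that
// decoupled subscribers (a Process or a platform event-triggered Flow) can continue the
// business process in their own, smaller transactions.
public with sharing class OrderStepPublisher {
    public static void publishStepCompleted(Id orderId, String stepName) {
        Order_Step_Completed__e evt = new Order_Step_Completed__e(
            Order_Id__c = orderId,    // record the next step should work on
            Step_Name__c = stepName   // criterion the subscribers can filter on
        );
        Database.SaveResult result = EventBus.publish(evt);
        if (!result.isSuccess()) {
            // Publishing failed: log the errors instead of silently dropping the event.
            for (Database.Error err : result.getErrors()) {
                System.debug(LoggingLevel.ERROR, err.getMessage());
            }
        }
    }
}
```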

For this scenario we currently face a feature gap between Process Builder and platform event-triggered Flows, because the latter still cannot contain subflows. For record-triggered Flows that run after a record is saved or before a record is deleted, this capability was added in Winter ’22. But according to Diana Jaffe’s answer during “Release Readiness LIVE Summer ’22: Admin Preview Live” (timecode 1:05), this feature for platform event-triggered Flows is a rather low priority in the Flow backlog, the reason being that platform event subscriptions by Processes with Flow actions have not been used frequently enough so far.

You may find similar feature gaps when you start to assess your legacy automations. I expect that we will see some exceptions to the announced retirement roadmap if such gaps cannot be closed soon. In your org this may lead to a co-existence of legacy and new tools. Nevertheless, it is a good idea to have the target scenario well defined and to start shifting towards it wherever possible.

Concluding thoughts

To conclude, I suggest seeing the migration as a chance to streamline your architecture and consolidate the number of automation tools in use. As pointed out in this article, Salesforce has provided us with more freedom for design-oriented decisions. Technical restrictions that enforced certain patterns, like the one-Process-per-object paradigm or fine-grained, isolated Workflow Rules, have now been overcome with Flow. It can be considered best practice to make use of the new options, under the given constraints, to build future-proof and well-architected solutions.

The switch to Flow can also be an opportunity to challenge the business and discuss whether specific parts of the implementation are still needed, or whether more elegant alternative solutions are available by now. Instead of a lift-and-shift migration, I would highly recommend taking a step back and starting with a design phase that takes all of the currently available features into account.

If you haven’t done so yet, also consider setting up a way to document your automations. A natural initial approach might be on a per-object basis; the visualization in Flow Trigger Explorer can serve as an inspiration. If you manage to supplement this with the other automations from triggers and legacy tools, it has the potential to give a comprehensive overview. Such documentation should ideally illustrate where in the Order of Execution the different automations are located, as this is a very frequent source of bugs. Sometimes it can even be helpful to illustrate scenarios that span multiple objects, for example transitions from Quote to Order or from Order to Contract. Your team members and successors will be grateful for proper documentation, and it can serve as a foundation for discussions about new features and the solutioning of new requirements.
