7 Things Architects Should Know About Flow
If you’ve read the Architect’s Guide to Record Triggered Automation, you already know that compared to Process Builder or Workflow, Salesforce Flow is better suited to meeting the increasing functionality and extensibility requirements of Salesforce customers. What you may not know, however, are the high-level issues and best practices that architects should be aware of when using Salesforce Flow. Without further ado then, here are seven things you should know about Flow.
1. Don’t hard code
As an architect, you’re likely well aware that hard coding values in Apex is a major anti-pattern. Similarly, hard coding business logic when using Salesforce Flow is a surefire way to slow down your development process and reduce your team’s agility. Instead, consider using custom metadata, custom settings, or custom labels in your flows in the following scenarios:
- Consolidate application or organization data that is referenced in multiple places
- Manage mappings (for example, mapping a state to a tax rate or mapping a record type to a queue)
- Manage information that is subject to change or will change frequently
- Store frequently used text such as task descriptions, chatter descriptions, or notification contents
- Store environment variables (URLs or values specific to each Salesforce environment)
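The same lookup works from Apex as well as from a flow's Get Records element. As a minimal sketch, assuming a hypothetical custom metadata type named `Environment_Setting__mdt` with a `Value__c` field:

```apex
// Sketch: reading an environment-specific URL from custom metadata instead of
// hard coding it. Environment_Setting__mdt, Value__c, and the record developer
// name 'Integration_Endpoint' are illustrative, not standard names.
Environment_Setting__mdt setting =
    Environment_Setting__mdt.getInstance('Integration_Endpoint');
String endpointUrl = (setting != null) ? setting.Value__c : null;
```

Because custom metadata records deploy with your metadata, the same flow or class works unchanged in every environment; only the record values differ.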
To see how much cleaner custom metadata-driven flows can be, take a look at the before and after views of this After-Save Case flow that maps record types with queues. The solution uses custom metadata records that store a queue developer name, a record type developer name, and the department field to update on the case dynamically.
If you find your team is building dozens of screens with constantly changing logic and content, you may need a dynamic screen design pattern based on record data. If so, be sure to read the great article that Alex Edelstein, VP of Product at Salesforce, wrote on how to build a 500-screen-equivalent flow using only custom object records.
2. Use generic inputs and outputs in Apex actions
Flow is fantastic, but it does have its limitations — especially around queries, large data volumes, and working with collections. If you find yourself creating loops upon loops or hitting element execution limits, then it’s time to have reusable Apex do some heavy lifting for you.
Previously you could invoke Apex from Flow, but you had to write your code for a specific object or data type. Those days are gone, with Flow now supporting generic inputs and outputs. Generic code used in invocable actions will amplify your Flow capabilities across the board: Use one action for as many flows as you want!
Do you have the resources on your team (and the time) to write this additional Apex? Great! But you should also be aware of two outstanding repositories available for prebuilt Flow-invoked actions: the Automation Component Library and UnofficialSF.
3. Use subflows for cleaner, reusable, and more scalable flows
If your flow looks like the one below, you probably need a subflow (or many) — both to simplify it and to make it easier to understand the business process underlying this automation.
Here are some additional use cases for subflows:
- Reuse: If you are doing the same thing in your flow multiple times, or doing the same thing you did in another flow, call a subflow to do it for you. (Such subflows are analogous to utility classes in the development world.)
- Complex processes and subprocesses: If your flow involves multiple processes and branching logic, then consider making use of a master flow that launches other child flows. For example, you might have a master flow for managing contact data that launches separate subflows for associating contacts with companies, checking contact data, and managing contact relationships.
- Permission handling: Let’s say you have a screen flow running in user context but you need to grab a system object that the user doesn’t have access to. This is a prime use case for a subflow! By using a subflow with elevated permissions, you can temporarily grant the permissions the user needs to continue the flow. Be extremely careful when using these in a community, especially if a guest user is running the flow.
Benefits of subflows:
- Make changes once instead of in multiple different places
- Take advantage of clean, concise, and more organized flows
- Maintain a single place for organization-wide behaviors like sending a consistent error email or showing the same error screen across flows (for example, by passing in the error message as an input variable)
Downsides of subflows:
- Subflows are not yet supported in record-triggered flows
- Debugging can be tricky with subflows. When using Lightning components and Apex actions you don’t always get detailed errors if the error occurred in a subflow.
- More collaboration between groups (and testing) is required when changes need to be made to the subflow.
- There is some UX overhead with subflows, so don’t go overboard and make unnecessary subflows. In programming, helper classes can inherit certain characteristics of other classes, which is not the case with subflows.
4. Create an error handling strategy for your flow-based automations, just like Apex
Before you send your team off to build flows, think about what should happen if an error occurs. Who should be notified? Should it generate a log record?
One of the most common mistakes for those who are new to building flows is not building in fault paths to handle errors (think of it as exception handling). The two most common uses for fault paths in this context are showing an error screen for screen-based flows or sending an email alert to a group of people for record-triggered or autolaunched flows.
For enterprise-grade implementations, consider using a comprehensive logging strategy in which flows log errors in the same place as your Apex code. You can use a tool like Nebula Logger to write to a custom Log object when your flow fails or just have an elevated-permission subflow create log records for you if you don’t need anything fancy.
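If you don’t need anything fancy, the logging action can be very small. This is a minimal sketch of a flow-invocable logger writing to a hypothetical custom `Log__c` object (Nebula Logger provides a much richer, production-ready equivalent):

```apex
// Sketch: a flow-invocable logging action. Log__c and Message__c are
// hypothetical custom object/field names; substitute your own log schema.
// 'without sharing' mirrors the elevated-permission subflow idea so that
// any running user can write a log record.
public without sharing class FlowErrorLogger {
    @InvocableMethod(label='Log Flow Error')
    public static void log(List<String> messages) {
        List<Log__c> logs = new List<Log__c>();
        for (String message : messages) {
            logs.add(new Log__c(Message__c = message));
        }
        insert logs; // bulk-safe: one DML for all invocations in the batch
    }
}
```

Wire this action into every fault path (or into a shared error-handling subflow) so Flow failures land in the same log object your Apex uses.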
5. Understand how scheduled flows affect governor limits
Selecting your record scope when setting up a scheduled flow has major governor limit ramifications. The record scope is defined in the Start element, where you choose the object and filter conditions for the flow.
When you specify the record scope in this way, one flow interview is created for each record retrieved by the scheduled flow’s query, and the system processes the interviews in batches of 200 records, much like in Apex. This means your Flow DML is consolidated per normal bulkification behavior. If your entry conditions retrieve 800 records and you need to update all of them, Salesforce will do so in four batches of 200.
The maximum number of scheduled flow interviews per 24 hours is the greater of 250,000 or the number of user licenses in your org multiplied by 200. If you need to work with more records than that, it’s a good idea to use an invocable action (and not specify your scope in the entry criteria in the Start element). Keep in mind that although the flow is bulkified, the flow iteration limit still applies per interview: if any record in your scope causes its interview to exceed the 2,000 element execution limit, you will get errors.
Let’s say you decide not to use the record scope functionality and instead perform a Get Records, using the results as your source. In this scenario, Salesforce treats your flow as one transaction, meaning you will need to spend more effort optimizing for Apex CPU time and DML because the Flow engine will not batch your records for you. Pay special attention to the executed element limit in this scenario, as well as CPU time and query sizes. You typically see invoked Apex in these scenarios, since the code (not the Flow engine) handles your DML/CPU/SOQL optimization. There isn’t a right or wrong way to run a scheduled flow, as both approaches have pros and cons.
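As a sketch of the invoked-Apex pattern mentioned above, the flow can hand its record collection to a small action that performs the DML in one place (names here are illustrative):

```apex
// Sketch: letting invocable Apex perform the DML for a large record set so
// the Flow engine isn't responsible for DML/CPU optimization. Class and
// label names are assumptions, not a standard API.
public with sharing class UpdateRecordsAction {
    @InvocableMethod(label='Update Records in Bulk')
    public static void updateRecords(List<List<SObject>> recordCollections) {
        // Flatten the per-interview collections into one list...
        List<SObject> toUpdate = new List<SObject>();
        for (List<SObject> records : recordCollections) {
            toUpdate.addAll(records);
        }
        // ...and issue a single bulk update instead of per-record DML in Flow.
        update toUpdate;
    }
}
```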
Knowing this, do not select a record scope and also perform a Get Records for the same set of records: each of the N interviews will re-query the same N records, effectively squaring the amount of work your flow does, and you will hit limits quickly.
For more on this, check out the official documentation on Schedule-Triggered Flow Considerations.
6. Build a bypass for data loads and sandbox seeding
This isn’t a Flow-specific best practice, but it’s a good idea to include a bypass in your triggers and declarative automation. With such a bypass strategy you can disable any and all automations for an object (or globally) if you ever need to bulk import data. This is a common way to avoid governor limits when seeding a new sandbox or loading large amounts of data into your organization.
There are many ways of doing this — just be consistent across your automations and ensure your bypasses are all in one place (like a custom metadata type or custom permission).
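One common approach is a custom permission checked at the top of every trigger handler. A minimal sketch, assuming a custom permission with the API name `Bypass_Automation` (an illustrative name):

```apex
// Sketch of a consistent bypass check: the trigger handler consults a custom
// permission before running any automation. 'Bypass_Automation' is an assumed
// custom permission API name; FeatureManagement.checkPermission is standard Apex.
public with sharing class CaseTriggerHandler {
    public static void handleBeforeUpdate(List<Case> cases) {
        if (FeatureManagement.checkPermission('Bypass_Automation')) {
            return; // running user holds the bypass permission; skip automation
        }
        // ...normal automation logic here...
    }
}
```

The same custom permission can be referenced in a flow’s entry conditions or a Decision element via `$Permission`, so one switch governs both your Apex and your declarative automation.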
7. Be aware of looping behaviors
You know that you shouldn’t put DML inside a loop. In a flow, there are two other specific situations to be aware of when looping over records.
The executed element limit
Every time Flow hits an element, this counts as an element execution. As of Spring ’21, 2,000 element executions are allowed per flow interview. Consider a scenario where you are looping over 1,500 contacts.
Within your loop, you have your loop element (1), a decision element (2), an assignment step to set some variables on the looped record (3), and a step in which you add that record to a collection to update, create, or delete later in the flow (4). Each of those four elements within the loop counts toward your execution limit, so your loop over 1,500 records results in 6,000 executed elements, three times the 2,000 limit.
When approaching this limit, you will likely need to either force a ‘stop’ in the transaction or make your flow more efficient by using invoked actions. Keep in mind that the transaction ends when a Pause element is hit or, in screen flows, when a screen or local action is shown.
For reference, see Flow Error ‘Number of Iterations Exceeded’.
Complex formula variables
From the Record-Triggered Automation Decision Guide: “Flow’s formula engine sporadically exhibits poor performance when resolving extremely complex formulas. This issue is exacerbated in batch use cases because formulas are currently both compiled and resolved serially during runtime. We’re actively assessing batch-friendly formula compilation options, but formula resolution will always be serial. We have not yet identified the root cause of the poor formula resolution performance.”
Now that you know more about the seven things that all architects should know about Flow, you may want to take some time to review your own flows and look for opportunities to remove hard coding, add subflows, implement a more consistent error handling strategy, or apply any of the other concepts covered here. Here’s a quick rundown of some of the links mentioned in this post and some other helpful resources: