It’s true: we all celebrated Process Builder as a game changer when it was introduced to our beloved Salesforce platform about two years ago. Then we found out that it had issues with bulkification. Then we learned that we can do quirky things with it and break governor limits. And we learned that it sends exception messages in a completely new way. And then there’s the issue with deploying them.
It’s not even hidden in the fine print. Anyone interested in reading the “considerations” is free to do so:
Rakesh Gupta wrote about that behaviour implicitly here
To sum it up: new Process Builder processes and Visual Workflows arrive inactive when deployed by change sets. So when you run all local tests to validate a deployment, you will never see the results of the deployed flow or process. Which is bad, because flows can break things, and unit tests can help to find the misbehaving flows.
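For reference: in the Metadata API, a Process Builder process travels as a Flow, and (in the metadata format of that era) the version number is baked into the member name. A minimal package.xml for such a change set might look like this (the member name is a made-up example):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <!-- Processes and Visual Workflows both deploy as the Flow type;
             the "-1" suffix is the flow version number. -->
        <members>Create_Followup_Task-1</members>
        <name>Flow</name>
    </types>
    <version>38.0</version>
</Package>
```

Whatever version you ship this way lands in the target org deactivated; someone has to activate it by hand afterwards.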
It can get even worse. If you model unit tests to evaluate the outcomes of flows, you will hit a dead end, because you will never see a flow’s behaviour in tests before you deploy and activate it.
Just lately, I had the following case:
We had moved a triggered record creation from a TriggerHandler to a process, to allow admins to change the triggered action. We still had a unit test in place that verified that the records were created properly, and I thought it would be best to use this unit test to monitor whether trigger and process worked well together.
We ended up with a changeset including the following:
- a new field to evaluate
- a new process
- a new outbound email
- a refactored trigger handler
All of it together moved the existing functionality from code to Process Builder but kept the unit tests and the more complex logic in the trigger handler.
But the deployment wouldn’t work because of failing assertions (rightly so: the trigger handler didn’t create the records anymore, and the process wasn’t running yet).
So we had to split the changeset into three:
- all dependencies (fields and messages) (running all local tests)
- the process itself (running no tests)
- the trigger handler (running all local tests, again)
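The same three-step split can be sketched with the Salesforce CLI against the Metadata API (directory names here are hypothetical; actual change sets are clicked together in Setup, but the split works the same way for metadata deployments):

```shell
#!/bin/sh
set -e

# 1) Dependencies (fields, email templates) -- validated by all local tests.
sfdx force:mdapi:deploy -d changeset-dependencies -l RunLocalTests -w 30

# 2) The process itself -- deployed without running tests
#    (it arrives inactive and must be activated manually in the target org).
sfdx force:mdapi:deploy -d changeset-process -l NoTestRun -w 30

# 3) The refactored trigger handler -- local tests run again and,
#    with the process now active, the assertions pass.
sfdx force:mdapi:deploy -d changeset-triggerhandler -l RunLocalTests -w 30
```

The manual activation between steps 2 and 3 is exactly the untested gap described above; the CLI only makes the ordering explicit, it doesn’t close the gap.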
That left us with an untested stage between the second and third deployment, but it worked.
There was still one flaw: at first we tried to deploy the full change set as the third step. But again it failed, because it contained the flow version again, and that deactivated the flow. D’oh!
And now for something completely different (the same)
Just now, I had an unexpected error in my CI deployments, run by Circle CI with heavy use of the Salesforce CLI and Salesforce DX. Look at this miraculous error:
=== Component Failures
TYPE   FILE                                     NAME                     PROBLEM
Error  src/flows/HandleUnsubscribe-1.flow       HandleUnsubscribe-1      The version of the flow you're updating is active and can't be overwritten.
Error  src/flows/HandleUnsubscribeFlow-1.flow   HandleUnsubscribeFlow-1  The version of the flow you're updating is active and can't be overwritten.
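The error makes sense once you remember that the version number is part of the file name: `HandleUnsubscribe-1.flow` is flow version 1, which is active in the org and therefore read-only. One possible workaround (assuming the metadata format where versions are encoded in the file name) is to ship the change as a new version file instead of overwriting the old one, sketched here on a stand-in source tree:

```shell
#!/bin/sh
set -e

# Stand-in source tree; in a real project these files already exist.
mkdir -p src/flows
printf '<Flow/>\n' > src/flows/HandleUnsubscribe-1.flow

# Ship the change as a new flow version instead of overwriting the active one:
# version 2 is created alongside version 1 in the target org.
cp src/flows/HandleUnsubscribe-1.flow src/flows/HandleUnsubscribe-2.flow

# package.xml must then list HandleUnsubscribe-2 instead of HandleUnsubscribe-1,
# and version 2 still has to be activated manually after deployment.
ls src/flows
```

This trades the "can't be overwritten" error for a growing pile of flow versions in the org, so it's a workaround rather than a fix.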
Still, I really like the concept behind Process Builder. But I’m seriously worried that these deployment issues drive people into building flows and processes straight in production. And not covering them with tests. And I’m really worried that this could be an early and unexpected end to my ventures into automated deployments.
Any ideas, solutions?
How do you handle process builder deployments?
What are your experiences with flows in DX-driven CI workflows?