Michael Kolodner

My FlowLog Object

I sometimes wonder if people are using the things I build. I can tell if they're filling out fields, of course, or creating records of a new object. But many of the things I build only happen in the background. They're flows that automate a process to speed it up or make it more efficient. And the impact of those is a whole lot harder to measure.

Freebie the Puppy holding blueprints and overseeing a construction site.

At the same time, it can be tricky to build, test, and debug automations even with the great improvements to Flow Builder. Sometimes I have trouble visualizing what is happening within the steps of a flow. I need to see the results "physically" represented so I can track where parts are moving. In a screen flow it's easy enough to add interstitial screens that just show me the values of variables or display a message to let me know where I am in the process map.


But you can't do that in record-triggered flows. They run invisibly while you're working in the user interface, doing their magic before or after the record is saved. So I often need some other way to keep track of what is going on.


At times I've used Chatter posts, either on a created/updated record or tagged to a special Chatter Group for aggregating such posts.

A Chatter Group page showing one post that says that a particular contact was updated by the Skyvia integration user today.

In a pinch I've also created Tasks, since they're easy to create and even to relate to records that are being updated.


Both of those methods work. I sometimes even still go that route if I just need something in the moment. But they're really only good for the debugging use case. They're not so good if you want to actually be able to report on how often a flow has fired or to capture structured data into fields for sorting and analysis.


Introducing FlowLog

So I built a custom object and I gave it the super creative name of "FlowLog." Because, well, I use it for logging Flows.

The record detail page of a FlowLog, showing the name of a flow, the step being logged, and notes about what happened.

Besides the name of the flow, I created fields for the Version (which, to be honest, I rarely use), the Step, and Notes. I decided to let FlowLogs be auto-numbered because that would make them easy to sort in order of creation, even when several might be created in the same instant.
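If you like to see things as metadata, here's a rough sketch of what that object and one of its fields might look like as source files. The API names, auto-number format, and field length here are placeholders I'm making up for illustration; use whatever suits you.

```xml
<!-- objects/FlowLog__c/FlowLog__c.object-meta.xml (abridged sketch) -->
<CustomObject xmlns="http://soap.sforce.com/2006/04/metadata">
    <deploymentStatus>Deployed</deploymentStatus>
    <label>FlowLog</label>
    <nameField>
        <displayFormat>FL-{00000}</displayFormat>
        <label>FlowLog Number</label>
        <type>AutoNumber</type>
    </nameField>
    <pluralLabel>FlowLogs</pluralLabel>
    <sharingModel>ReadWrite</sharingModel>
</CustomObject>

<!-- objects/FlowLog__c/fields/Step__c.field-meta.xml (one of the text fields) -->
<CustomField xmlns="http://soap.sforce.com/2006/04/metadata">
    <fullName>Step__c</fullName>
    <label>Step</label>
    <length>255</length>
    <type>Text</type>
</CustomField>
```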


Now that I have an object to store my logs, it's very simple to have a Create Record step that quickly drops in one of these records at any point in a flow.

The Assignment step that gives a FlowLog record variable values for Flow, Step, and Notes.

Easier with a Subflow

I've actually made FlowLog even easier to use by creating an autolaunched flow. Then I can drop that in as a subflow in any other flows I'm building. In another of my brilliant feats of creative naming, I call that flow FlowLog.

A flow canvas with two elements: an Assignment and then a Create Records.

No brilliant magic to that subflow. It can accept variables for the flow name, the step, and the notes, which it uses to assign a FlowLog record variable. Then it inserts that FlowLog record. (I could have just done it as a single Create Records step, but I'm used to doing the assignment before the record insertion.)
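For the curious, here's a trimmed-down sketch of what an autolaunched flow like that could look like in metadata XML. I've written it as the single Create Records variant I mentioned, left out element coordinates and some housekeeping tags, and the field API names are the same placeholders as above.

```xml
<Flow xmlns="http://soap.sforce.com/2006/04/metadata">
    <apiVersion>59.0</apiVersion>
    <label>FlowLog</label>
    <processType>AutoLaunchedFlow</processType>
    <!-- Single Create Records element that writes the log record -->
    <recordCreates>
        <name>Create_FlowLog</name>
        <label>Create FlowLog</label>
        <inputAssignments>
            <field>Flow__c</field>
            <value><elementReference>FlowName</elementReference></value>
        </inputAssignments>
        <inputAssignments>
            <field>Step__c</field>
            <value><elementReference>Step</elementReference></value>
        </inputAssignments>
        <inputAssignments>
            <field>Notes__c</field>
            <value><elementReference>Notes</elementReference></value>
        </inputAssignments>
        <object>FlowLog__c</object>
    </recordCreates>
    <start>
        <connector>
            <targetReference>Create_FlowLog</targetReference>
        </connector>
    </start>
    <status>Active</status>
    <!-- Input variables the calling flow passes in -->
    <variables>
        <name>FlowName</name>
        <dataType>String</dataType>
        <isCollection>false</isCollection>
        <isInput>true</isInput>
        <isOutput>false</isOutput>
    </variables>
    <variables>
        <name>Step</name>
        <dataType>String</dataType>
        <isCollection>false</isCollection>
        <isInput>true</isInput>
        <isOutput>false</isOutput>
    </variables>
    <variables>
        <name>Notes</name>
        <dataType>String</dataType>
        <isCollection>false</isCollection>
        <isInput>true</isInput>
        <isOutput>false</isOutput>
    </variables>
</Flow>
```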


Then I can put the subflow in multiple places within a flow, either logging steps along the path or handing it different variables to show that it's firing at the end of different decision paths.

A flow canvas with three End points after branching at Decisions, each preceded by a FlowLog element.
Each of the blue boxes is a FlowLog subflow that gets a different value for Step.
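In the flow's metadata, each of those subflow calls is only a few lines. Here's a sketch of what one might look like; the element name and the values being passed in are invented for illustration.

```xml
<!-- One of the blue boxes: a call to the FlowLog subflow at the end of a branch -->
<subflows>
    <name>Log_Updated_Email_Branch</name>
    <label>Log: Updated Email Branch</label>
    <flowName>FlowLog</flowName>
    <inputAssignments>
        <name>FlowName</name>
        <value><stringValue>Contact_Before_Save</stringValue></value>
    </inputAssignments>
    <inputAssignments>
        <name>Step</name>
        <value><stringValue>Decision: Updated Email branch</stringValue></value>
    </inputAssignments>
    <inputAssignments>
        <name>Notes</name>
        <value><stringValue>Email changed, so downstream updates will fire.</stringValue></value>
    </inputAssignments>
</subflows>
```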

Don't Repeat Myself

I don't want to have to build this each time I need it for a new client. Not that creating a custom object with four fields on it takes more than a couple of minutes. (Though creating each of those fields takes more clicks than it ought to. Ahem.) But recreating the subflow would also take a moment.


No, the way to go here is to store my work so I can install it multiple times. There are three main ways I could accomplish this:


Unmanaged Package

I could save this object and flow as an unmanaged package in a dev org and then install it in clients' orgs. You can modify the elements of an unmanaged package in your instance all you want—that's what "unmanaged" means—so this is functionally the same as building the elements in the client org.


The only thing that bothers me about an unmanaged package is that it shows up as an entry in Setup>Installed Packages. An admin might not recognize that this isn't a commercial managed package. I want this to be metadata "owned" by the org that they're free to change/remove/etc. And if that's the case, then a listing in Installed Packages just seems like clutter, to me.


Metadata Deployment

If you build in a sandbox, you can deploy your changed metadata to production or other sandboxes using Change Sets. But those deployments have to be between related instances. (Production and the sandboxes of that production instance.) I, of course, have not built FlowLog in a client's sandbox. But there are tools that can do a cross-org deployment, including Copado and Gearset. I use Copado Essentials for deployments all the time—it's much faster than change sets. So I could store FlowLog in a dev org and then use Copado to deploy it somewhere else.
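Whichever tool does the work, it's moving a pretty small set of components. Written out as a Metadata API package.xml manifest, the deployment would look something like this (same placeholder API names as above):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>FlowLog__c</members>
        <name>CustomObject</name>
    </types>
    <types>
        <members>FlowLog__c.Flow__c</members>
        <members>FlowLog__c.Version__c</members>
        <members>FlowLog__c.Step__c</members>
        <members>FlowLog__c.Notes__c</members>
        <name>CustomField</name>
    </types>
    <types>
        <members>FlowLog</members>
        <name>Flow</name>
    </types>
    <version>59.0</version>
</Package>
```

In real life you'd probably also grab the page layout, a tab, and whatever permission set grants access to the object.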


The benefit to installing this way is that the object and the flow look like they entirely belong to the client's org, as though they were built in a sandbox using clicks and then deployed to production, the same as any other objects or fields. There's no practical difference here from the unmanaged package and we've avoided that confusing entry in Installed Packages.


Code Repository (A "repo" for those in the know. 😉)

There's one final option, which is to store my metadata in a code repository, like GitHub, and then push the metadata to a connected org using Salesforce DX or CumulusCI (CCI). In terms of the function within a client org, this option is neither better nor worse than the cross-org deployment. The elements still end up in the client org looking as though they were built there.


The benefit to storing what I build in a code repository is that I could share it with others and collaborate on further development. Or even for my own purposes, it's just nice to be able to quickly spin up clean scratch orgs to work in and then save my progress or throw out work that was a dead end. (Plus the metadata is stored on GitHub and easy to back up on my computer, in case I should somehow lose access to my dev org.)

The file viewer in Visual Studio Code showing the files that define the FlowLog object, fields, and flow.

To use this method I had to learn the Salesforce command line tools and learn to use GitHub Desktop and Visual Studio Code, all of which took me outside my declarative comfort zone. (And I mostly need a cheat sheet to remember the handful of commands I even understand how to use.) But there's a very good Trailhead trail that takes you step by step through installing CCI and understanding some of the benefits of using it.


Try This At Home

Nothing I've described here is a great innovation. This is just a good idea that I think anyone can use.


You can build your own FlowLog object exactly like mine if you want to. In fact, I encourage it! (Heck—I don't even mind if you use the same name!)


I suppose I could make my GitHub repo public and you could just grab it right from there. But that seems like it's spoonfeeding you too much. I've taught you to fish. 🎣


Now go get a little practice behind the Setup gear. ⚙️
