I’m fairly new to n8n but am loving it a lot.
My current situation.
I am creating a fairly complex flow: making API calls, creating Google Drive folders, inserting Google Sheets into them, scraping websites, and kicking off third-party processes that I then have to wait on and poll until they finish.
Long story short, it’s quite a complex workflow, and I’m really struggling to separate the various parts of the workflow and reliably test them.
How are you guys able to test and experiment without having to execute the entire workflow every time? Would love to understand your methods!
Thanks in advance
Don’t have a great answer, but key things are:
- Using the full workflow execution. Running node by node often breaks references in later nodes.
- Using `$('<node name>').item.json` references instead of the default `$node["node name"].json` you get when you drag/drop. This is crucial for referencing the correct data from upstream nodes that sit before IFs, Switches, and Merges! They can prove flakier, though, and often look like they don’t validate in edit mode but work properly when fully executed.
- Pinning data, and doing it carefully when working with multiple items from multiple nodes. I also can’t guarantee it works perfectly in combination with the points above.
- Using and calling/executing sub-workflows where you can; this breaks things out into logical, repeatable units.
- Avoiding bringing multiple mutually exclusive branches into a single branch. Edit mode just doesn’t seem to handle it cleanly, although it generally works fine when the workflow is activated. E.g. only one of these nodes will run per item, but I have a lot of common logic downstream that I didn’t want to duplicate. I’ll often just disconnect one of them while testing individual nodes downstream, but this will sometimes cause all previous nodes to trigger every time, instead of just the individual node.
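To illustrate the expression point above, here’s a sketch of the two reference styles side by side, assuming a hypothetical upstream node named `HTTP Request` (the node and field names are made up for illustration):

```
{{ $('HTTP Request').item.json.title }}   // explicit reference – resolves the matching item even across IFs, Switches, and Merges
{{ $node["HTTP Request"].json.title }}    // default drag/drop style – can pick up the wrong item after branching
```

The first form tells n8n exactly which node’s output you want for the current item, which is why it holds up better downstream of branching nodes.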
Hope that helps
Thanks for the quick response!
Okay, seems like I’m not too far off in my thinking. I’ll definitely look at the pinning data part. Would be amazing to be able to take the output of a subflow and temporarily ‘save’ it in a ‘json node’.
Wish me luck
Separating the workflow into sub-workflows is the only way to really split up the different parts and test them separately.
In the Workflow trigger node you can pin the data that should be coming in; this seems to work perfectly for me in most cases.
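For example, you can pin a small sample payload on the trigger node so the sub-workflow always starts from a known input while you test it (the field names here are made up for illustration):

```json
[
  {
    "folderId": "1AbCdEfGh",
    "sheetName": "Leads 2024",
    "status": "pending"
  }
]
```

With that pinned, you can run the sub-workflow on its own without re-executing everything upstream each time.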
@pemontto has some great pointers.
The 5th point can of course be used as he shows, but you have to be very careful with it.
With small (sub)workflows your risk is reduced and it is an awesome way of getting less duplication of nodes.