I’m currently working on building AI Agent workflows using n8n, and I’ve encountered a challenge related to execution speed.
In my setup, I’m using the AI Agent Tool node with three tools: CPUMetricsCurrent, DiskUsageMetrics, and MemoryUsageMetrics. However, I’ve observed that these tools are executed sequentially, which significantly slows down the overall workflow.
To improve this, I attempted the following:
Assigned all three tools to a single AI Agent node — they still executed one after another.
Created separate AI Agent nodes for each tool and triggered them from the same parent node — but again, the execution happened sequentially.
As far as I know, the easiest method to simulate parallel behavior is by using a sub-workflow and turning off the ‘Wait for Sub-Workflow Completion’ option…
That’s the general approach, but you’ll need to design your workflow logic accordingly…
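To illustrate the idea, here is a minimal sketch of an Execute Workflow node with sub-workflow waiting disabled, so the parent fires the call and moves on immediately. The node `type` and the `waitForSubWorkflow` option name are assumptions based on typical n8n workflow JSON; check your n8n version's exported JSON for the exact parameter names, and the workflow ID is a placeholder.

```json
{
  "name": "Run CPUMetricsCurrent Sub-Workflow",
  "type": "n8n-nodes-base.executeWorkflow",
  "parameters": {
    "workflowId": "YOUR_SUB_WORKFLOW_ID",
    "options": {
      "waitForSubWorkflow": false
    }
  }
}
```

With one such node per metric tool, all three sub-workflows start back-to-back instead of each blocking the next; the trade-off is that the parent no longer sees their results, which is why the workflow logic has to be designed accordingly.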
This is a challenge in n8n. As an orchestration tool, it is probably designed on the assumption that whatever actions it calls will either complete quickly or offer an asynchronous callback option.
For the latter, there is an expression variable named $execution.resumeUrl whose value you can read at any time while the workflow is running. Grab it when you call a service that supports a callback URL, then make the workflow stop and listen for the callback by adding a Wait node set to Resume: On Webhook Call.
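As a rough sketch of that pattern: an HTTP Request node passes the resume URL to the external service, followed by a Wait node that pauses until the service calls it back. The node `type` values, the `resume` parameter value, and the endpoint URL here are illustrative assumptions, not exact n8n schema; the `$execution.resumeUrl` expression itself is the real variable mentioned above.

```json
[
  {
    "name": "Call Async Service",
    "type": "n8n-nodes-base.httpRequest",
    "parameters": {
      "method": "POST",
      "url": "https://your-service.example.com/start-task",
      "jsonBody": "={{ JSON.stringify({ \"callbackUrl\": $execution.resumeUrl }) }}"
    }
  },
  {
    "name": "Wait For Callback",
    "type": "n8n-nodes-base.wait",
    "parameters": {
      "resume": "webhook"
    }
  }
]
```

The external service does its work, then POSTs to the callback URL it was given, and the workflow resumes at the node after the Wait.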
Add multiple, parallel callouts to the picture and it gets more complicated quickly. Here’s one approach to handling that, but even that has some issues related to the timing of callbacks received.
The BEST way to do this sort of thing, IMO, is with an external service that can accept and track multiple, concurrent tasks, wait for a specified “completion” condition, and then, only once, return control to the workflow. However, implementing that type of service might be more challenging than it’s worth.
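For a sense of what that external service would need at its core, here is a minimal, hedged Python sketch of the tracking piece: it registers a set of concurrent tasks, accepts each task’s completion report, and fires a single completion callback only once every task has reported. Everything here (class and method names, the idea of posting the combined results back to the workflow’s resume URL) is hypothetical scaffolding, not an existing library; the HTTP layer around it is omitted.

```python
import threading

class TaskAggregator:
    """Tracks multiple concurrent tasks and invokes a completion
    callback exactly once, after every registered task reports in."""

    def __init__(self, task_ids, on_complete):
        self._pending = set(task_ids)    # tasks still outstanding
        self._results = {}               # task_id -> reported result
        self._on_complete = on_complete  # e.g. POST results to resumeUrl
        self._lock = threading.Lock()    # callbacks may arrive concurrently

    def report(self, task_id, result):
        """Record one task's callback; fire on_complete when the last
        outstanding task reports. Duplicate/unknown ids are ignored."""
        fire = False
        with self._lock:
            if task_id in self._pending:
                self._pending.discard(task_id)
                self._results[task_id] = result
                fire = not self._pending  # this was the last one
        if fire:
            self._on_complete(dict(self._results))

# Example: three metric tasks; the callback runs only after all three finish.
collected = []
agg = TaskAggregator({"cpu", "disk", "mem"}, collected.append)
agg.report("cpu", {"load": 0.4})
agg.report("disk", {"used_pct": 71})
agg.report("mem", {"free_mb": 2048})
```

The lock is what sidesteps the callback-timing issues mentioned above: no matter which order (or how nearly simultaneously) the callbacks arrive, control returns to the workflow exactly once, with all results attached.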
Good luck finding something that suits your particular “parallel execution” scenario.
We’ve recently added batching for the AI Agent and Basic LLM Chain nodes, which allows these nodes to run in parallel across the input items. You can enable it by configuring “Batch Processing” in the AI Agent node’s options.
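As a rough illustration of where that setting lives, an AI Agent node with batching enabled might look like the fragment below. The node `type` and the exact option names (`batching`, `batchSize`, `delayBetweenBatches`) are assumptions based on typical n8n workflow JSON; verify them against a workflow exported from your n8n version.

```json
{
  "name": "AI Agent",
  "type": "@n8n/n8n-nodes-langchain.agent",
  "parameters": {
    "options": {
      "batching": {
        "batchSize": 3,
        "delayBetweenBatches": 0
      }
    }
  }
}
```

Note that batching parallelizes across *input items* (e.g. three incoming items processed concurrently), not across the tools attached to a single agent run, which may explain the behavior described in the follow-up below.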
Can you please explain how I can trigger multiple sub-workflows/nodes attached as tools in the AI Agent node in parallel using batch processing?
What configuration do I need to make?
I tried adding the Batch Processing option, but it still calls all the tools sequentially.