Support for Parallel Execution in AI Agent Workflows on n8n

Hi Team,

I’m currently working on building AI Agent workflows using n8n, and I’ve encountered a challenge related to execution speed.

In my setup, I’m using the AI Agent Tool node with three tools: CPUMetricsCurrent, DiskUsageMetrics, and MemoryUsageMetrics. However, I’ve observed that these tools are being executed sequentially, which significantly reduces the overall performance and speed of the workflow.

To improve this, I attempted the following:

  • Assigned all three tools to a single AI Agent node — they still executed one after another.
  • Created separate AI Agent nodes for each tool and triggered them from the same parent node — but again, the execution happened sequentially.

Screenshots of my workflows are attached below.

Given this, I’d like to ask:

  1. Does n8n currently support true parallel/concurrent execution of nodes or sub-workflows?
  2. If so, what is the correct way to implement parallel execution, especially when working with multiple AI tools or agents?

Achieving parallel execution is important for our use case, where metrics need to be collected quickly and simultaneously.

Looking forward to your guidance.


Good question, I’d like to know as well!

As far as I know, the easiest method to simulate parallel behavior is by using a sub-workflow and turning off the ‘Wait for Sub-Workflow Completion’ option…

That’s the general approach, but you’ll need to design your workflow logic accordingly…
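To illustrate why fire-and-forget dispatch helps, here is a plain JavaScript sketch (not n8n code — the tool names and delays are made up) comparing sequential awaits with parallel dispatch via `Promise.all`. In n8n, the parallel case corresponds to triggering each sub-workflow without waiting for its completion:

```javascript
// Sketch only: setTimeout stands in for a metric-collection tool call.
const fakeTool = (name, ms) =>
  new Promise((resolve) => setTimeout(() => resolve(`${name} done`), ms));

async function sequential() {
  const start = Date.now();
  const results = [];
  // Each call waits for the previous one: total is roughly 300 ms.
  results.push(await fakeTool('CPUMetricsCurrent', 100));
  results.push(await fakeTool('DiskUsageMetrics', 100));
  results.push(await fakeTool('MemoryUsageMetrics', 100));
  return { results, elapsed: Date.now() - start };
}

async function parallel() {
  const start = Date.now();
  // All three calls start immediately: total is roughly 100 ms.
  const results = await Promise.all([
    fakeTool('CPUMetricsCurrent', 100),
    fakeTool('DiskUsageMetrics', 100),
    fakeTool('MemoryUsageMetrics', 100),
  ]);
  return { results, elapsed: Date.now() - start };
}
```

The trade-off is exactly the one mentioned above: once you stop waiting for sub-workflows, you need your own logic for collecting their results.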


This is a challenge in n8n. As an orchestration tool, it is likely designed on the assumption that any action it calls will either complete quickly or offer an asynchronous callback option.

For the latter, there is an expression variable named $execution.resumeUrl that you can read at any time while the workflow is running. So you can grab that value when you call a service that supports a callback URL. Then, to make the workflow stop and listen for the callback, add a Wait node and set it to Resume: On Webhook Call.
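As a concrete illustration, the HTTP Request node that kicks off the long-running service could pass the resume URL in its JSON body like this (the field name callback_url is whatever your service expects, not an n8n convention):

```json
{
  "task": "collect-metrics",
  "callback_url": "{{ $execution.resumeUrl }}"
}
```

The downstream Wait node (Resume: On Webhook Call) then receives whatever the service eventually POSTs back to that URL.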

Add multiple, parallel callouts to the picture and it gets complicated quickly. Here’s one approach to handling that, but even it has some issues related to the timing of the callbacks received.

The BEST way to do this sort of thing, IMO, is with an external service that can accept and track multiple, concurrent tasks, wait for a specified “completion” condition, and then, only once, return control to the workflow. However, implementing that type of service might be more challenging than it’s worth.
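For what it’s worth, the core of such an aggregator is small. Here’s a hedged sketch in plain JavaScript (the class name and method names are invented for illustration): it registers the tasks it expects callbacks for, and resolves a single promise exactly once, after every task has reported completion — the “return control to the workflow only once” behavior described above.

```javascript
// Sketch of the aggregation logic an external "completion" service
// would need: track N expected tasks, fire once when all are done.
class CompletionTracker {
  constructor() {
    this.pending = new Set();
    this.results = {};
    // Resolves once, when the last pending task completes.
    this.done = new Promise((resolve) => { this._resolve = resolve; });
  }

  // Declare a task we expect a callback for.
  register(taskId) {
    this.pending.add(taskId);
  }

  // Called when a task's callback arrives. Ignores unknown or
  // duplicate task IDs, so late or repeated callbacks are harmless.
  complete(taskId, result) {
    if (!this.pending.delete(taskId)) return;
    this.results[taskId] = result;
    if (this.pending.size === 0) this._resolve(this.results);
  }
}
```

In a real service you would also want a timeout for tasks that never call back; this sketch deliberately leaves that out.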

Good luck finding something that suits your particular “parallel execution” scenario.


Hi @Amit_Sahu,

We’ve recently added batching for the AI Agent and Basic LLM Chain nodes, which allows you to run them in parallel over the input items. You can enable it by configuring “Batch Processing” in the AI Agent options.
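Conceptually, batching runs the incoming items in concurrent groups rather than one at a time. A rough JavaScript sketch of the idea (processItem and the batch size are placeholders, not the node’s internals):

```javascript
// Illustrative only: process items in chunks of `batchSize`,
// with all items inside one chunk running concurrently.
async function processInBatches(items, batchSize, processItem) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    // Items within a batch run in parallel; batches run in order.
    results.push(...(await Promise.all(batch.map(processItem))));
  }
  return results;
}
```

Note that this parallelizes across input items, which is different from running the tools attached to a single agent call in parallel — which is the distinction the follow-up question below runs into.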

Hi @oleg ,

Can you please explain how I can trigger, in parallel, multiple sub-workflows/nodes attached as tools to an AI Agent node using batch processing?
What configuration do I need to make?
I tried enabling the Batch Processing option, but it still calls all the tools sequentially.

Thanks,
Amit

Hi @Amit_Sahu, did you solve this issue? I’m experiencing the same thing. I managed to work around it by creating a sub-workflow per tool and running them in parallel, but now I’m facing the issue of needing to wait for all responses to come back so I can post-process them, which is a pain to do. Wondering if you found a solution for this?

Hi @yotomations ,

Yes, I was able to solve this.
In the AI Agent node I attached a Google Gemini node as the LLM, using Google Vertex flash-2.0 as the chat model. With that, all the sub-workflows were called in parallel, although the agent still waits for all of them to finish executing. It solved the problem, since all the sub-workflow tools were called in parallel.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.