Is it possible to run AI Agents in parallel?

[Screenshot of the workflow setup]

Is it possible to configure the AI agent in n8n to process items in parallel, rather than sequentially, when handling large datasets (e.g., 200+ items)? I want to eliminate the aggregation step, process each item concurrently, and then group all AI responses to optimize workflow speed.

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Hey @Branislav_Brnjos,

Yes, you can use a sub-workflow approach with the "Execute Workflow" node: fan each item out to its own sub-workflow execution instead of looping over them sequentially.
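
If your data is already aggregated into a single item, a Code node placed before the "Execute Workflow" node can split it back into one item per record, so each record triggers its own sub-workflow run. Here's a minimal sketch, assuming the records live in an array field called `records` (a placeholder name; adjust to your data shape):

```javascript
// n8n Code node, "Run Once for All Items" mode.
// Splits each incoming item's "records" array (placeholder field name)
// into separate items so the downstream Execute Workflow node can
// trigger one sub-workflow execution per record.
const out = [];
for (const item of $input.all()) {
  const records = item.json.records ?? [item.json];
  for (const record of records) {
    out.push({ json: record });
  }
}
return out;
```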

Some things to watch out for with this approach:

  • Given enough items, running them concurrently can quickly exhaust your OpenAI tier rate limits. If you don't want to implement throttling (see the sketch after this list), aim to be on a Tier 2+ account.
  • If you need to aggregate all the responses at the end, you won't be able to do so synchronously. You'll have to implement "polling" or "callbacks" to check when everything is done.
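
On the throttling point, here's a rough sketch of batched concurrency in plain JavaScript (not n8n-specific); `callOpenAI` and the batch size of 20 are placeholders you'd swap for your own request function and rate limits:

```javascript
// Batched concurrency sketch: all records in a batch run in parallel,
// batches run one after another, keeping request volume under rate limits.
// callOpenAI is a hypothetical function returning a Promise of the response.
async function processInBatches(records, batchSize = 20) {
  const results = [];
  for (let i = 0; i < records.length; i += batchSize) {
    const batch = records.slice(i, i + batchSize);
    const batchResults = await Promise.all(batch.map((r) => callOpenAI(r)));
    results.push(...batchResults);
  }
  return results;
}
```

If you'd rather stay node-based, the same batching idea can be built with the Loop Over Items (Split in Batches) node plus a Wait node in between batches.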

Hi @Jim_Le, first of all, thank you for replying to my question! And thank you for confirming that it's possible. Do you know if there are any complete templates or examples available that I could look at for a better understanding?

I tried the setup from the image a week ago, but the execution never stopped running. I'm guessing that was because I wasn't aware of the polling/callback check.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.