How to implement concurrency in n8n, such as calling APIs concurrently


  • I am working with n8n to automate a workflow that involves processing a large volume of data.
  • My workflow involves reading 100 rows of data from an upstream database.
  • Each row of data needs to be processed by making an API call.
  • The result from the API call is then written back to the database.


  • I want to optimize the workflow to execute API calls and database writes concurrently.
  • Specifically, I am aiming for 10 concurrent executions to improve efficiency and reduce processing time.

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Welcome to the community @xuxiang!

That is possible with the HTTP Request node by enabling the option “Batching”:
[Screenshot (2023-12-30): the “Batching” option in the HTTP Request node settings]
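For intuition, the effect of capping yourself at 10 in-flight requests can be sketched as a small promise pool in plain JavaScript. This is not n8n's internal implementation, just an illustration of the pattern; `callApi` is a placeholder for your real API request:

```javascript
// Minimal promise pool: process `items` with at most `limit` concurrent calls.
// `callApi` is a placeholder for your real API request (an assumption here).
async function processWithLimit(items, limit, callApi) {
  const results = new Array(items.length);
  let next = 0;

  async function worker() {
    while (next < items.length) {
      const i = next++; // claim the next item (JS is single-threaded, so no race)
      results[i] = await callApi(items[i]);
    }
  }

  // Start `limit` workers that drain the shared index together.
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, worker));
  return results;
}

// Example: 100 "rows", at most 10 concurrent simulated API calls.
async function main() {
  let inFlight = 0;
  let peak = 0;
  const rows = Array.from({ length: 100 }, (_, i) => i);

  const out = await processWithLimit(rows, 10, async (row) => {
    inFlight++;
    peak = Math.max(peak, inFlight);
    await new Promise((r) => setTimeout(r, 5)); // simulate network latency
    inFlight--;
    return row * 2;
  });

  console.log(out.length, peak); // peak never exceeds 10
}

main();
```

The Batching option does this for you inside the HTTP Request node; the sketch only shows why capping concurrency shortens total runtime without flooding the API.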

Hi @xuxiang,

If your need is limited to API requests, jan’s advice will work perfectly. However, if you want more flexibility, I suggest using RabbitMQ. After fetching the data your flow will work with, push the items to a queue as messages and process them in another workflow with the RabbitMQ trigger. Combine the options that best fit your case, such as limiting parallel processing to X, acknowledging a message when the flow is done, etc.
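The queue pattern described above can be sketched in plain JavaScript. An in-memory queue stands in for RabbitMQ here (the class, the `handleItem` handler, and the prefetch value are illustrative assumptions, not n8n or amqplib API): a fixed number of consumers pull messages, and each message is acked only after its handler finishes.

```javascript
// In-memory stand-in for a RabbitMQ queue with a prefetch-style concurrency cap.
// A real setup would use the n8n RabbitMQ trigger (or amqplib); this only
// illustrates the pattern: limited parallel consumers + ack on completion.
class SimpleQueue {
  constructor(messages) {
    this.pending = [...messages];
    this.acked = [];
  }
  pull() {
    return this.pending.shift(); // undefined when the queue is empty
  }
  ack(msg) {
    this.acked.push(msg); // mark the message as successfully processed
  }
}

// Run up to `prefetch` consumers; each acks its message only after the
// handler resolves, mirroring "ack when the flow is done".
async function consume(queue, prefetch, handleItem) {
  async function consumer() {
    for (let msg = queue.pull(); msg !== undefined; msg = queue.pull()) {
      await handleItem(msg); // the "workflow" for this message
      queue.ack(msg);        // only ack once processing succeeded
    }
  }
  await Promise.all(Array.from({ length: prefetch }, consumer));
}

// Example: 20 messages, at most 5 processed in parallel.
const queue = new SimpleQueue(Array.from({ length: 20 }, (_, i) => `item-${i}`));
consume(queue, 5, async (msg) => {
  await new Promise((r) => setTimeout(r, 5)); // simulate the downstream work
}).then(() => console.log(queue.acked.length)); // all 20 acked
```

With real RabbitMQ you get the same behavior from the consumer prefetch count, plus durability: unacked messages are redelivered if a worker dies mid-flow.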

Hope it helps.


My typical scenario involves receiving tasks via HTTP from upstream, performing data integration processing, and then having a downstream LLM handle about 5 concurrent requests. I haven’t seen concurrency configuration options in the Basic LLM Chain or OpenAI Chat Model nodes.

The earlier suggestion to manage concurrency through RabbitMQ is interesting. Are there specific examples of this, and how can different tasks be limited to different levels of concurrency? This approach also requires introducing an additional external component. Is n8n considering incorporating concurrency management capabilities in the future? This would be very useful for many application scenarios, such as API calls (for example, to LLMs).
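One way to sketch the "different limits per task type" idea in plain JavaScript (everything here, including the task-type names and the caps, is an illustrative assumption, not an n8n feature) is a per-type limiter: each task type gets its own counter, and tasks beyond the cap wait for a slot to free up.

```javascript
// Illustrative concurrency caps per task type (assumed names and values).
const limits = { llm: 5, apiCall: 10 };

// Returns a function that runs async tasks with at most `limit` in flight.
function makeLimiter(limit) {
  let active = 0;
  const waiters = [];
  return async function run(task) {
    // Re-check after every wake-up so the cap is never exceeded.
    while (active >= limit) await new Promise((r) => waiters.push(r));
    active++;
    try {
      return await task();
    } finally {
      active--;
      const next = waiters.shift();
      if (next) next(); // wake one queued task
    }
  };
}

// Example: 20 "LLM" tasks, at most limits.llm (5) running at once.
const runLlm = makeLimiter(limits.llm);
let current = 0;
let peak = 0;
const tasks = Array.from({ length: 20 }, () =>
  runLlm(async () => {
    current++;
    peak = Math.max(peak, current);
    await new Promise((r) => setTimeout(r, 5)); // simulate an LLM call
    current--;
  })
);
Promise.all(tasks).then(() => console.log(peak)); // stays at or below 5
```

In a RabbitMQ setup, the equivalent is one queue per task type with its own consumer prefetch count, so each task class gets its own cap without any shared code.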

@jan @samaritan

Is this solution similar to starting the workflow directly from a webhook?

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.