I’m working on an n8n automation that takes around 8,300 rows from a Google Sheet. The workflow compares keyword research data with destination URLs and uses an LLM (Gemini) to provide an analysis and a score to determine the match between each keyword and URL.
The problem is that processing all the rows takes a very long time, around 12 hours. I’m looking for ways to speed this up.
Has anyone faced a similar situation or have suggestions on how to optimize LLM calls in n8n for large datasets? For example, batching requests, parallel processing, or any other techniques that could significantly reduce the total runtime.
To handle a large number of LLM requests, you first need to know which plan you’re on and what its limits are.
For example, if you’re using Gemini, are you on the free plan or a paid one? Which tier?
Once you’ve decided, check the following rate limits:
Requests per minute (RPM)
Tokens per minute (input) (TPM)
Requests per day (RPD)
You can find the details here:
Let’s say you’re on Tier 1 and planning to use Gemini 2.5 Flash, so your limits will be 1,000 RPM and 10,000 RPD.
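To make those limits concrete, here’s a quick back-of-envelope check (assuming one LLM request per row, which is how the workflow is described):

```python
# Rough runtime floor for 8,300 rows under the Tier 1 example limits
# (1,000 RPM, 10,000 RPD), assuming one request per row.
ROWS = 8_300
RPM = 1_000
RPD = 10_000

minutes_rpm_bound = ROWS / RPM   # best case if RPM were the only constraint
days_rpd_bound = ROWS / RPD      # fraction of the daily quota consumed

print(f"RPM-bound floor: {minutes_rpm_bound:.1f} min")
print(f"Share of daily quota: {days_rpd_bound:.0%}")
```

In other words, the rate limits alone would allow this to finish in under ten minutes, so a 12-hour runtime points to sequential processing and per-row sheet writes, not quota.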
You can try using Batch Processing in the Basic LLM Chain node, and carefully test both the batch size and the delay between batches.
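If it helps to reason about those two knobs, here’s a rough Python sketch of what the batch size / delay combination does (`call_llm` is just a placeholder for the real Gemini request):

```python
import time

def call_llm(item):
    # Placeholder for the actual Gemini call; returns a dummy result.
    return {"item": item, "score": None}

def process_in_batches(items, batch_size=10, delay_s=1.0):
    """Rough equivalent of the Basic LLM Chain's Batch Size and
    Delay Between Batches settings: send `batch_size` requests,
    pause, then continue with the next batch."""
    results = []
    for start in range(0, len(items), batch_size):
        batch = items[start:start + batch_size]
        results.extend(call_llm(item) for item in batch)
        if start + batch_size < len(items):
            time.sleep(delay_s)  # tune this against your RPM limit
    return results
```

The effective request rate is roughly `batch_size / delay_s` per second, which is the number you want to keep under your RPM limit.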
If you still need more speed, you can wrap the nodes after the Loop Over Items node in a subworkflow, turn off the Wait For Sub-Workflow Completion option, and add a time-based Wait node to throttle the dispatch.
This way, you’ll send batches without waiting, but be careful: doing so may quickly hit your rate limits if you don’t tune the batch size and delay properly.
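To see the “send without waiting, but capped” tradeoff in plain code, here’s a sketch using a bounded thread pool; `process_batch` is just a stand-in for whatever your subworkflow does:

```python
from concurrent.futures import ThreadPoolExecutor

def process_batch(batch):
    # Stand-in for the subworkflow that scores one batch of rows.
    return len(batch)

def fan_out(batches, max_parallel=5):
    """Dispatch batches concurrently, but never more than
    `max_parallel` in flight, so a burst can't blow past RPM."""
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        return list(pool.map(process_batch, batches))
```

Fire-and-forget with no cap is the equivalent of `max_parallel` being unbounded, which is exactly how you hit rate limits.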
@mohamed3nan OK, so you mean I should connect a Gemini LLM node for every batch? For instance, if I have 50 batches, should I connect each batch to its own Gemini node?
You can speed things up by increasing the batch size. Try 20–50 items per LLM call instead. Gemini can easily handle larger batches in one go.
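A quick sketch of what a packed prompt could look like; the field names and instruction wording are illustrative, not from the original workflow:

```python
def pack_prompt(rows):
    """Pack several keyword/URL pairs into a single prompt so one
    Gemini call scores 20-50 rows instead of one row per call."""
    lines = [f"{i + 1}. keyword: {r['keyword']} | url: {r['url']}"
             for i, r in enumerate(rows)]
    return ("Score how well each keyword matches its URL (0-100). "
            "Reply with one line per item as `index,score,reason`:\n"
            + "\n".join(lines))
```

With 40 rows per call, 8,300 rows becomes roughly 208 requests instead of 8,300, which changes the runtime picture entirely. The cost is that you need to parse a multi-line response and handle the occasional malformed line.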
Also, instead of updating rows one by one, collect all results into an array and write them at once using “Update Many” or CSV Upload → Replace Sheet.
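The collect-then-write idea, sketched in Python (the column names are placeholders matching the keyword/URL/score data described above):

```python
def to_sheet_rows(results):
    """Collect all scored results into one 2-D array so the sheet
    is written in a single call instead of 8,300 row-by-row updates."""
    header = ["keyword", "url", "score", "analysis"]
    rows = [[r["keyword"], r["url"], r["score"], r["analysis"]]
            for r in results]
    return [header] + rows
```

One bulk write also avoids burning through the Google Sheets API’s own per-minute quota, which row-by-row updates can hit on a dataset this size.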
Finally, try running your batches in parallel using: SplitInBatches → Execute Workflow (fire & forget mode). Each subworkflow will process a different batch at the same time.
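For reference, the splitting step itself is just fixed-size chunking, which is all SplitInBatches does before handing each chunk to a subworkflow run:

```python
def split_in_batches(items, n):
    """Chop the rows into chunks of size n; each chunk would be
    dispatched to its own subworkflow execution."""
    return [items[i:i + n] for i in range(0, len(items), n)]
```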
Here in this example, the Loop Over Items node has a batch size of 100, and in the Basic LLM Chain node the Batch Size is 10 with a 1-second Delay Between Batches.
As I explained earlier, your account limits are the starting point for restructuring your workflow and for setting these variables correctly.
If you set the variables correctly according to your limits and find that you’re not hitting them but still need more speed, at that point you can wrap everything inside the loop in a subworkflow.