Split In Batches (What Is Batched: API Requests or Records?)

Quick question regarding the ‘Split In Batches’ Node

Does it batch incoming API requests or incoming records?

E.g. in my case, a massive number of dynamic requests are being received by a Webhook, and I'm supposed to Append/Update their data to Google Sheets.

Google Sheets has a maximum quota of 300 read/write requests per minute. Would I be right in setting '300' as the value in 'Split In Batches', so that the flow would be:

Webhook > Split In Batches (300) > Wait (1 Minute - loop back to Split)

Or would this just batch 300 records instead?

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Hi @KevinK

Thanks for posting here and welcome to the community! :raised_hands:

The ‘Split in Batches’ node batches the items (records) it receives as input, not the incoming API requests themselves. So depending on how you process your webhook payload (you can also use the ‘Split Out’ node if you need to turn a single payload into individual items), you can split the records into whatever batch size works best.

So in short, yes, this should work. Here’s an example POC:
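
Here's roughly what that looks like as a workflow skeleton. Treat it as a sketch rather than something to import verbatim: the `typeVersion` values, the webhook path, and the Google Sheets parameters are placeholders, and you'd still fill in credentials plus the document/sheet/column mapping in the editor. The idea is that 'Split In Batches' (batch size 300) sends each batch out of its 'loop' output to Google Sheets, the Wait node pauses for one minute, and execution loops back for the next batch; the 'done' output (left unconnected here) fires once all items have been processed.

```json
{
  "name": "Webhook to Sheets, 300 rows per minute",
  "nodes": [
    {
      "name": "Webhook",
      "type": "n8n-nodes-base.webhook",
      "typeVersion": 1,
      "position": [240, 300],
      "parameters": { "httpMethod": "POST", "path": "incoming-records" }
    },
    {
      "name": "Split In Batches",
      "type": "n8n-nodes-base.splitInBatches",
      "typeVersion": 3,
      "position": [460, 300],
      "parameters": { "batchSize": 300 }
    },
    {
      "name": "Google Sheets",
      "type": "n8n-nodes-base.googleSheets",
      "typeVersion": 4,
      "position": [680, 420],
      "parameters": { "operation": "appendOrUpdate" }
    },
    {
      "name": "Wait",
      "type": "n8n-nodes-base.wait",
      "typeVersion": 1,
      "position": [900, 420],
      "parameters": { "amount": 1, "unit": "minutes" }
    }
  ],
  "connections": {
    "Webhook": {
      "main": [[{ "node": "Split In Batches", "type": "main", "index": 0 }]]
    },
    "Split In Batches": {
      "main": [
        [],
        [{ "node": "Google Sheets", "type": "main", "index": 0 }]
      ]
    },
    "Google Sheets": {
      "main": [[{ "node": "Wait", "type": "main", "index": 0 }]]
    },
    "Wait": {
      "main": [[{ "node": "Split In Batches", "type": "main", "index": 0 }]]
    }
  }
}
```

With this shape, each batch of up to 300 items is written to the sheet and then followed by a one-minute pause before the next batch starts, which is what keeps you under the per-minute quota.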

Thanks for the prompt reply, @ria!

So in your provided example, if I have, say, 6,000 API requests coming in at a go, I would be able to post to Google Sheets without hitting the 300-requests-per-minute quota?