The idea is:
Add the batching functionality currently available in the HTTP Request node (batch size and interval controls) as a standard option across all n8n nodes.
My use case:
I constantly find myself creating unnecessarily complex workflows just to implement batching across different node types. The current approach forces me to:
- Add a Loop Over Items (Split in Batches) node before any operation that needs batching
- Insert Wait nodes to control processing speed
- Create conditional logic to manage loop completion
- Use Code nodes to merge fragmented results back together
This pattern must be repeated throughout workflows, creating a maze of nodes that obscures the actual business logic and makes maintenance difficult. What should be simple operations become complex multi-node structures just to implement basic batching functionality.
The lack of consistent batching options across nodes forces users to build these complex workarounds even for simple processes that need controlled execution rates.
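To illustrate what users currently hand-roll, here is a minimal sketch (plain JavaScript, not tied to any actual n8n node API) of the batch-and-pause pattern described above; `batchSize` and `intervalMs` are named after the HTTP Request node's batching options, and `processBatch` is a hypothetical stand-in for whatever operation the node would perform:

```javascript
// Sketch of batched processing with a pause between batches.
// `batchSize` and `intervalMs` mirror the HTTP Request node's batching
// options; `processBatch` is a placeholder for the per-batch operation.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function processInBatches(items, batchSize, intervalMs, processBatch) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    results.push(...(await processBatch(batch)));
    // Pause between batches to respect rate limits (skip after the last one).
    if (i + batchSize < items.length) await sleep(intervalMs);
  }
  return results;
}

// Example: double each number, two items at a time, 10 ms apart.
processInBatches([1, 2, 3, 4, 5], 2, 10, async (batch) =>
  batch.map((n) => n * 2)
).then((out) => console.log(out)); // [ 2, 4, 6, 8, 10 ]
```

If every node exposed batch size and interval settings, this logic (and the Split in Batches / Wait / merge scaffolding around it) would disappear from workflows entirely.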
I think it would be beneficial to add this because:
- Reduces workflow complexity: Currently, creating batched processing requires multiple nodes (Split in Batches + Wait + IF). Universal batching would simplify workflows and make them more maintainable.
- Prevents rate limiting: Many APIs impose rate limits that require controlled request pacing. The HTTP Request node already solves this, but other API nodes lack this functionality.
- Improves performance: Processing large datasets in controlled batches prevents memory issues and resource exhaustion.
Any resources to support this?
- The existing HTTP Request node implementation shows how batching can work effectively
- Users currently employ complex workarounds for batch processing large datasets
Are you willing to work on this?
No, I don't have the skills to implement this myself.