Hello everyone,
I have a daily workflow that fetches around 11,000 items from Google BigQuery and sends this data to Supabase. Currently, I’m using the HTTP Request node with built-in batching (200 items per batch) for sending the data.
However, I’ve noticed that during execution, my VPS CPU easily hits 100%, causing slowdowns and instability.
I’d appreciate your insights on the following:
- Would it be more efficient to loop with the standard n8n “Split in Batches” node and a short wait between requests, instead of relying on the HTTP Request node’s internal batching?
- Has anyone experienced similar issues, or does anyone know why this high CPU usage occurs?
- Is there a recommended configuration for the workflow to reduce CPU usage?
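For context, the alternative I have in mind is roughly the following: chunk the items myself and pause between requests instead of firing all batches back-to-back. This is only a sketch, e.g. for an n8n Code node; `sendBatch`, the batch size, and the pause length are placeholders, not a tested configuration.

```javascript
// Split an array into fixed-size batches.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Send all items in batches, pausing between requests so the VPS
// is not saturated. `sendBatch` is a placeholder for the actual
// Supabase insert call (e.g. an HTTP POST of one batch).
async function sendAll(items, sendBatch, { batchSize = 200, pauseMs = 500 } = {}) {
  const batches = chunk(items, batchSize);
  for (const batch of batches) {
    await sendBatch(batch);
    await sleep(pauseMs); // short interval between requests
  }
  return batches.length; // e.g. 11,000 items / 200 = 55 batches
}
```

With ~11,000 items and 200 per batch this would make 55 sequential requests; the question is whether spacing them out like this actually lowers the CPU load compared to the node’s built-in batching.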
Thanks for your help!