Looping multiple items - High CPU usage when sending data to Supabase using HTTP Request

Hello everyone,

I have a daily workflow that fetches around 11,000 items from Google BigQuery and sends them to Supabase. Currently, I’m using the HTTP Request node with its built-in batching (200 items per batch).
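Conceptually, what that node is doing on each run is something like this TypeScript sketch (the project URL, key, and table name below are placeholders, not my real setup): it slices the rows into 200-item chunks and POSTs each chunk to Supabase’s PostgREST endpoint.

```typescript
// Sketch of what the HTTP Request node's batching amounts to: split the
// rows into 200-item chunks and POST each chunk to Supabase's REST API.
// SUPABASE_URL, SUPABASE_KEY, and your_table are placeholders.

const SUPABASE_URL = "https://your-project.supabase.co"; // placeholder
const SUPABASE_KEY = process.env.SUPABASE_SERVICE_KEY ?? ""; // placeholder

async function insertInBatches(
  rows: Record<string, unknown>[],
  batchSize = 200,
) {
  for (let i = 0; i < rows.length; i += batchSize) {
    const chunk = rows.slice(i, i + batchSize);
    const res = await fetch(`${SUPABASE_URL}/rest/v1/your_table`, {
      method: "POST",
      headers: {
        apikey: SUPABASE_KEY,
        Authorization: `Bearer ${SUPABASE_KEY}`,
        "Content-Type": "application/json",
        Prefer: "return=minimal", // don't echo inserted rows back
      },
      body: JSON.stringify(chunk),
    });
    if (!res.ok) throw new Error(`Batch ${i / batchSize} failed: ${res.status}`);
  }
}
```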

However, I’ve noticed that during execution, my VPS CPU easily hits 100%, causing slowdowns and instability.

I’d appreciate your insights on the following:

  1. Would it be more efficient to use a standard n8n loop (such as the “Split in Batches” node) with short intervals between requests instead of relying on the HTTP Request node’s internal batching? (See the sketch after this list.)
  2. Has anyone experienced similar issues or know why this high CPU usage occurs?
  3. Is there a recommended configuration for the workflow to reduce CPU usage?
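
To make question 1 concrete, the alternative I have in mind would look roughly like this: the same chunked insert, but awaited one batch at a time with a short pause in between (similar to Split in Batches followed by a Wait node). `sendBatch` here is a hypothetical stand-in for the insert call from the earlier sketch.

```typescript
// Sketch of the loop-with-intervals alternative from question 1.
// sendBatch is a hypothetical helper standing in for the chunked insert
// above; the pause gives the CPU and event loop room between requests.

const sleep = (ms: number) =>
  new Promise<void>((resolve) => setTimeout(resolve, ms));

async function insertWithPause(
  rows: Record<string, unknown>[],
  sendBatch: (chunk: Record<string, unknown>[]) => Promise<void>,
  batchSize = 200,
  pauseMs = 500,
) {
  for (let i = 0; i < rows.length; i += batchSize) {
    await sendBatch(rows.slice(i, i + batchSize)); // one request at a time
    await sleep(pauseMs); // short interval between batches
  }
}
```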

Thanks for your help!

  1. I’m unsure in your case.
  2. Yes, sometimes. We’ve solved cases that involved handling millions of rows of data.
    It’s typically related to poor workflow logic or low VPS specs. There are some subtle ways of improving things depending on the circumstances; one hedged example follows below.
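
As one example of such an improvement (whether it applies depends on your workflow): large executions often burn CPU serializing oversized JSON payloads, so trimming each item down to just the columns Supabase needs, before the HTTP Request node runs, can help. A minimal sketch, assuming hypothetical column names:

```typescript
// Minimal trimming step before the insert (TypeScript, in the spirit of
// an n8n Code node). The columns id/name/created_at are hypothetical
// placeholders; keeping only the columns the Supabase table needs makes
// each 200-item batch serialize to a much smaller JSON body.

type Row = Record<string, unknown>;

function trimRows(rows: Row[]): Row[] {
  return rows.map((row) => ({
    id: row.id,
    name: row.name,
    created_at: row.created_at,
  }));
}
```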

Could you provide your workflow? Select it on the n8n canvas with Ctrl+A, copy with Ctrl+C, then click the ‘</>’ button in a forum reply and paste it into the field provided.

And could you provide the VPS specs?
