Optimizing Iteration Through Supabase + Scraping + AI Classification

Hey everyone,
I’m working on an n8n workflow that pulls URLs from Supabase, scrapes each page, runs AI classification on the content, and finally stores the results back in the database.

So far, my setup works like this:

  1. The workflow is triggered manually or via a webhook.
  2. An HTTP Request node sends the current id to a webhook, which retrieves the corresponding URL from Supabase.
  3. After processing, the id is incremented by 1, and another webhook call moves to the next row.
  4. This continues until all URLs have been processed.

This works, but it’s not very efficient: there are too many unnecessary webhook and HTTP Request calls, everything runs sequentially, and it scales poorly with a large number of records. Also, if a URL is invalid, the workflow can get stuck.

I’m considering switching to Loop Over Items, which could make the workflow more streamlined and faster.

New approach:

  1. Supabase Select → Instead of fetching one id at a time, retrieve all unprocessed URLs at once.
  2. Loop Over Items (batch size = 1) → Iterate directly within n8n, no need for webhooks.
  3. Scraping + AI Classification → Analyze the content.
  4. Supabase Insert → Save the results back to the database.
  5. If node → If a URL fails, log it and continue.
  6. Loop continues until all URLs are processed.

I believe this would be more efficient: fewer unnecessary requests, no manual id incrementing, and better scalability. It would also handle failing URLs better by logging them instead of stopping the workflow.
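To make the new approach concrete, here’s a rough standalone TypeScript sketch of the logic (not my actual workflow): the `urls` table, its `processed`/`classification` columns, and `scrapeAndClassify` are all placeholder names I’m using for illustration.

```typescript
// Rough sketch only. Assumes a table named "urls" with columns
// id, url, processed, classification — all invented for this example.
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);

// Hypothetical stand-in for the real scraping + AI classification step.
async function scrapeAndClassify(url: string): Promise<string> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`HTTP ${res.status} for ${url}`);
  const html = await res.text();
  return html.length > 10_000 ? 'long-form' : 'short-form'; // placeholder "classification"
}

async function run() {
  // 1. Fetch all unprocessed rows in one query instead of one webhook call per id.
  const { data: rows, error } = await supabase
    .from('urls')
    .select('id, url')
    .eq('processed', false);
  if (error) throw error;

  // 2. Iterate; log failures and keep going instead of stopping the whole run.
  for (const row of rows ?? []) {
    try {
      const label = await scrapeAndClassify(row.url);
      await supabase
        .from('urls')
        .update({ processed: true, classification: label })
        .eq('id', row.id);
    } catch (err) {
      console.error(`Failed on id=${row.id} (${row.url}):`, err);
      // Mark failed rows as processed too, so the next run doesn't refetch them.
      await supabase
        .from('urls')
        .update({ processed: true, classification: 'error' })
        .eq('id', row.id);
    }
  }
}

run().catch(console.error);
```

The two key points are the single select up front and the try/catch that logs a failure and moves on to the next row.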

My questions:

  1. Are there any downsides to this approach that I should be aware of?
  2. Is there an even better way to iterate through Supabase records in n8n?
  3. What’s the best way to handle scraping failures to prevent the workflow from stopping due to a few broken URLs?

Thanks in advance for any suggestions or insights!

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Instead of looping over the items one at a time, could you do batch processing with parallel execution?
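In plain TypeScript the shape would be roughly this (an illustrative sketch only; the function names and the trivial worker are made up):

```typescript
// Generic batch runner: process `items` in chunks of `batchSize`,
// running each chunk's items in parallel.
async function processInBatches<T, R>(
  items: T[],
  batchSize: number,
  worker: (item: T) => Promise<R>,
): Promise<void> {
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    // allSettled: one failed item doesn't abort the rest of the batch.
    const results = await Promise.allSettled(batch.map(worker));
    results.forEach((result, j) => {
      if (result.status === 'rejected') {
        console.error('Failed:', batch[j], result.reason);
      }
    });
  }
}

// Example usage with a trivial worker.
const urls = ['https://example.com/a', 'https://example.com/b', 'https://example.com/c'];
processInBatches(urls, 2, async (url) => {
  const res = await fetch(url);
  return res.status;
}).then(() => console.log('done'));
```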


That’s a good idea. I’ll look into batch processing with parallel execution. The only challenge I see is that my web scraping API can only handle 3 (or 5) requests at a time, so I need to figure out how to work around that limit.
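For the concurrency limit, my current thinking is a small worker-pool helper along these lines (`mapWithConcurrency` is just a name I made up; a library like `p-limit` would do the same job):

```typescript
// Cap in-flight calls at `limit` (e.g. 3, matching the scraping API's limit).
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  worker: (item: T) => Promise<R>,
): Promise<PromiseSettledResult<R>[]> {
  const results: PromiseSettledResult<R>[] = new Array(items.length);
  let next = 0;

  // Start `limit` workers; each one keeps pulling the next unclaimed index.
  const runners = Array.from({ length: Math.min(limit, items.length) }, async () => {
    while (next < items.length) {
      const i = next++; // synchronous claim, so no two workers take the same index
      try {
        results[i] = { status: 'fulfilled', value: await worker(items[i]) };
      } catch (reason) {
        results[i] = { status: 'rejected', reason };
      }
    }
  });

  await Promise.all(runners);
  return results;
}

// Usage: never more than 3 scrape calls in flight at once.
// mapWithConcurrency(urls, 3, (u) => fetch(u).then((r) => r.text()));
```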

