Automation Running too long


Describe the problem/error/question

Since 12/15 I’ve used up all the Apify credits on my account with the actors in my scraping automation, so I switched to another Apify account to get fresh credits and continue my daily scraping. Ever since I changed accounts, executions have been taking far too long. As you can see from the run times, this has been going for hours without finishing, whereas before the daily run would finish in minutes.

What is the error message (if any)?

Please share your workflow

Share the output returned by the last node

Information on your n8n setup

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Hi, I actually had this exact same issue before, so I think I know what’s happening.

The problem isn’t really about changing your Apify account - it’s most likely your n8n database setup. If you’re using the default SQLite database, that’s the culprit. I went through the same pain where my workflows that used to finish in minutes suddenly started taking hours to complete.

SQLite just can’t handle concurrent executions well, especially when you’re running scraping workflows with Apify actors. It becomes a massive bottleneck because it locks the entire database during writes, so when you have multiple executions or heavy data processing, everything starts queuing up and crawling.

I switched to PostgreSQL and it was like night and day. Everything went back to running smoothly because PostgreSQL is designed for concurrent operations and can handle the load properly. It’s pretty much the recommended setup for production or if you’re running workflows regularly.

The migration isn’t super complicated - you just need to set up a PostgreSQL database and update your n8n environment variables. Fair warning though: your execution data won’t move over automatically, so export your workflows and credentials first and re-import them once the new database is in place.
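For reference, these are the environment variables n8n reads for a PostgreSQL connection. The host, database name, user, and password below are placeholder values; swap in your own:

```shell
# Point n8n at PostgreSQL instead of the default SQLite.
# All values below are examples - replace them with your own.
export DB_TYPE=postgresdb
export DB_POSTGRESDB_HOST=localhost
export DB_POSTGRESDB_PORT=5432
export DB_POSTGRESDB_DATABASE=n8n
export DB_POSTGRESDB_USER=n8n
export DB_POSTGRESDB_PASSWORD=change-me
```

Set these in your `.env` file or pass them as `-e` flags to `docker run`, then restart n8n.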

Also check if your execution history is bloated in the current SQLite database - that can slow things down too.
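On a self-hosted instance you can keep execution history from bloating with n8n’s built-in pruning settings. A sketch with example values (168 hours is 7 days; tune to your volume):

```shell
# Prune old execution data automatically (example values).
export EXECUTIONS_DATA_PRUNE=true
export EXECUTIONS_DATA_MAX_AGE=168        # keep at most 7 days of executions
# Optionally skip saving data for successful runs entirely:
export EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
```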

Hello @wandinie, just want to confirm: I am the one who posted the question above, but using a different account. I am new to databases (SQLite) and am currently researching how to access my workflow’s database in SQLite. Sorry, I’m confused because the steps I’ve been seeing don’t apply to the cloud version of n8n; I can’t see a Database section in the settings. Thanks for your help.

Hello @H_Francisco,

I see the confusion now. Yeah, if you’re using n8n cloud, you won’t be able to access or change the database settings. The cloud version is fully managed by n8n, so you don’t have control over whether it’s using SQLite or PostgreSQL - they handle all that backend stuff for you. That’s actually the limitation with the cloud version.

To fix this performance issue properly, you’ll need to self-host n8n instead so you have full control over your database setup. I’d recommend using Easypanel for this. You can deploy n8n directly on Easypanel, and it makes the whole process way easier.

In Easypanel, you’ll first deploy PostgreSQL from their templates, then deploy n8n and configure it to connect to that PostgreSQL database. It’s much simpler than setting everything up manually because the templates handle most of the configuration for you.

But first - are you familiar with VPS hosting? You’ll need a VPS to run Easypanel on. If you haven’t used a VPS before, it’s basically renting a server where you can host your own applications. Once you have the VPS, you install Easypanel on it, then deploy PostgreSQL and n8n. With this self-hosted setup using PostgreSQL, your scraping workflows will run much smoother without those performance bottlenecks.

@wandinie Sorry, not familiar. But is using Docker also an option? And is there a way I can clean or flush the data that has built up in my cloud automation, like a snippet I can use in my workflow? Thank you so much.

Docker is absolutely an option, and in fact it’s the most common path when people move from n8n Cloud to self-hosting. Running n8n in Docker with PostgreSQL gives you full control over execution behavior, database performance, and cleanup—things you simply can’t manage in the cloud version.

That said, if you stay on n8n Cloud, there is no direct way to “flush” or manipulate the underlying database via code or workflows. You don’t get access to SQLite/Postgres or execution tables. What you can do is reduce load indirectly:

  • Limit execution history retention in workflow settings

  • Disable saving execution data for successful runs

  • Break large scraping workflows into smaller batches

  • Reduce concurrency where possible

These won’t fix a true bottleneck, but they help prevent runaway execution times.

If you self-host with Docker, cleanup becomes straightforward. You can:

  • Prune execution data using built-in n8n CLI commands

  • Schedule cron jobs to delete old executions

  • Tune PostgreSQL (indexes, vacuuming) for long-running workflows
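As one example of the cron approach, here’s a hedged sketch of a cleanup query against n8n’s Postgres schema. The `execution_entity` table and `stoppedAt` column match n8n’s default schema, but verify them against your own database (and take a backup) before scheduling anything:

```shell
# Cleanup query: delete executions older than 7 days.
# Table/column names match n8n's default Postgres schema - verify on your install first.
CLEANUP_SQL='DELETE FROM execution_entity WHERE "stoppedAt" < now() - interval '\''7 days'\'';'

# Hypothetical crontab entry (add via `crontab -e`), running nightly at 03:00:
# 0 3 * * * psql "postgres://n8n:change-me@localhost:5432/n8n" -c "$CLEANUP_SQL"
echo "$CLEANUP_SQL"
```

If you’d rather not touch SQL, the `EXECUTIONS_DATA_PRUNE` environment variables achieve much the same thing without cron.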

In production setups I’ve worked on at LeadsFlex, moving scraping or enrichment workflows to Docker + PostgreSQL was usually the turning point. SQLite works for learning and low volume, but once you introduce concurrency (Apify actors, parallel HTTP calls), it becomes a limiting factor very quickly.

If you’re new to VPS concepts, Docker actually simplifies things—most setups boil down to a docker-compose.yml with n8n and Postgres defined. You don’t need deep database knowledge to get a stable, scalable setup running.
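To make that concrete, here’s a minimal docker-compose.yml sketch with n8n and Postgres defined. Image tags, credentials, and volume names are example choices, not a production-hardened setup:

```shell
# Write a minimal docker-compose.yml for n8n + PostgreSQL.
# Credentials and versions are examples - change them before `docker compose up -d`.
cat > docker-compose.yml <<'EOF'
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: change-me
      POSTGRES_DB: n8n
    volumes:
      - pg_data:/var/lib/postgresql/data

  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
    environment:
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_DATABASE: n8n
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: change-me
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      - postgres

volumes:
  pg_data:
  n8n_data:
EOF
```

Run `docker compose up -d` in the same directory and n8n will come up on port 5678, backed by Postgres.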

In short:

  • Cloud → optimize execution settings only

  • Docker + PostgreSQL → real performance and control

That decision mainly depends on whether this is experimental learning or something you expect to run regularly.