N8n-autoscaling updated for v2.0. Includes Queue Mode, Worker Scaling, Runner Scaling, Cloudflare, etc

Hey Everyone!

Major update on the n8n-autoscaling build - it’s now fully compatible with n8n 2.0!

https://github.com/conor-is-my-name/n8n-autoscaling

What is n8n-autoscaling?

  • It’s a higher-performance version of n8n that runs in Docker and allows far more simultaneous executions than the base build - hundreds or more, depending on your hardware.

  • Includes Puppeteer, Postgres, FFmpeg, and Redis already installed for power users.

  • *Relatively* easy to install - my goal is that it’s no harder to install than the regular version (but the Cloudflare security did add some extra steps).

  • Queue mode built in, webhooks set up for you, secure, and it automatically adds more workers - this build has all the pro-level features.

Who is this n8n build for?

  • Everyone from beginners to experts

  • Anyone who expects to ever need more than 10 executions running at the same time

Simply put, this is a FULL FEATURED pro-level build with an installation that’s as simple as it can be. You just configure your passwords and secrets - that’s it, everything else is done for you.

Why use this instead of the base n8n Docker image?

Whether you’re just getting started or running n8n in production, this build gives you everything you need out of the box:

For beginners:

  • One command install - just docker compose up and you’re running.

  • Cloudflare tunnel pre-configured - secure HTTPS access without messing with ports or SSL certs

  • No Kubernetes, no complex orchestration - just Docker Compose

  • Sensible defaults that just work

  • Step-by-step setup guide in the original post

For power users:

  • Queue mode enabled - offload executions to workers instead of blocking the main instance

  • Auto-scaling workers - handles hundreds of simultaneous executions without manual intervention

  • Puppeteer/Chromium built-in - web scraping from Code nodes, way more reliable than community nodes

  • Postgres with pgvector - ready for AI/embeddings workflows

  • Redis - proper job queue for production workloads

  • FFmpeg, GraphicsMagick, Git - media processing and version control built in

  • External npm packages - AJV, moment, and easy to add your own

The big benefits over base n8n:

| Feature | Base n8n | This Build |
| --- | --- | --- |
| Simultaneous executions | 10 | Hundreds+ |
| Worker scaling | None/Manual | Automatic |
| Puppeteer/Chromium | DIY setup | Pre-configured |
| Queue mode | Manual config | Ready to go |
| HTTPS/Security | DIY | Cloudflare tunnel included |
| Task runners (2.0) | Manual setup | Pre-configured with sidecars |

Tested with hundreds of simultaneous executions on an 8-core 16GB VPS. The autoscaler watches your Redis queue and spins up/down workers automatically based on load.
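The scale decision itself is simple in spirit. Here’s a hypothetical sketch of the kind of rule such an autoscaler applies - the function name, thresholds, and jobs-per-worker ratio are illustrative, not the repo’s actual code:

```javascript
// Hypothetical scaling rule: aim for roughly `jobsPerWorker` queued jobs
// per worker, clamped to a [min, max] replica range. The real autoscaler
// reads the queue length from Redis and scales containers accordingly.
function desiredWorkers(queueLength, { min = 1, max = 10, jobsPerWorker = 20 } = {}) {
  const target = Math.ceil(queueLength / jobsPerWorker);
  return Math.min(max, Math.max(min, target));
}
```

With defaults like these, an empty queue keeps one worker alive, 100 queued jobs would call for 5 workers, and anything past 200 saturates at the max - the actual thresholds live in the repo’s autoscaler config.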

What broke in n8n 2.0?

n8n 2.0 introduced a major breaking change - task runners (the things that execute your Code nodes) are now separate Docker containers running in “external mode.” This broke Puppeteer, broke libraries like AJV, and required a completely different architecture.

What’s new in this update:

  • External task runners - New Dockerfile.runner builds a custom task runner image with Chromium/Puppeteer pre-installed

  • Sidecar architecture - Each worker now gets its own task runner container (1:1 ratio)

  • Autoscaler updated - Scales workers AND their task runners together automatically

  • Puppeteer working again - Had to dig into the n8n source code for this one. The sandbox freezes prototypes by default which breaks puppeteer-core. Fixed with a custom n8n-task-runners.json config file.

  • AJV and other packages - Libraries that use new Function() now work (the default sandbox blocks this)
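In compose terms, the 1:1 sidecar pairing looks roughly like this. This is an illustrative sketch, not the repo’s actual compose file - the N8N_RUNNERS_* env vars and broker port come from n8n’s external-mode docs, the service names from this repo’s conventions:

```yaml
# Illustrative sketch of the worker/runner sidecar pairing.
# Each worker exposes the task broker; its dedicated runner
# (built from Dockerfile.runner) connects back to it.
services:
  n8n-worker:
    image: n8nio/n8n
    command: worker
    environment:
      - N8N_RUNNERS_ENABLED=true
      - N8N_RUNNERS_MODE=external
      - N8N_RUNNERS_AUTH_TOKEN=${RUNNERS_AUTH_TOKEN}
  n8n-worker-runner:
    build:
      dockerfile: Dockerfile.runner
    environment:
      - N8N_RUNNERS_TASK_BROKER_URI=http://n8n-worker:5679
      - N8N_RUNNERS_AUTH_TOKEN=${RUNNERS_AUTH_TOKEN}
```

The autoscaler keeps these two services scaled in lockstep, so every new worker comes up with its own runner.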

The technical details (for those curious):

The n8n task runner sandbox has two security measures that break common packages:

  1. --disallow-code-generation-from-strings - blocks new Function(), which AJV and other packages need

  2. Prototype freezing via Object.freeze() - breaks puppeteer’s error handling
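You can see the second restriction in plain Node, outside n8n entirely. This is an illustrative demo, not puppeteer’s actual code - it just shows that once a prototype is frozen, any library that patches built-ins fails (and in strict mode the failed write throws instead of failing silently):

```javascript
// Demo of why Object.freeze() on prototypes breaks prototype-patching
// libraries: the attempted write to the frozen Error.prototype throws
// a TypeError under strict mode.
function tryPatchFrozenPrototype() {
  'use strict';
  Object.freeze(Error.prototype);
  try {
    // A library attaching a helper onto a built-in prototype:
    Error.prototype.describe = function () { return this.message; };
    return 'patched';
  } catch (e) {
    return e.constructor.name; // 'TypeError' - frozen object, write rejected
  }
}
```

The first restriction is just as easy to reproduce: run node with --disallow-code-generation-from-strings and any new Function('...') call throws an EvalError.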

The fix is a custom n8n-task-runners.json config that gets copied into the runner image. You can also use this file to add your own npm packages.
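For the curious, the launcher config has roughly this shape - abridged and illustrative, so treat the file in the repo (and n8n’s external-runner docs) as authoritative. The important parts are that the runner’s node invocation no longer passes --disallow-code-generation-from-strings, and that the package allow-lists are exposed via the allowed env vars:

```json
{
  "task-runners": [
    {
      "runner-type": "javascript",
      "command": "/usr/local/bin/node",
      "args": ["/usr/local/lib/node_modules/n8n/node_modules/@n8n/task-runner/dist/start.js"],
      "allowed-env": [
        "NODE_FUNCTION_ALLOW_BUILTIN",
        "NODE_FUNCTION_ALLOW_EXTERNAL"
      ]
    }
  ]
}
```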

Adding your own npm packages:

  1. Edit Dockerfile.runner - add your package to the pnpm install

  2. Edit n8n-task-runners.json - add to NODE_FUNCTION_ALLOW_EXTERNAL

  3. Rebuild: docker compose build --no-cache n8n-task-runner n8n-worker-runner
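As a concrete sketch of those three steps - the package name cheerio is just a stand-in, swap in whatever you need:

```dockerfile
# Step 1 - Dockerfile.runner: append your package to the existing install step
RUN pnpm install ajv moment cheerio

# Step 2 - n8n-task-runners.json: extend the allow-list, e.g.
#   NODE_FUNCTION_ALLOW_EXTERNAL=ajv,moment,cheerio

# Step 3 - rebuild the runner images:
#   docker compose build --no-cache n8n-task-runner n8n-worker-runner
```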

Quick start (same as before):

git clone https://github.com/conor-is-my-name/n8n-autoscaling.git
cd n8n-autoscaling
cp .env.example .env
# Edit .env with your settings
docker network create shark
docker compose up -d --build

To update from the previous version:

docker compose down
git pull
docker compose build --no-cache
docker compose up -d

Everything else from the original post still applies - Cloudflare tunnels, Tailscale, Postgres, Redis, the whole setup. Just pull the latest and rebuild.
