Puppeteer in n8n V2 doesn't work anymore

Describe the problem/error/question

Hi! I’m trying to use Puppeteer on n8n 2.3.2+ but I’m running into issues with the Docker hardened images (no apk available).
Working setup on n8n 1.123.5:

  • Install Chromium via apk
  • Install Puppeteer via npm
  • Works perfectly

On n8n 2.3.2+:

  • No package manager available (hardened image)
  • Can’t install system dependencies for Chromium
  • Multi-stage build attempts fail with library incompatibilities

Question: What’s the recommended way to use Puppeteer with n8n 2.x hardened images? Is there official documentation or a workaround?
Thanks!

Please share your workflow

Information on your n8n setup

  • n8n version: 2.3.2
  • Database (default: SQLite): Postgres
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Ubuntu

The workflow with a simple input:

Hi @theo, welcome!
Since v2, n8n has become more security-focused, so you need to make a few adjustments.

Take a look here:

You need to set up a task runner:

Hi @mohamed3nan,

Thanks for your help! I’ve made progress but hit a blocker with the task runner approach.

What I’ve tested:

  1. Works: Restored apk in main n8n image (your method works)

  2. Works: Set up external task runners architecture (n8n 2.3.2 + separate runner container)

  3. Works: Built custom runner image with Chromium + puppeteer

  4. Blocker: The official n8nio/runners:2.3.2 image auto-launches BOTH JavaScript AND Python runners at startup, even when /etc/n8n-task-runners.json only defines JavaScript

Error:

"does not contain requested runner type: python"

The launcher expects both runners to be configured, but I only need JavaScript. Attempts to configure both fail with missing dependencies or path issues.

Question: Has anyone successfully run a custom JavaScript-only task runner with n8n 2.x? Should I build the runner from scratch instead of extending n8nio/runners:2.3.2?

Current setup: Docker on Coolify, external runners mode enabled, WebSocket broker working.

version: '3.8'

services:
  n8n:
    image: n8nio/n8n:2.3.2
    environment:
      - N8N_RUNNERS_ENABLED=true
      - N8N_RUNNERS_MODE=external
      - N8N_RUNNERS_BROKER_LISTEN_ADDRESS=0.0.0.0
      - N8N_RUNNERS_BROKER_PORT=5679
      - N8N_RUNNERS_AUTH_TOKEN=****
    ports:
      - "5678:5678"
      - "5679:5679"

  n8n-task-runner:
    build: ./task-runner
    environment:
      - N8N_RUNNERS_BROKER_URL=ws://n8n:5679
      - N8N_RUNNERS_AUTH_TOKEN=****
    depends_on:
      - n8n

Dockerfile (task-runner):

FROM alpine:3.23 AS builder
RUN apk add --no-cache chromium

FROM n8nio/runners:2.3.2
USER root
COPY --from=builder /usr/bin/chromium* /usr/bin/
COPY --from=builder /usr/lib/chromium /usr/lib/chromium
COPY --from=builder /usr/lib/*.so* /usr/lib/
COPY --from=builder /lib/*.so* /lib/

WORKDIR /opt/runners/task-runner-javascript
RUN corepack pnpm add [email protected]

RUN echo '{"task-runners":[{"runner-type":"javascript"}]}' > /etc/n8n-task-runners.json

USER runner

Thanks! Theo


I haven’t tried that myself tbh, but have you tried removing the python object from the "task-runners" array in the n8n-task-runners.json file?

Yes @mohamed3nan , that’s exactly what I tried first!
My /etc/n8n-task-runners.json only contains JavaScript:

{"task-runners":[{"runner-type":"javascript"}]}

But the issue is that the n8nio/runners:2.3.2 image has a launcher binary (n8n-task-runner-launcher) that automatically tries to start BOTH the JavaScript AND Python runners at container startup, regardless of what's in the JSON config.

So even though my JSON only defines JavaScript, the launcher still expects Python to be configured and fails with:

"does not contain requested runner type: python"

I’ve also tried:

  • Defining both JS + Python in JSON without ports: “health-check-server-port is required with multiple runners”

  • Defining both with ports: “failed to chdir into configured dir”

  • Overriding the container command: breaks the launcher entirely

It seems like the launcher is hardcoded to expect both runner types. That’s why I’m wondering if building from scratch (without using n8nio/runners:2.3.2) is the way to go.

Yeah, I just tested it myself and I’m able to reproduce that error in the runner,

Honestly, idk if there is an “official” solution/workaround/docs for this yet,
I use both JS and Python and hadn’t tried running just one before.

However, I think I found a workaround just now after some testing:
I managed to get it working by tricking the launcher :smiley: I edited the python entry in the config but pointed it at the exact same settings as the JavaScript runner (same command, args, etc.)

Can you try that trick?
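For reference, the trick above would look roughly like this in /etc/n8n-task-runners.json. This is only a sketch: the runner-type key appears in the thread, but the workdir/command/args field names and values are assumptions, so copy those from the javascript entry that ships in the stock n8nio/runners image rather than from here:

```
{
  "task-runners": [
    {
      "runner-type": "javascript",
      "workdir": "/opt/runners/task-runner-javascript",
      "command": "/usr/local/bin/node",
      "args": ["/opt/runners/task-runner-javascript/dist/start.js"]
    },
    {
      "runner-type": "python",
      "workdir": "/opt/runners/task-runner-javascript",
      "command": "/usr/local/bin/node",
      "args": ["/opt/runners/task-runner-javascript/dist/start.js"]
    }
  ]
}
```

The idea is simply that the launcher finds a "python" entry to satisfy its startup check, while both entries actually spawn the JavaScript runner.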

Hi @mohamed3nan

Thanks for the trick! Unfortunately, it didn’t solve the issue. Here’s what we’ve tested:

What worked:

  • :white_check_mark: Duplicating JS config as Python to satisfy the launcher (your workaround)

  • :white_check_mark: Chromium installation via apk restore method

  • :white_check_mark: External task runners setup with WebSocket broker

  • :white_check_mark: Chromium runs successfully when tested directly in the container (chromium-browser --headless --dump-dom works)

What’s blocking: Both Puppeteer (v21.11.0 & v22.15.0) and Playwright (v1.40.0) fail with the same error in the task runner:

TypeError: Cannot assign to read only property 'name' of object 'Error'

This happens because both libraries try to modify Error.prototype.name, which is blocked by the task runner’s strict mode.
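The mechanism can be reproduced outside n8n with a few lines of Node. This is a minimal sketch of the failure mode, not n8n's actual sandbox code: it freezes Error.prototype the way a hardened runtime might, then attempts the kind of assignment Puppeteer/Playwright make at import time. In strict mode, writing to a frozen property throws instead of failing silently:

```javascript
'use strict';

// Simulated hardening (assumption: n8n's runner does something
// comparable): freeze the built-in Error prototype so user code
// and its dependencies cannot mutate it.
Object.freeze(Error.prototype);

try {
  // Roughly what Puppeteer's custom error classes do on import.
  Error.prototype.name = 'CustomError';
} catch (err) {
  // In strict mode this raises a TypeError along the lines of
  // "Cannot assign to read only property 'name' ..."
  console.log(`${err.constructor.name}: ${err.message}`);
}
```

Because the assignment happens while the library is being loaded, patching the prototype from user code afterwards runs too late, which matches what you observed.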

What we tried:

  • Setting N8N_RUNNERS_ALLOW_PROTOTYPE_MUTATION: "true" in env-overrides → no effect

  • Downgrading Puppeteer versions → same error

  • Switching to Playwright → same error

  • Patching prototype in user code → executes too late

The interesting part: Puppeteer/Playwright work perfectly when executed directly with node in the container, but fail when executed through the n8n task runner.

Do you think there’s a solution for headless browser libraries in n8n 2.x task runners? Or perhaps should we open a GitHub issue about N8N_RUNNERS_ALLOW_PROTOTYPE_MUTATION not working as expected, if this makes sense? What do you think?

As a reminder, the setup:

  • n8n 2.3.2 with external task runners

  • Custom runner image with Chromium from Alpine 3.23

  • Docker (through Coolify)

Thanks for your help!


Hmm, honestly I can’t help more than this at the moment, but I’ll keep testing an idea I have in mind. If I find a solution, I’ll post it here.

For now, I’ll tag some friends, @Wouter_Nigrini @solomon @salmansolutions, in case someone has information or a solution about this and can help.

Also, you could definitely open a GitHub issue, it would be better to get a response directly from the team; it might end up being easier than all of this…


Hi @theo,

The only way around all of this is to create your own custom n8n image with Chromium pre-installed.

Lemme try something on my side

Edit:
Ok, so in essence, the solution below still uses the hardened n8n images, so you don’t have to worry about the security behind it.

This works in a 2 step process from the Dockerfile responsible for building your custom n8n image.

Step 1: Use Alpine to build a temporary container for installing Chromium. This is important so we can copy the Chromium libs and executables over to the hardened image.

Step 2: Use the original latest version of the n8n hardened image and copy the chromium files over to it. Use the custom entrypoint file to make sure chromium and n8n can run together in the new image.

Ok here’s how to make it work:

You’ll need to create a folder somewhere and then create the following files in it. Once done, you can run the commands below to build the new image locally and start n8n in queue mode with workers and runners. This is based on the n8n community node linked below; I simply followed the Docker route and made the necessary fixes to the Dockerfile etc. to make it work. Feel free to modify the docker compose to your needs. The only reason I used a different port (3456) is that I already have other n8n instances on my local machine.

Commands:

# Rebuild the image and start the containers
docker-compose up --build -d

# Start the containers (uses the cached image if available)
docker-compose up -d

# Just build the image without starting
docker-compose build

Files to create:

.puppeteerrc.cjs

// Puppeteer configuration for Docker
// This file will be copied to /home/node/.puppeteerrc.cjs in the container
const { join } = require('path');

/**
 * @type {import("puppeteer").Configuration}
 */
module.exports = {
  executablePath: process.env.PUPPETEER_EXECUTABLE_PATH || '/usr/bin/chromium-browser',
  args: [
    '--no-sandbox',
    '--disable-setuid-sandbox',
    '--disable-dev-shm-usage',
    '--disable-gpu',
    '--disable-software-rasterizer',
    '--disable-extensions',
    '--no-first-run',
    '--no-zygote',
    '--single-process'
  ]
};

chromium-wrapper.sh

#!/bin/sh
# Wrapper script to launch Chromium with Docker-friendly flags
exec /usr/lib/chromium/chromium \
  --no-sandbox \
  --disable-setuid-sandbox \
  --disable-dev-shm-usage \
  --disable-gpu \
  --disable-software-rasterizer \
  --no-first-run \
  --no-zygote \
  --single-process \
  "$@"

docker-compose.yml

services:
  n8n-db:
    image: postgres:16.1
    restart: always
    environment:
      - POSTGRES_DB=n8n
      - POSTGRES_PASSWORD=n8n
      - POSTGRES_USER=n8n
    volumes:
      - postgres-data-puppet:/var/lib/postgresql/data

  n8n-redis:
    image: redis:7-alpine
    restart: always
    volumes:
      - redis-data-puppet:/data

  n8n-main:
    build: .
    image: n8n-puppeteer
    restart: always
    depends_on:
      - n8n-db
      - n8n-redis
    volumes:
      - n8n-data-puppet:/home/node/.n8n
    ports:
      - 3456:5678
      # - 5680:5680
    # Add shared memory size for Chromium
    shm_size: "2gb"
    environment:
      # Puppeteer/Chromium configuration for Docker
      - PUPPETEER_ARGS=--no-sandbox --disable-setuid-sandbox --disable-dev-shm-usage --disable-gpu
      - WEBHOOK_URL=http://localhost:3456
      - NODE_ENV=production
      - N8N_HOST=localhost
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - N8N_SECURE_COOKIE=true
      - EXECUTIONS_MODE=queue
      # Task runner configuration for v2 (external mode)
      - N8N_RUNNERS_ENABLED=true
      - N8N_RUNNERS_MODE=external
      - N8N_RUNNERS_BROKER_LISTEN_ADDRESS=0.0.0.0
      - N8N_RUNNERS_AUTH_TOKEN=your-secure-auth-token-change-this
      # Security settings
      - N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=false
      - N8N_BLOCK_ENV_ACCESS_IN_NODE=true
      - N8N_SKIP_AUTH_ON_OAUTH_CALLBACK=false
      # File access restriction
      - N8N_RESTRICT_FILE_ACCESS_TO=/home/node/.n8n-files
      # Binary data configuration (filesystem mode for regular mode)
      - N8N_DEFAULT_BINARY_DATA_MODE=filesystem
      - NODE_FUNCTION_ALLOW_BUILTIN=crypto
      - OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS=true
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_HOST=n8n-db
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_SCHEMA=n8n
      - DB_POSTGRESDB_PASSWORD=n8n
      - DB_POSTGRESDB_POOL_SIZE=40
      - DB_POSTGRESDB_CONNECTION_TIMEOUT=30000
      # Queue mode configuration
      - QUEUE_BULL_REDIS_HOST=n8n-redis
      - QUEUE_BULL_REDIS_PORT=6379
      - QUEUE_BULL_REDIS_DB=0
      #  - N8N_LOG_LEVEL=debug
      - NODES_EXCLUDE="[n8n-nodes-base.localFileTrigger]"
      - N8N_ENCRYPTION_KEY=your-encryption-key-change-this

  n8n-worker:
    build: .
    image: n8n-puppeteer
    restart: always
    command: worker --concurrency=6
    depends_on:
      - n8n-db
      - n8n-redis
      - n8n-worker-task-runner
    # volumes:
    #   - n8n-data-puppet:/home/node/.n8n
    # Add shared memory size for Chromium
    shm_size: "2gb"
    environment:
      # Puppeteer/Chromium configuration for Docker
      - PUPPETEER_ARGS=--no-sandbox --disable-setuid-sandbox --disable-dev-shm-usage --disable-gpu
      - EXECUTIONS_MODE=queue
      - WEBHOOK_URL=http://localhost:3456
      - N8N_HOST=localhost
      - N8N_SKIP_DB_INIT=true
      # Task runner configuration for v2 (external mode)
      - N8N_RUNNERS_ENABLED=true
      - N8N_RUNNERS_MODE=external
      - N8N_RUNNERS_BROKER_LISTEN_ADDRESS=0.0.0.0
      - N8N_RUNNERS_AUTH_TOKEN=your-secure-auth-token-change-this
      - N8N_PROCESS=worker
      # Security settings
      - N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=false
      - N8N_BLOCK_ENV_ACCESS_IN_NODE=true
      # File access restriction
      - N8N_RESTRICT_FILE_ACCESS_TO=/home/node/.n8n-files
      - NODE_FUNCTION_ALLOW_BUILTIN=crypto
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_HOST=n8n-db
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_SCHEMA=n8n
      - DB_POSTGRESDB_PASSWORD=n8n
      - DB_POSTGRESDB_POOL_SIZE=40
      - DB_POSTGRESDB_CONNECTION_TIMEOUT=30000
      # Queue mode configuration
      - QUEUE_BULL_REDIS_HOST=n8n-redis
      - QUEUE_BULL_REDIS_PORT=6379
      - QUEUE_BULL_REDIS_DB=0
      # - N8N_LOG_LEVEL=debug
      - NODES_EXCLUDE="[n8n-nodes-base.localFileTrigger]"
      - N8N_ENCRYPTION_KEY=your-encryption-key-change-this

  # Task runner for n8n-worker with Python support for v2
  n8n-worker-task-runner:
    image: n8nio/runners
    restart: always
    depends_on:
      - n8n-db
      - n8n-redis
    environment:
      # Task runner configuration
      - N8N_RUNNERS_MODE=external
      - N8N_RUNNERS_TASK_BROKER_URI=http://n8n-worker:5679
      - N8N_RUNNERS_AUTH_TOKEN=your-secure-auth-token-change-this
      # Enable Python and JavaScript support
      - N8N_RUNNERS_ENABLED_TASK_TYPES=javascript,python
      # Auto shutdown after 15 seconds of inactivity
      - N8N_RUNNERS_AUTO_SHUTDOWN_TIMEOUT=15
    # volumes:
    #   # Shared volume for file access if needed
    #   - n8n-data-puppet:/home/node/.n8n

volumes:
  postgres-data-puppet:
  redis-data-puppet:
  n8n-data-puppet:

docker-custom-entrypoint.sh

#!/bin/sh

print_banner() {
    echo "----------------------------------------"
    echo "n8n Puppeteer Node - Environment Details"
    echo "----------------------------------------"
    echo "Node.js version: $(node -v)"
    echo "n8n version: $(n8n --version)"

    # Get Chromium version specifically from the path we're using for Puppeteer
    CHROME_VERSION=$("$PUPPETEER_EXECUTABLE_PATH" --version 2>/dev/null || echo "Chromium not found")
    echo "Chromium version: $CHROME_VERSION"

    # Get Puppeteer version if installed
    PUPPETEER_PATH="/opt/n8n-custom-nodes/node_modules/n8n-nodes-puppeteer"
    if [ -f "$PUPPETEER_PATH/package.json" ]; then
        PUPPETEER_VERSION=$(node -p "require('$PUPPETEER_PATH/package.json').version")
        echo "n8n-nodes-puppeteer version: $PUPPETEER_VERSION"

        # Try to resolve puppeteer package from the n8n-nodes-puppeteer directory
        CORE_PUPPETEER_VERSION=$(cd "$PUPPETEER_PATH" && node -e "try { const version = require('puppeteer/package.json').version; console.log(version); } catch(e) { console.log('not found'); }")
        echo "Puppeteer core version: $CORE_PUPPETEER_VERSION"
    else
        echo "n8n-nodes-puppeteer: not installed"
    fi

    echo "Puppeteer executable path: $PUPPETEER_EXECUTABLE_PATH"
    echo "----------------------------------------"
}

# Add custom nodes to the NODE_PATH
if [ -n "$N8N_CUSTOM_EXTENSIONS" ]; then
    export N8N_CUSTOM_EXTENSIONS="/opt/n8n-custom-nodes:${N8N_CUSTOM_EXTENSIONS}"
else
    export N8N_CUSTOM_EXTENSIONS="/opt/n8n-custom-nodes"
fi

# Set default Puppeteer args for Docker if not already set
if [ -z "$PUPPETEER_ARGS" ]; then
    export PUPPETEER_ARGS="--no-sandbox --disable-setuid-sandbox --disable-dev-shm-usage --disable-gpu"
fi

print_banner

echo "Initializing n8n process"

# Detect if running in a container
if [ -f "/.dockerenv" ]; then
    echo "Puppeteer node: Container detected via .dockerenv file"
fi

# Execute the original n8n entrypoint script
exec /docker-entrypoint.sh "$@"

Dockerfile

# Stage 1: Install Chromium and dependencies on a standard Alpine image
FROM alpine:3.22 AS chromium-installer

RUN apk add --no-cache \
    chromium \
    nss \
    glib \
    freetype \
    freetype-dev \
    harfbuzz \
    ca-certificates \
    ttf-freefont \
    udev \
    ttf-liberation \
    font-noto-emoji

# Stage 2: Copy Chromium to n8n image
FROM docker.n8n.io/n8nio/n8n

USER root

# Copy Chromium and all its dependencies from the Alpine image
COPY --from=chromium-installer /usr/lib/chromium/ /usr/lib/chromium/
COPY --from=chromium-installer /usr/bin/chromium-browser /usr/bin/chromium-browser

# Copy ALL libraries from Alpine (including subdirectories) to ensure all dependencies are available
COPY --from=chromium-installer /usr/lib/ /usr/lib/
COPY --from=chromium-installer /lib/ /lib/

# Copy fonts
COPY --from=chromium-installer /usr/share/fonts/ /usr/share/fonts/

# Copy Chromium wrapper script that adds Docker-friendly flags
COPY chromium-wrapper.sh /usr/bin/chromium-wrapper
RUN chmod +x /usr/bin/chromium-wrapper

# Create symlink for chromium (required by chromium-browser wrapper) - point to wrapper
RUN ln -s /usr/bin/chromium-wrapper /usr/bin/chromium

# Tell Puppeteer to use installed Chrome wrapper instead of downloading it
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true \
    PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser

# Install n8n-nodes-puppeteer in a permanent location
RUN mkdir -p /opt/n8n-custom-nodes && \
    cd /opt/n8n-custom-nodes && \
    npm install n8n-nodes-puppeteer && \
    chown -R node:node /opt/n8n-custom-nodes

# Copy our custom entrypoint
COPY docker-custom-entrypoint.sh /docker-custom-entrypoint.sh
RUN chmod +x /docker-custom-entrypoint.sh && \
    chown node:node /docker-custom-entrypoint.sh

# Copy Puppeteer config for Docker-specific launch args
COPY .puppeteerrc.cjs /home/node/.puppeteerrc.cjs
COPY .puppeteerrc.cjs /opt/n8n-custom-nodes/.puppeteerrc.cjs
RUN chown node:node /home/node/.puppeteerrc.cjs /opt/n8n-custom-nodes/.puppeteerrc.cjs

USER node

ENTRYPOINT ["/docker-custom-entrypoint.sh"]

Link to community node:

https://www.npmjs.com/package/n8n-nodes-puppeteer

Once done, you should see the Puppeteer and Chromium versions reported in the logs:

Then from a new workflow, you can search for the Puppeteer node and try to load a webpage:

I see in your original post that you wanted to capture a screenshot. You can use the Get Screenshot operation for this:

Alternatively, you can install Browserless.

I’ve been using it for a while, and it seems to be quite reliable. No problems so far.
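For reference, a Browserless setup can be as small as one extra compose service. This is a hedged sketch: the image tag, TOKEN variable, and port are assumptions based on Browserless's own docs, so double-check them against the current Browserless documentation:

```
services:
  browserless:
    # Browserless v2 Chromium image; verify the current tag
    image: ghcr.io/browserless/chromium
    restart: always
    environment:
      # Auth token required for incoming connections (change it)
      - TOKEN=change-this-token
    ports:
      - "3000:3000"
```

Workflows then connect to the browser over WebSocket (for example ws://browserless:3000?token=...) instead of launching a local Chromium inside the n8n container, which sidesteps the hardened-image restrictions entirely.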

Thank you so much @Wouter_Nigrini

It looks great, though I went with the fix suggested by the n8n team on this ticket, and it works. There was definitely an issue around the “N8N_RUNNERS_ALLOW_PROTOTYPE_MUTATION” flag in n8n v2, and they will adjust the documentation :slight_smile:


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.