PostgreSQL connection validated, but n8n falls back to SQLite in queue mode

I’m running n8n in queue mode (with separate webhook and worker processes) using the 8gears/n8n Helm chart on Kubernetes (GKE). I configured PostgreSQL as the database and validated the connection using a custom Node.js script, which works perfectly. However, n8n seems to be falling back to SQLite, as indicated by the log message:

Scaling mode is not officially supported with sqlite. Please use PostgreSQL instead.

I want to ensure that n8n uses PostgreSQL correctly in queue mode and understand why it might be falling back to SQLite despite the successful PostgreSQL connection.


What is the error message (if any)?

The following message appears in the n8n logs:

Scaling mode is not officially supported with sqlite. Please use PostgreSQL instead.

Please share your workflow

This issue is related to the n8n setup and configuration, not a specific workflow. However, here is the script I used to validate the PostgreSQL connection:

const { Client } = require('pg');

// Required environment variables
const requiredEnvVars = [
    'DB_POSTGRESDB_HOST',
    'DB_POSTGRESDB_USER',
    'DB_POSTGRESDB_PASSWORD',
    'DB_POSTGRESDB_DATABASE'
];

// Validate environment variables
function validateEnvVars() {
    const missingVars = requiredEnvVars.filter(envVar => !process.env[envVar]);

    if (missingVars.length > 0) {
        console.error('Error: The following environment variables are missing:');
        missingVars.forEach(envVar => console.error(`- ${envVar}`));
        process.exit(1);
    }

    console.log('All required environment variables are set.');
}

// Test PostgreSQL connection
async function testPostgresConnection() {
    const client = new Client({
        host: process.env.DB_POSTGRESDB_HOST,
        user: process.env.DB_POSTGRESDB_USER,
        password: process.env.DB_POSTGRESDB_PASSWORD,
        database: process.env.DB_POSTGRESDB_DATABASE,
        port: process.env.DB_POSTGRESDB_PORT ? Number(process.env.DB_POSTGRESDB_PORT) : 5432, // DB_POSTGRESDB_PORT if set, otherwise the PostgreSQL default
    });

    try {
        await client.connect();
        console.log('Successfully connected to PostgreSQL.');

        // Execute a simple query to confirm the connection
        const res = await client.query('SELECT version();');
        console.log('PostgreSQL version:', res.rows[0].version);
    } catch (err) {
        console.error('Error connecting to PostgreSQL:', err);
        process.exit(1);
    } finally {
        await client.end();
    }
}

// Run validations
(async () => {
    validateEnvVars();
    await testPostgresConnection();
})();

Share the output returned by the last node

This issue is not related to a specific workflow or node output. However, here is the output from the validation script:

All required environment variables are set.
Successfully connected to PostgreSQL.
PostgreSQL version: PostgreSQL 15.12 on x86_64-pc-linux-gnu, compiled by Debian clang version 12.0.1, 64-bit

Information on your n8n setup

  • n8n version: 1.84.1
  • Database (default: SQLite): PostgreSQL 15.12
  • n8n EXECUTIONS_PROCESS setting (default: own, main): queue mode (webhook and worker)
  • Running n8n via: Kubernetes (GKE) using the 8gears/n8n Helm chart
  • Operating system: Debian-based container

Environment Variables

Here are the environment variables I’m using in the Helm chart:

env:
  - name: DB_TYPE
    value: "postgresdb"
  - name: DB_POSTGRESDB_HOST
    value: "10.105.145.7"
  - name: DB_POSTGRESDB_USER
    value: "n8n"
  - name: DB_POSTGRESDB_PASSWORD
    value: "MYHsdasaddVqcFdfi8"
  - name: DB_POSTGRESDB_DATABASE
    value: "n8n"

Additional Information

  • I confirmed that the PostgreSQL database is accessible and the required tables (execution_entity, workflow_entity, etc.) are created.
  • The issue persists even after restarting n8n and ensuring all environment variables are correctly set.
  • I’m using the 8gears/n8n Helm chart for deployment, which might have specific configurations or limitations.

Questions for the n8n Team

  1. Are there any additional configurations required to enforce PostgreSQL usage in queue mode when using the 8gears/n8n Helm chart?
  2. Could there be a bug causing n8n to fall back to SQLite even when PostgreSQL is correctly configured?
  3. Are there any known issues with PostgreSQL 15.12 and n8n, especially when deployed via Helm?


Hi @Fabio_Gomes_dos_Sant

You’re seeing the message about SQLite because n8n is falling back to it. This usually happens when not all required DB environment variables are available in the n8n process.

Even if your PostgreSQL connection works in a custom script, n8n runs as separate pods in queue mode—webhook and worker—and both need the full set of DB config variables. If either pod is missing something like DB_TYPE or DB_POSTGRESDB_HOST, n8n will silently default to SQLite.

Check that both webhook and worker have the following env vars set:

DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=10.105.145.7
DB_POSTGRESDB_PORT=5432
DB_POSTGRESDB_USER=n8n
DB_POSTGRESDB_PASSWORD=MYHsdasaddVqcFdfi8
DB_POSTGRESDB_DATABASE=n8n

In the 8gears Helm chart, you need to use webhook.extraEnv and worker.extraEnv to pass these to both pods. Setting them only at a global level may not apply them correctly to each container.
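
For illustration, a minimal values.yaml sketch along these lines could look like the following. The section names (main, webhook, worker) and the exact shape of extraEnv (a map versus a list of name/value pairs) vary between versions of the 8gears chart, so treat the keys below as assumptions to verify against the chart's own values file, not as the definitive schema:

main:
  extraEnv:
    DB_TYPE: "postgresdb"
    DB_POSTGRESDB_HOST: "10.105.145.7"
    DB_POSTGRESDB_PORT: "5432"
    DB_POSTGRESDB_USER: "n8n"
    DB_POSTGRESDB_DATABASE: "n8n"
    # DB_POSTGRESDB_PASSWORD is better injected from a Kubernetes Secret
    # than written here in plain text.

webhook:
  extraEnv:
    DB_TYPE: "postgresdb"
    DB_POSTGRESDB_HOST: "10.105.145.7"
    DB_POSTGRESDB_PORT: "5432"
    DB_POSTGRESDB_USER: "n8n"
    DB_POSTGRESDB_DATABASE: "n8n"

worker:
  extraEnv:
    DB_TYPE: "postgresdb"
    DB_POSTGRESDB_HOST: "10.105.145.7"
    DB_POSTGRESDB_PORT: "5432"
    DB_POSTGRESDB_USER: "n8n"
    DB_POSTGRESDB_DATABASE: "n8n"

Whatever mechanism the chart offers, the goal is that the main, webhook, and worker pods all end up with an identical set of DB_* variables.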

After updating, check the logs. You should see “Started with DB type: postgresdb”. If you still see a warning about SQLite or find database.sqlite inside the pod, the fallback is still happening.

Let me know if you want help editing your Helm values file.

I hope this helps.


Hi @Miquel_Colomer

The variables are present in both pods, the main process pod and the webhook pod. Since I inject these variables through the secret, all pods end up with access to them.

The output in my original message here on the forum comes from running an env command inside the pod. To be completely accurate, only DB_POSTGRESDB_DATABASE isn't present, but according to the documentation its default value is n8n.

Any other ideas?

Yes, you’re right — according to the n8n environment variable documentation, if DB_POSTGRESDB_DATABASE is not set, it defaults to n8n.

However, if you’re running in a Kubernetes environment and noticing issues connecting to the database or inconsistent behavior between the main and webhook pods, it’s still recommended to explicitly define all DB-related env vars, even if they have default values.

Here’s why:

  • Defaults might change between versions.
  • Some orchestration setups (like Helm charts or custom scripts) may override or ignore defaults.
  • Troubleshooting is easier when all critical values are explicit.

✅ Recommendation

Explicitly define these in your secret or config map:

DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=your-db-host
DB_POSTGRESDB_PORT=5432
DB_POSTGRESDB_USER=n8n
DB_POSTGRESDB_PASSWORD=your-password
DB_POSTGRESDB_DATABASE=n8n

Even if DB_POSTGRESDB_DATABASE=n8n is technically defaulted, setting it removes ambiguity across pods.
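
As a concrete sketch of that recommendation, the whole set could live in one Kubernetes Secret that every n8n deployment (main, webhook, worker) pulls in via envFrom. The object name n8n-db-config and the host/password values are placeholders, not values from your cluster:

apiVersion: v1
kind: Secret
metadata:
  name: n8n-db-config   # placeholder name
type: Opaque
stringData:
  DB_TYPE: "postgresdb"
  DB_POSTGRESDB_HOST: "your-db-host"
  DB_POSTGRESDB_PORT: "5432"
  DB_POSTGRESDB_USER: "n8n"
  DB_POSTGRESDB_PASSWORD: "your-password"
  DB_POSTGRESDB_DATABASE: "n8n"

Each deployment would then reference it with the same envFrom / secretRef block you already have in the webhook spec, so every pod type sees identical DB_* values and nothing is left to a default.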

Also, double-check that:

  • Your secret is mounted identically into all pods.
  • Your entrypoint or startup script doesn’t override values silently.
  • Any initContainers or sidecars aren’t altering the runtime environment.

Same behavior, and all the variables are set.
The main process doesn't log the SQLite message, only the webhook pod does.
Any other ideas?

~ $ env |grep -i db
DB_POSTGRESDB_USER=n8n
DB_POSTGRESDB_TYPE=postgresdb
DB_POSTGRESDB_DATABASE=n8
DB_POSTGRESDB_PASSWORD=**********
DB_POSTGRESDB_HOST=10.105.145.7
~ $ 

Kubernetes spec (webhook pod):

    spec:
      containers:
      - args:
        - webhook
        command:
        - n8n
        env:
        - name: EXECUTIONS_MODE
          value: queue
        - name: LOG_LEVEL
          value: debug
        - name: QUEUE_BULL_REDIS_HOST
          value: tool-1-use4-n8n-valkey-primary
        - name: QUEUE_BULL_REDIS_PORT
          value: "6379"
        envFrom:
        - configMapRef:
            name: tool-1-use4-n8n-app-config
        - secretRef:
            name: tool-1-use4-n8n-app-secret
        - configMapRef:
            name: tool-1-use4-n8n-webhook-config
        image: n8nio/n8n:1.84.1
        imagePullPolicy: Always
        lifecycle: {}
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: http
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: n8n-webhook
        ports:
        - containerPort: 5678
          name: http
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: http
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        securityContext: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /home/node/.n8n
          name: data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 1000
        runAsGroup: 1000
        runAsNonRoot: true
        runAsUser: 1000
      serviceAccount: tool-1-use4-n8n
      serviceAccountName: tool-1-use4-n8n
      terminationGracePeriodSeconds: 30
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: tool-1-use4-n8n

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.