Can't launch webhooks, 'unable to find data of execution'

Describe the bug
I tried making a few workflows with a webhook as the starting point. When I execute the workflows manually, everything works. When executing via webhook test mode, everything works too. The issue arises when I switch to production mode: the webhook just doesn’t fire and throws an error. When I check the error under Workflow Executions it shows Failed with: “Workflow Execution finished with an error. Unable to find data of execution “{number}” in database. Aborting execution.” So I can’t even check what’s wrong.

Expected behavior
It should fire with the needed data.

Environment (please complete the following information):

  • OS: Ubuntu Linux 20.04.6 LTS
  • n8n Version 1.8.2
  • Node.js Version v10.19.0
  • Database system postgres 12.16 and redis connected
  • Operation mode queue
  • Installed using docker

Additional context
Postgres volume: /var/lib/postgresql/data
Set up on a subdomain, but didn’t do anything additional with SSL

docker postgres logs:

2023-10-19 17:17:48.012 UTC [40271] STATEMENT: INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
2023-10-19 17:17:48.246 UTC [40274] ERROR: duplicate key value violates unique constraint "pk_workflow_statistics"
2023-10-19 17:17:48.246 UTC [40274] DETAIL: Key ("workflowId", name)=(1SOPOpWbvkYCgKKO, data_loaded) already exists.
2023-10-19 17:17:48.246 UTC [40274] STATEMENT: INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
2023-10-19 17:17:48.424 UTC [40271] ERROR: duplicate key value violates unique constraint "pk_workflow_statistics"
2023-10-19 17:17:48.424 UTC [40271] DETAIL: Key ("workflowId", name)=(1SOPOpWbvkYCgKKO, data_loaded) already exists.
2023-10-19 17:17:48.424 UTC [40271] STATEMENT: INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
2023-10-19 17:17:50.169 UTC [40271] ERROR: duplicate key value violates unique constraint "pk_workflow_statistics"
2023-10-19 17:17:50.169 UTC [40271] DETAIL: Key ("workflowId", name)=(1SOPOpWbvkYCgKKO, data_loaded) already exists.

database records:
n8n_db=# SELECT * FROM public.workflow_statistics WHERE "workflowId" = '1SOPOpWbvkYCgKKO' AND name = 'data_loaded';
 count |       latestEvent       |    name     |    workflowId
-------+-------------------------+-------------+------------------
     1 | 2023-10-15 12:09:25.496 | data_loaded | 1SOPOpWbvkYCgKKO
(1 row)

yml file:

services:
  caddy:
    image: caddy:latest
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - caddy_data:/data
      - ${DATA_FOLDER}/caddy_config:/config
      - ${DATA_FOLDER}/caddy_config/Caddyfile:/etc/caddy/Caddyfile

  n8n:
    image: docker.n8n.io/n8nio/n8n
    restart: always
    ports:
      - 5678:5678
    environment:
      - N8N_HOST=${SUBDOMAIN}.${DOMAIN}
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - NODE_ENV=production
      - WEBHOOK_URL=https://${SUBDOMAIN}.${DOMAIN_NAME}/
      - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_DATABASE=n8n_db
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_USER=n8n_user
      - DB_POSTGRESDB_PASSWORD=my_redacted_password
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
      - N8N_SKIP_WEBHOOK_DEREGISTRATION_SHUTDOWN=true
    volumes:
      - n8n_data:/home/node/.n8n
      - ${DATA_FOLDER}/local_files:/files

  postgres:
    image: postgres:latest
    restart: always
    environment:
      - POSTGRES_DB=n8n_db
      - POSTGRES_USER=n8n_user
      - POSTGRES_PASSWORD=my_redacted_password
    volumes:
      - postgres_data:/var/lib/postgresql/data # This line is for data persistence

  redis:
    image: redis:latest
    restart: always
    ports:
      - "6379:6379"

  n8n_worker:
    image: docker.n8n.io/n8nio/n8n
    restart: always
    command: worker
    environment:
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379

volumes:
  caddy_data:
    external: true
  n8n_data:
    external: true
  postgres_data:

caddy:

myredactedsubdomain.myredacteddomain.com {
    reverse_proxy n8n:5678 {
        flush_interval -1
    }
}

I tried solving this with GPT-4, which had me run the statement below, but still nothing after restarting Docker:
ALTER TABLE public.workflow_statistics DROP CONSTRAINT pk_workflow_statistics;

Hi @Dominic_Jay, welcome to the community. I am sorry you’re having trouble.

The PostgreSQL error duplicate key value violates unique constraint “pk_workflow_statistics” shouldn’t cause the behaviour you have reported, though I very much understand how irritating these are.
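For reference, duplicate-key errors on a counter row like this are usually avoided with an upsert rather than a plain INSERT. A hypothetical upsert version of the failing statement (table and key names taken from the logs above; the values are illustrative placeholders, not what n8n actually binds):

```sql
-- Sketch only: upsert into the statistics table instead of a plain INSERT,
-- so a second event for the same (workflowId, name) increments the counter
-- rather than violating pk_workflow_statistics.
INSERT INTO "public"."workflow_statistics" ("count", "latestEvent", "name", "workflowId")
VALUES (1, NOW(), 'data_loaded', '1SOPOpWbvkYCgKKO')
ON CONFLICT ("workflowId", "name")
DO UPDATE SET "count" = "workflow_statistics"."count" + 1,
              "latestEvent" = EXCLUDED."latestEvent";
```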

As for the actual problem, it seems your n8n main instance is connected to your PostgreSQL database, but no PostgreSQL details are specified for the worker in your docker-compose.yml. So your worker (used for your production executions) would try to read from the default SQLite database (which will of course not have the execution details).

Perhaps you can update your docker compose file to include DB_TYPE, DB_POSTGRESDB_HOST, etc. with the same values your main instance uses?
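As a sketch, the worker service in the compose file above would gain the same database environment block as the main `n8n` service (variable names reused from that file; `${POSTGRES_PASSWORD}` is an assumed .env variable standing in for the redacted password):

```yaml
  n8n_worker:
    image: docker.n8n.io/n8nio/n8n
    restart: always
    command: worker
    environment:
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
      # Same Postgres settings as the main n8n service, so the worker
      # reads execution data from the same database:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_DATABASE=n8n_db
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_USER=n8n_user
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
```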

Just to add I have closed the GitHub issue for this that can be found here: Can't launch webhooks, says unable to find data of execution · Issue #7470 · n8n-io/n8n · GitHub so that we don’t split the focus.


Appreciate it Tom! @MutedJam

Now, while the webhook works, I’m getting credentials issue now.

Problem loading credential

Malformed UTF-8 data

Or that encryption key is different and couldn’t decrypt.

Why is that?

Thanks!


Hi @Dominic_Jay, this sounds like a deployment problem. n8n would generate a new encryption key upon launch if it can’t find an existing encryption key in its .n8n directory.

This key is required to decrypt credentials in your database. Did you by any chance import any data directly into the database or remove your n8n docker volume during the lifecycle of your n8n instance? If so, you might need to re-create the credentials used in your workflow.

With regards to the malformed data it’d be great to have an example. Perhaps you can share the webhook node you’re currently using as well as the request you’re sending to your webhook URL using the cURL format? This would allow me to reproduce your problem.
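A minimal request in cURL format (the webhook path and payload here are placeholders, not values from this thread) would look like:

```shell
# Placeholder example of the requested cURL format:
curl -X POST "https://your-subdomain.your-domain.com/webhook/my-webhook-path" \
  -H "Content-Type: application/json" \
  -d '{"example": "payload"}'
```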

Hi, since the latest version, I am also getting the below error for every single credential, whether accessing them through the “Credentials” page or from Workflows. Not a single workflow requiring credentials will run. Encryption key hasn’t changed, as well as any other parameters between versions.

Problem loading credential

Malformed UTF-8 data

Here is what I am getting from a postgres node trying to access credentials in a workflow:

Error: Malformed UTF-8 data
    at Object.stringify (/usr/local/lib/node_modules/n8n/node_modules/crypto-js/core.js:523:24)
    at WordArray.init.toString (/usr/local/lib/node_modules/n8n/node_modules/crypto-js/core.js:278:38)
    at Cipher.decrypt (/usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/Cipher.js:26:61)
    at Credentials.getData (/usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/Credentials.js:30:43)
    at CredentialsHelper.getDecrypted (/usr/local/lib/node_modules/n8n/dist/CredentialsHelper.js:202:51)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at getCredentials (/usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/NodeExecuteFunctions.js:1260:33)
    at Object.router (/usr/local/lib/node_modules/n8n/node_modules/n8n-nodes-base/dist/nodes/Postgres/v2/actions/router.js:36:25)
    at Workflow.runNode (/usr/local/lib/node_modules/n8n/node_modules/n8n-workflow/dist/Workflow.js:670:19)
    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/WorkflowExecute.js:652:53

This is a high criticality issue


Hi @oiseaudefeu, I am sorry you’re having trouble. Could you please open a new topic for this providing all the information requested in the template along with detailed steps on how to reproduce the problem (especially which services might sit between the machine sending the webhook and n8n)?

Hi Tom @MutedJam !

I’m having the exact same issue tbh.

I also updated (after this error) to AI beta version.

Whenever I create another credential, the issue remains, so just reconnecting all wouldn’t work.
This is the error I’m getting after adding the postgres credentials to n8n worker.

ERROR: Credentials could not be decrypted. The likely reason is that a different “encryptionKey” was used to encrypt the data.

NodeApiError: Credentials could not be decrypted. The likely reason is that a different "encryptionKey" was used to encrypt the data.
    at Object.requestWithAuthentication (/usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/NodeExecuteFunctions.js:1153:19)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at Object.slackApiRequest (/usr/local/lib/node_modules/n8n/node_modules/n8n-nodes-base/dist/nodes/Slack/V2/GenericFunctions.js:33:22)
    at Object.execute (/usr/local/lib/node_modules/n8n/node_modules/n8n-nodes-base/dist/nodes/Slack/V2/SlackV2.node.js:533:44)
    at Workflow.runNode (/usr/local/lib/node_modules/n8n/node_modules/n8n-workflow/dist/Workflow.js:658:19)
    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/WorkflowExecute.js:631:53

Hi @Dominic_Jay, NodeApiError: Credentials could not be decrypted isn’t the same error as reported in the previous post (Malformed UTF-8 data).

As mentioned previously, this problem suggests your n8n instances do not have access to the existing encryption key anymore. I’ve made a few guesses based on the information you have provided, but without exact steps to reproduce your problem I will not be able to confirm this with certainty.

What you might want to check is:

  1. Do all your n8n instances (main and workers) access the same encryption key (see Configuring queue mode | n8n Docs)?
  2. Did you remove your docker volume at any point? This is where the encryption key is stored, if you remove it, n8n would generate a new key and can no longer decrypt existing credentials.
  3. Have you accounted for the user/permissions change introduced with v1?

If you no longer have access to a previously used encryption key you might need to delete and re-create your credentials to resolve the problem. Seeing you have manually modified the database schema, I’d also recommend you start over with a completely fresh n8n instance and database to avoid even more problems down the line. You can export your workflows from the existing instance using the CLI, and then import them on your new instance.
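The CLI export/import could look something like this (container names are illustrative placeholders; adjust paths to taste):

```shell
# Export all workflows from the old instance:
docker exec -it <old-n8n-container> n8n export:workflow --all --output=/home/node/.n8n/workflows.json

# Copy the file to the new host, then import it on the fresh instance:
docker exec -it <new-n8n-container> n8n import:workflow --input=/home/node/.n8n/workflows.json
```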

A new version, [email protected], has been released, which includes GitHub PR 7824.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.