Cannot activate any webhook in Production

These are passed in as environment variables.

Docker inspect results (I replaced anything that looked risky to disclose with XXXX):

"Config": {
            "Hostname": "533137278eb9",
            "Domainname": "",
            "User": "node",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "5678/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "DB_POSTGRESDB_USER=n8n",
                "DB_POSTGRESDB_PASSWORD=XXXXX",
                "QUEUE_BULL_REDIS_HOST=ecXXXXX.amazonaws.com",
                "N8N_ENCRYPTION_KEY=XXXXX",
                "DB_TYPE=postgresdb",
                "DB_POSTGRESDB_HOST=ecXXXXX.amazonaws.com",
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "NODE_VERSION=18.18.2",
                "YARN_VERSION=1.22.19",
                "NODE_ICU_DATA=/usr/local/lib/node_modules/full-icu",
                "N8N_VERSION=1.11.2",
                "NODE_ENV=production",
                "N8N_RELEASE_TYPE=stable"
            ],
            "Cmd": [
                "worker"
            ],
            "Image": "docker.n8n.io/n8nio/n8n",
            "Volumes": null,
            "WorkingDir": "/home/node",
            "Entrypoint": [
                "tini",
                "--",
                "/docker-entrypoint.sh"
            ],
            "OnBuild": null,
            "Labels": {}
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "4f2b7ac6d1b37ef1c215003ec6c5edff2df51b8672676659b9e798bc7bcac015",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "5678/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "5679"
                    },
                    {
                        "HostIp": "::",
                        "HostPort": "5679"
                    }
                ]
            },
            "SandboxKey": "/var/run/docker/netns/4f2b7ac6d1b3",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "4ae2a9bb03699172b8e3f06e155e850dcc3212e26fd95126b0740d84725119ae",
            "Gateway": "172.17.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.2",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "MacAddress": "02:42:ac:11:00:02",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "c29e39813cbc606c76e823735834fcde7de36d1309b3c37166a94b4d0655936f",
                    "EndpointID": "4ae2a9bb03699172b8e3f06e155e850dcc3212e26fd95126b0740d84725119ae",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:02",
                    "DriverOpts": null
                }
            }
        }
    }

That looks a lot better and doesn’t match the command sent :slight_smile: I am not sure why that is failing, as I am using a similar setup (no AWS) and everything works fine for me.

@MutedJam can you see anything I might have missed?


I cannot unfortunately, but the `Unknown column 'ExecutionEntity.data' in 'field list'` error makes me wonder if you somehow ended up with an unexpected database state here @Joachim_Brindeau.

Is this also happening for a completely new Postgres database for you?


We did an entire clean install of everything and the issue remains. :weary:
We did reimport our workflows and credentials, but this is a new workflow.

I am available all day (national holiday here in France) if you want to have a Google Meet @MutedJam.

For clarity, this exact setup used to work flawlessly on version 0.
EDIT: actually no, we used to have two workers, but the fresh install has only one to simplify troubleshooting.

Hi @Joachim_Brindeau, is there a chance at least one of your workers is running a different version of n8n than your main instance? This would explain why n8n is sometimes struggling to find execution data. A while back it was moved out of the execution_entity table into its own database table, so an older n8n version on one of your workers wouldn’t be able to find it anymore.

Can you try pinning a specific version in your configuration files/docker run commands? I.e. instead of running n8nio/n8n, try referencing n8nio/n8n:1.14.2 for both your main instance and all of your workers.
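As a rough sketch, pinning could look like the commands below. The container names, env file, and host port are illustrative placeholders, not taken from your setup; the key point is that the main instance and every worker reference the exact same image tag:

```shell
# Pull one pinned tag and use it everywhere (tag 1.14.2 as suggested above;
# substitute whatever release you actually run).
docker pull docker.n8n.io/n8nio/n8n:1.14.2

# Main instance (container name and env file are illustrative)
docker run -d --name n8n-main \
  --env-file ./n8n.env \
  -p 5679:5678 \
  docker.n8n.io/n8nio/n8n:1.14.2

# Worker: same pinned tag, same env vars, started with the "worker" command
docker run -d --name n8n-worker-1 \
  --env-file ./n8n.env \
  docker.n8n.io/n8nio/n8n:1.14.2 worker
```

With an unpinned `n8nio/n8n` reference, a `docker pull` on one host but not the other can silently leave the main instance and the workers on different releases, which is exactly the mismatch described above.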

I added the column to the table and tested multiple other things. It works now, but I don’t know why.

That’s more or less the message from my colleague who managed to solve the issue. :joy:

Thank you both for your help, everything runs smoothly now.

Hey @Joachim_Brindeau, I am glad to hear this is working, though I am still not sure what exactly caused the problem on your end, or what state your database is in now. I would definitely expect future migration problems in such a scenario.

So I’d recommend you export your credentials and workflows as soon as possible through the CLI, then set up a fresh n8n instance using a fresh Postgres database and import the creds + workflows again. When deploying n8n you want to make sure to pin the version of both your main instance and your worker instances to avoid any inconsistencies, as suggested above.
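A minimal sketch of that export/import round trip, assuming a dockerized setup; the container names (`n8n-main`, `n8n-fresh`) and file paths are illustrative, while the `n8n export:*`/`import:*` subcommands are the CLI commands meant here:

```shell
# Export workflows and credentials from the old instance.
docker exec n8n-main n8n export:workflow --all --output=/home/node/workflows.json
docker exec n8n-main n8n export:credentials --all --output=/home/node/credentials.json
docker cp n8n-main:/home/node/workflows.json .
docker cp n8n-main:/home/node/credentials.json .

# After standing up the fresh instance against a fresh Postgres database,
# copy the files in and import them. Exported credentials stay encrypted,
# so the new instance must use the same N8N_ENCRYPTION_KEY as the old one.
docker cp workflows.json n8n-fresh:/home/node/
docker cp credentials.json n8n-fresh:/home/node/
docker exec n8n-fresh n8n import:workflow --input=/home/node/workflows.json
docker exec n8n-fresh n8n import:credentials --input=/home/node/credentials.json
```

Doing it this way gives you a clean schema created by a single known n8n version, instead of carrying over a database that may have been partially migrated.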


Unfortunately a fresh deployment is exactly what we did, and we still had the issue.
But we didn’t pin the versions, so I’ll keep that in mind for next time. Thank you!