Error in WhatsApp Trigger happening only in local n8n (not in cloud)

Describe the problem/error/question

I have created a minimal workflow to create a WhatsApp chatbot.
This workflow works on the cloud version of n8n but not on my local machine with Docker.
The error happens in the “WhatsApp Trigger” node. There I set the corresponding credential, and a green message appears saying “Connection tested successfully”. However, when I test the step, I get the error “Bad request. Please check your parameters”.

The workflow and configurations are exactly the same in both environments (cloud and locally).

What is the error message (if any)?

Problem running workflow

*Bad request - please check your parameters*

WhatsApp Trigger: Invalid parameter

Please share your workflow

Share the output returned by the last node

The last node doesn’t return anything since the error occurs in the first node.

Does anybody know why it is failing locally? How can I fix it?

Information on your n8n setup

  • n8n version: 1.73.1
  • Database (default: SQLite): SQLite
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: MacOS

I just found a difference between the configuration of the “WhatsApp Trigger” node in the cloud version and my local version. The webhook URLs in the cloud version point to my n8n.cloud site, whereas my local version’s webhooks point to localhost:5678, which presumably won’t work.

Which values should I put there instead of localhost:5678?
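For context, n8n builds the webhook URLs it registers with external services from the WEBHOOK_URL environment variable. A minimal sketch of the relevant settings in a Docker Compose setup, assuming a hypothetical publicly reachable domain n8n.example.com:

```yaml
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    ports:
      - "5678:5678"
    environment:
      # Hypothetical public address; it must be reachable from Meta's servers
      # for the WhatsApp webhook registration to succeed.
      - WEBHOOK_URL=https://n8n.example.com/
      - N8N_HOST=n8n.example.com
      - N8N_PROTOCOL=https
```

With these set, the trigger node displays the public URL instead of localhost:5678.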

The same thing is happening to me

Hi @rmol,

I’m experiencing the same issue as you. I’m testing a WhatsApp Trigger workflow in my local n8n setup before moving to a paid cloud plan. The connection test succeeds (“Connection tested successfully”), but when I try to execute the step, I get the same error:

Bad request - please check your parameters
WhatsApp Trigger: Invalid parameter

I have little technical knowledge in programming, but I’m curious and willing to learn, so some things take me a bit longer to figure out.

  • Did you manage to fix this issue?
  • If so, what was the solution?

Thanks in advance for any help!


I’m looking for an answer too

I’m still having this issue when working with n8n locally with Docker.

What I’ve done is develop this workflow on the cloud platform, where I’m not having this problem.

Regardless, I’d like to know how to fix it properly so any advice is still greatly appreciated.

Triggers will never work on localhost as the external service cannot reach your server. You’d need to set up a tunnel exposing your local machine to the internet. This obviously comes with some risks, so make sure you know what you’re doing :slight_smile:
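As a sketch, two common ways to expose a local n8n instance to the internet (both shown with hypothetical defaults; the built-in tunnel is intended for development only, not production):

```shell
# Option 1: n8n's built-in development tunnel
docker run -it --rm -p 5678:5678 docker.n8n.io/n8nio/n8n start --tunnel

# Option 2: a Cloudflare quick tunnel pointing at the local instance
cloudflared tunnel --url http://localhost:5678
```

Either option prints a public URL on startup, which n8n then uses (or which you set as WEBHOOK_URL) so that Meta’s servers can reach the trigger’s webhook.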


Thanks @bartv !! What you said makes all the sense, I’ll check that link and try it out.


I had the same issue while using a Cloudflare tunnel with HTTPS!

The solution for me was updating the n8n port to 443.
I found this after trying the Telegram trigger as well, which gave me this error: "Bad request - please check your parameters

400 - {"ok":false,"error_code":400,"description":"Bad Request: bad webhook: Webhook can be set up only on ports 80, 88, 443 or 8443"}"

So I set N8N_PORT=443, the tunnel works perfectly now, and the WhatsApp & Telegram trigger nodes work fine!

If I set the value N8N_PORT=443 in the docker-compose.yml, I can no longer access the UI - neither directly nor via the Cloudflare tunnel.

How did you do that?

I’m not using Docker, just a straight deployment on the OS.
The keys I changed to get WhatsApp and Telegram working are:
N8N_HOST=<my Cloudflare domain>
N8N_PORT=443
N8N_PROTOCOL=https

In the logs when you start the application you should see these changes reflected, and it will tell you
your site is accessible via https://<cloudflare-domain>

I think there are more configuration details needed to make sure the container has the Cloudflare host domain, and you need to map Cloudflare to the container host.

If you can share more details, I might be able to help.
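In case it helps, a hypothetical cloudflared config.yml for a named tunnel routing a domain to a local n8n instance (tunnel ID and domain are placeholders):

```yaml
tunnel: <tunnel-id>
credentials-file: /root/.cloudflared/<tunnel-id>.json

ingress:
  # Route the public hostname to the local n8n instance
  - hostname: n8n.example.com
    service: http://localhost:5678
  # Catch-all rule required by cloudflared
  - service: http_status:404
```

With this routing, the tunnel terminates HTTPS at Cloudflare and forwards plain HTTP to localhost:5678, so n8n itself does not necessarily need to listen on 443.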

version: '3.8'

volumes:
  db_storage:
  n8n_storage:
  redis_storage:

x-shared: &shared
  restart: always
  image: docker.n8n.io/n8nio/n8n
  environment:
    - DB_TYPE=postgresdb
    - DB_POSTGRESDB_HOST=postgres
    - DB_POSTGRESDB_PORT=5432
    - DB_POSTGRESDB_DATABASE=${POSTGRES_DB}
    - DB_POSTGRESDB_USER=${POSTGRES_NON_ROOT_USER}
    - DB_POSTGRESDB_PASSWORD=${POSTGRES_NON_ROOT_PASSWORD}
    - EXECUTIONS_MODE=queue
    - QUEUE_BULL_REDIS_HOST=redis
    - QUEUE_HEALTH_CHECK_ACTIVE=true
    - N8N_ENCRYPTION_KEY=${ENCRYPTION_KEY}
    - N8N_SECURE_COOKIE=true
    - N8N_PROTOCOL=https
    - N8N_HOST=https://xxxxx.com
    - WEBHOOK_URL=https://xxxxx.com
    - N8N_PORT=443
    - NODE_ENV=production
    - N8N_TRUSTED_PROXY_IPS=cloudflare
  links:
    - postgres
    - redis
  volumes:
    - n8n_storage:/home/node/.n8n
  depends_on:
    redis:
      condition: service_healthy
    postgres:
      condition: service_healthy

services:
  postgres:
    image: postgres:16
    restart: always
    environment:
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_DB
      - POSTGRES_NON_ROOT_USER
      - POSTGRES_NON_ROOT_PASSWORD
    volumes:
      - db_storage:/var/lib/postgresql/data
      - ./init-data.sh:/docker-entrypoint-initdb.d/init-data.sh
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -h localhost -U ${POSTGRES_USER} -d ${POSTGRES_DB}']
      interval: 5s
      timeout: 5s
      retries: 10

  redis:
    image: redis:6-alpine
    restart: always
    volumes:
      - redis_storage:/data
    healthcheck:
      test: ['CMD', 'redis-cli', 'ping']
      interval: 5s
      timeout: 5s
      retries: 10

  n8n:
    <<: *shared
    ports:
      - "5678:5678"

  n8n-worker:
    <<: *shared
    command: worker
    depends_on:

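Two things stand out in this compose file that may explain the inaccessible UI. First, with N8N_PORT=443 the container listens on 443 internally, so a ports mapping of "5678:5678" no longer reaches it; the mapping has to target the container’s new port. Second, N8N_HOST expects a bare hostname, not a URL with a protocol. A minimal sketch of the corrected fragments (the domain stays the same placeholder as above):

```yaml
# x-shared environment: only the keys that change
- N8N_HOST=xxxxx.com           # hostname only, no https://
- N8N_PROTOCOL=https
- N8N_PORT=443
- WEBHOOK_URL=https://xxxxx.com/

# n8n service: map the host port to the container's new internal port
ports:
  - "443:443"
```

Alternatively, you could leave N8N_PORT at its default of 5678 and have the Cloudflare tunnel forward to http://localhost:5678, keeping WEBHOOK_URL pointed at the public HTTPS domain.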

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.