Telegram trigger node issue – messages stop being delivered after 15-20 minutes

Hey there! I’ve been racking my brain over this.

I have a Telegram trigger node set up to wait for messages from a bot. The workflow is active, and when I send a message to the bot, the trigger fires, and the message gets processed—so everything works as expected… at first.

But after about 15-20 minutes, messages sent to the bot no longer reach n8n, even though the workflow remains active.

When I call the Telegram API (https://api.telegram.org/bot{{token}}/getWebhookInfo), I get this response:
(screenshot of the getWebhookInfo response)
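For reference, the check itself is just a GET request (a minimal sketch; {{token}} stands for the bot token):

# Ask Telegram for the current webhook registration of this bot;
# last_error_date and last_error_message show why deliveries fail.
curl -s "https://api.telegram.org/bot{{token}}/getWebhookInfo"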

Has anyone run into this before? Any ideas what might be causing it?

I’ve had a similar issue with the Webhook node.
I set up a simple Webhook to handle GET requests—it also works initially, but after some time, requests start returning a 404 (“handler not registered”) error.
(screenshot of the Webhook node configuration)
Never seen this before.
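A quick way to watch the failure from outside (a sketch; /webhook/my-test is a hypothetical path, substitute whatever path your Webhook node registers, and use your instance's public URL):

# Call the production webhook directly; while the handler is registered
# this returns the workflow's response, afterwards it returns the 404.
curl -i "https://n8n.mydomain/webhook/my-test"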

Oddly enough, I have another server running the same n8n version in Docker, and it works flawlessly there.

Any suggestions?

Hello @korovaevda,

Seems like a network/deployment issue. How is your n8n configured? Do you have a proxy in front of it?

n8n runs in Docker, with a reverse proxy configured in front of it. I have been running this setup for the last few months and everything was fine. But last week I started to notice that the Telegram bots work for a while and then drop off, as if Telegram itself cannot deliver the webhook to n8n and records a 403 error. I do not understand how this can happen. Why does the bot work normally when the workflow is activated, but after some time n8n stops receiving webhooks? What could be changing during that period?

So I launch the container:

docker run -d --rm --name n8n \
  -p 5678:5678 \
  --dns 8.8.8.8 --dns 1.1.1.1 \
  -e NODE_ENV=production \
  -e N8N_PROTOCOL=https \
  -e N8N_HOST=n8n.mydomain \
  -e WEBHOOK_URL=https://n8n.mydomain \
  -e EXECUTIONS_DATA_PRUNE=true \
  -e EXECUTIONS_DATA_MAX_AGE=168 \
  -e NODE_FUNCTION_ALLOW_BUILTIN=* \
  -e NODE_FUNCTION_ALLOW_EXTERNAL=langchain,mongodb,moment,lodash,bson \
  -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n:1.88.0

These are the nginx configs.

proxy_params file:
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Port $server_port;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Frame-Options SAMEORIGIN;
proxy_buffers 256 16k;
proxy_buffer_size 16k;
proxy_read_timeout 600s;
proxy_buffering off;
proxy_cache off;

n8n_nginx_conf_file:
upstream my-domain {
    server public-ip-address:5678;
}

server {
    server_name my-domain www.my-domain;

    listen public-ip-address:80;
    listen public-ip-address:443 ssl;

    ssl_certificate "/var/www/httpd-cert/n8n_2025-03-16-16-09_53.crt";
    ssl_certificate_key "/var/www/httpd-cert/n8n_2025-03-16-16-09_53.key";

    charset utf-8;
    gzip on;
    gzip_min_length 1024;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/css image/x-ico application/pdf image/jpeg image/png image/gif application/javascript application/x-javascript application/x-pointplus;
    gzip_comp_level 1;

    set $root_path /var/www/data/www/my-domain;

    root $root_path;
    disable_symlinks if_not_owner from=$root_path;

    location / {
        proxy_pass http://my-domain:5678;
        include /etc/nginx/proxy_params;
    }

    location ~* ^.+\.(jpg|jpeg|gif|png|svg|js|css|mp3|ogg|mpeg|avi|zip|gz|bz2|rar|swf|ico|7z|doc|docx|map|ogg|otf|pdf|tff|tif|txt|wav|webp|woff|woff2|xls|xlsx|xml)$ {
        try_files $uri $uri/ @fallback;
    }

    location @fallback {
        proxy_pass http://my-domain:5678;
        include /etc/nginx/proxy_params;
    }
}
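In case it helps narrow things down, the direct path and the proxied path can be compared like this (a sketch; assumes the /healthz endpoint is available on this n8n version):

# Compare a direct hit on the container with a hit through nginx;
# if the first succeeds while the second fails, the proxy is at fault.
curl -i http://public-ip-address:5678/healthz
curl -i https://my-domain/healthz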

I really need your help because I have no idea how to deal with this issue.

I think I figured out what the problem is.
And it is probably another n8n bug.
Here's the gist:
There is a workflow with a Telegram trigger node. The workflow is activated and running.
I copy this node, create a new workflow, and paste the node into it.
I set it up to work with another bot, change the name, then save and activate the second workflow. As a result, in n8n's internal database, the trigger data from the first workflow in the webhook_entity table gets overwritten by the data from the second workflow.
So only the bot from the second workflow keeps working. The first one, alas, has dropped off.

Why a separate webhook identifier is not created when copying/pasting a node is a mystery.
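If you want to verify this on your own instance, here is a minimal sketch for inspecting that table, assuming the default SQLite storage and a container named n8n as above (the column names are an assumption and may differ between n8n versions):

# Copy the database out of the container, then list the registered webhooks.
# Column names are a guess based on my database; adjust for your version.
docker cp n8n:/home/node/.n8n/database.sqlite ./database.sqlite
sqlite3 ./database.sqlite \
  "SELECT webhookId, workflowId, method, webhookPath FROM webhook_entity;"

Per the overwrite described above, only one row remains for the shared webhook ID, pointing at whichever workflow was activated last.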

Hi,
Well, it's kind of obvious that by copying and pasting a webhook you also copy its UUID, which is the unique identifier for each webhook.

After that, it is no surprise that everything gets mixed up internally.

For webhooks/triggers you should always create a fresh one.

To the question "why isn't it generating a new one": well, there are cases where it needs to stay the same.

Imho the documentation needs to be clarified (if it's not already there) and/or the UI needs to notify you.

Regards
J.

Firstly, it is not at all obvious. On one hand there is the concept of a full copy of an object (in effect, a clone); on the other, a new instance: something that inherits certain properties of the parent object but is an independent copy that lives its own life.
Secondly, the UI really should state explicitly when copying that a clone is being created and that certain side effects are possible.


Well, like I said, improvements can be made, but what is a requirement for one person isn't necessary for another. You can always open a feature request for such a feature.

Reg,
J
