Database you’re using (default: SQLite): postgresdb
Running n8n via [Docker, npm, n8n.cloud, desktop app]: Docker (CapRover)
Here is the app log:
2021-11-25T07:37:19.852587802Z ERROR RESPONSE
2021-11-25T07:37:19.853843756Z Error: The execution ID "2" could not be found.
2021-11-25T07:37:19.853871027Z at WaitTrackerClass.stopExecution (/usr/local/lib/node_modules/n8n/dist/src/WaitTracker.js:58:19)
2021-11-25T07:37:19.853875088Z at processTicksAndRejections (internal/process/task_queues.js:93:5)
2021-11-25T07:37:19.853877437Z at async /usr/local/lib/node_modules/n8n/dist/src/Server.js:1407:30
2021-11-25T07:37:19.853879649Z at async /usr/local/lib/node_modules/n8n/dist/src/ResponseHelper.js:86:26
2021-11-25T07:37:42.958672775Z ERROR RESPONSE
2021-11-25T07:37:42.958724414Z Error: The execution ID "3" could not be found.
2021-11-25T07:37:42.958728487Z at WaitTrackerClass.stopExecution (/usr/local/lib/node_modules/n8n/dist/src/WaitTracker.js:58:19)
2021-11-25T07:37:42.958730931Z at processTicksAndRejections (internal/process/task_queues.js:93:5)
2021-11-25T07:37:42.958733476Z at async /usr/local/lib/node_modules/n8n/dist/src/Server.js:1407:30
2021-11-25T07:37:42.958735746Z at async /usr/local/lib/node_modules/n8n/dist/src/ResponseHelper.js:86:26
2021-11-25T07:42:45.261529978Z ERROR RESPONSE
2021-11-25T07:42:45.261805430Z Error: The execution ID "5" could not be found.
2021-11-25T07:42:45.261809846Z at WaitTrackerClass.stopExecution (/usr/local/lib/node_modules/n8n/dist/src/WaitTracker.js:58:19)
2021-11-25T07:42:45.261812230Z at runMicrotasks (<anonymous>)
2021-11-25T07:42:45.261814605Z at processTicksAndRejections (internal/process/task_queues.js:93:5)
2021-11-25T07:42:45.261816747Z at async /usr/local/lib/node_modules/n8n/dist/src/Server.js:1407:30
2021-11-25T07:42:45.261836380Z at async /usr/local/lib/node_modules/n8n/dist/src/ResponseHelper.js:86:26
2021-11-25T07:44:55.483751668Z ERROR RESPONSE
2021-11-25T07:44:55.483974550Z Error: The execution ID "6" could not be found.
2021-11-25T07:44:55.483981510Z at WaitTrackerClass.stopExecution (/usr/local/lib/node_modules/n8n/dist/src/WaitTracker.js:58:19)
2021-11-25T07:44:55.483984293Z at runMicrotasks (<anonymous>)
2021-11-25T07:44:55.483986810Z at processTicksAndRejections (internal/process/task_queues.js:93:5)
2021-11-25T07:44:55.483989056Z at async /usr/local/lib/node_modules/n8n/dist/src/Server.js:1407:30
2021-11-25T07:44:55.483991300Z at async /usr/local/lib/node_modules/n8n/dist/src/ResponseHelper.js:86:26
Hi @Bogdan_Mind, I’m sorry to hear you’re running into trouble. I am not familiar with CapRover, so unfortunately I don’t know exactly what it does when deploying n8n. Does your workflow consist of just the start node? Chances are it had already finished by the time you tried to stop it.
As for Trigger nodes: once you manually execute them, they wait for the defined event to arrive (or until they time out), so seeing this loading screen would be expected.
2021-11-28T14:15:54.660027981Z INFO: Started with migration for wait functionality.
2021-11-28T14:15:54.660031365Z Depending on the number of saved executions, that may take a little bit.
2021-11-28T14:15:54.660034718Z
2021-11-28T14:15:54.660037689Z
2021-11-28T14:15:54.666144859Z Start migration UpdateWorkflowCredentials1630330987096
2021-11-28T14:15:54.674159763Z UpdateWorkflowCredentials1630330987096: 7.869ms
2021-11-28T14:15:54.694999646Z n8n ready on 0.0.0.0, port 5678
2021-11-28T14:15:54.695024530Z Version: 0.151.0
2021-11-28T14:15:54.713491155Z
2021-11-28T14:15:54.713506969Z Editor is now accessible via:
2021-11-28T14:15:54.713510959Z http://localhost:5678/
2021-11-28T14:15:55.281315921Z
2021-11-28T14:15:55.281341502Z Stopping n8n...
I have it deployed via the Docker image in CapRover, and CapRover handles the reverse proxy via nginx (NGINX Config · CapRover).
When I force HTTPS, I get a “too many redirects” error. Without it I can still use HTTPS, but n8n is only partially operable due to mixed content errors.
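Mixed content behind a reverse proxy is often a sign that n8n still believes it is served over plain HTTP and generates `http://` URLs for the frontend. As a sketch (assuming the standard n8n environment variables, TLS terminating at nginx, and a placeholder domain, not this deployment’s real values), the container could be started with:

```shell
# Sketch: run n8n behind a TLS-terminating proxy (domain is a placeholder).
# N8N_PROTOCOL/N8N_HOST tell n8n how clients reach it, so generated URLs use https;
# WEBHOOK_URL sets the public base URL used for webhook endpoints.
docker run -d --name n8n \
  -e N8N_PROTOCOL=https \
  -e N8N_HOST=n8n.example.com \
  -e N8N_PORT=5678 \
  -e WEBHOOK_URL=https://n8n.example.com/ \
  -p 5678:5678 \
  n8nio/n8n
```

In CapRover the same variables would be set in the app’s environment variables panel rather than on a `docker run` command line.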
<%
if (s.forceSsl) {
%>
server {
    listen 80;
    server_name <%-s.publicDomain%>;

    # Used by Lets Encrypt
    location /.well-known/acme-challenge/ {
        root <%-s.staticWebRoot%>;
    }

    # Used by CapRover for health check
    location /.well-known/captain-identifier {
        root <%-s.staticWebRoot%>;
    }

    location / {
        return 302 https://$http_host$request_uri;
    }
}
<%
}
%>

server {
<%
if (!s.forceSsl) {
%>
    listen 80;
<%
}
if (s.hasSsl) {
%>
    listen 443 ssl http2;
    ssl_certificate <%-s.crtPath%>;
    ssl_certificate_key <%-s.keyPath%>;
<%
}
%>

    client_max_body_size 500m;

    server_name <%-s.publicDomain%>;

    # 127.0.0.11 is DNS set up by Docker, see:
    # https://docs.docker.com/engine/userguide/networking/configure-dns/
    # https://github.com/moby/moby/issues/20026
    resolver 127.0.0.11 valid=10s;

    # IMPORTANT!! If you are here from an old thread to set a custom port, you do not need to modify this port manually here!!
    # Simply change the Container HTTP Port from the dashboard HTTP panel
    set $upstream http://<%-s.localDomain%>:<%-s.containerHttpPort%>;

    location / {
<%
    if (s.httpBasicAuthPath) {
%>
        auth_basic "Restricted Access";
        auth_basic_user_file <%-s.httpBasicAuthPath%>;
<%
    }
%>
        proxy_pass $upstream;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
<%
    if (s.websocketSupport) {
%>
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_http_version 1.1;
<%
    }
%>
    }

    # Used by Lets Encrypt
    location /.well-known/acme-challenge/ {
        root <%-s.staticWebRoot%>;
    }

    # Used by CapRover for health check
    location /.well-known/captain-identifier {
        root <%-s.staticWebRoot%>;
    }

    error_page 502 /captain_502_custom_error_page.html;
    location = /captain_502_custom_error_page.html {
        root <%-s.customErrorPagesDirectory%>;
        internal;
    }
}
Other than setting the n8n protocol to HTTP it looks OK, but that nginx file uses a lot of template variables, and it could be one of those not playing ball.
Have you tried a simplified proxy setup? It is tricky to work out what the file actually contains once the dynamic options are resolved.
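To make the suggestion concrete, here is a rough sketch of what the CapRover template above could reduce to with the dynamic options resolved by hand. The domain, certificate paths, and upstream address are placeholders (the `srv-captain--` prefix follows CapRover’s internal service naming convention), not values taken from this deployment:

```nginx
# Simplified static proxy block, assuming TLS terminates here and n8n
# listens on port 5678 inside the Docker network.
server {
    listen 443 ssl http2;
    server_name n8n.example.com;

    ssl_certificate     /path/to/fullchain.pem;
    ssl_certificate_key /path/to/privkey.pem;

    location / {
        proxy_pass http://srv-captain--n8n:5678;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Keep the upgrade headers so n8n's push/long-lived connections
        # survive the proxy hop.
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_http_version 1.1;
    }
}
```

Comparing a minimal config like this against the rendered template can show which of the conditional branches is causing the behavior.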
My update:
I managed to make it work over HTTPS, but I still get freezes after launching any action (while on the HTTPS protocol), as if it can’t communicate with the backend, with mixed content warnings.
Take a look (maybe this behavior of Docker / the application / nginx will tell you something):
To explain: in the application, as usual, I press execute on a node (a GET request, for example) while the HTTPS protocol is enabled, and it hangs as always.
But!! If at that moment you update the nginx settings (pressing Save & Update in the console makes CapRover reload the Docker instance and nginx settings), then the REQUEST WILL BE FULFILLED!)
Then, of course, there is a 502 error for 10-15 seconds while the container loads. And after refreshing the page, executing again does not work.
Do you have any suggestions as to what this might be related to?