Requests stopped working for Telegram and Notion on n8n >= 1.0 hosted on CapRover

Describe the problem/error/question

Hey there. After updating n8n to the newest version I am unable to verify my Telegram and Notion credentials, and the triggers are not working either. Here is the debug log that appears when I try to verify credentials or launch a trigger in a workflow:

2023-09-10T13:47:33.767860185Z 2023-09-10T13:47:33.767Z | error | NodeApiError: Request failed with status code 404 "{ file: 'ErrorReporterProxy.js', function: 'report' }"
2023-09-10T13:47:33.768401198Z 2023-09-10T13:47:33.768Z | error | Error: Request failed with status code 404 "{ file: 'ErrorReporterProxy.js', function: 'report' }"
2023-09-10T13:47:36.735090091Z 2023-09-10T13:47:36.734Z | error | NodeApiError: Request failed with status code 401 "{ file: 'ErrorReporterProxy.js', function: 'report' }"
2023-09-10T13:47:36.735502282Z 2023-09-10T13:47:36.735Z | error | Error: Request failed with status code 401 "{ file: 'ErrorReporterProxy.js', function: 'report' }"

What I have already tried:

  • Setting N8N_USE_DEPRECATED_REQUEST_LIB to true, but I noticed this option was removed in >= 1.0
  • Downgrading to an older version (pre-1.0), but I experienced the same issue as mentioned here: Downgrade n8n version
  • Double-checking my nginx configuration file, but I haven’t found any issues:
<%
if (s.forceSsl) {
%>
    server {

        listen       80;

        server_name  <%-s.publicDomain%>;

        # Used by Lets Encrypt
        location /.well-known/acme-challenge/ {
            root <%-s.staticWebRoot%>;
        }

        # Used by CapRover for health check
        location /.well-known/captain-identifier {
            root <%-s.staticWebRoot%>;
        }

        location / {
            return 302 https://$http_host$request_uri;
        }
    }
<%
}
%>


server {

    <%
    if (!s.forceSsl) {
    %>
        listen       80;
    <%
    }
    if (s.hasSsl) {
    %>
        listen              443 ssl http2;
        ssl_certificate     <%-s.crtPath%>;
        ssl_certificate_key <%-s.keyPath%>;
    <%
    }
    %>

        client_max_body_size 500m;

        server_name  <%-s.publicDomain%>;

        # 127.0.0.11 is DNS set up by Docker, see:
        # https://docs.docker.com/engine/userguide/networking/configure-dns/
        # https://github.com/moby/moby/issues/20026
        resolver 127.0.0.11 valid=10s;
        # IMPORTANT!! If you are here from an old thread to set a custom port, you do not need to modify this port manually here!!
        # Simply change the Container HTTP Port from the dashboard HTTP panel
        set $upstream http://<%-s.localDomain%>:<%-s.containerHttpPort%>;

        location / {
        <%
        if (s.redirectToPath) {
        %>
            return 302 <%-s.redirectToPath%>;
        <%
        } else {
        %>
            <%
            if (s.httpBasicAuthPath) {
            %>
            auth_basic           "Restricted Access";
            auth_basic_user_file <%-s.httpBasicAuthPath%>;
            <%
            }
            %>

            proxy_pass $upstream;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;

            <%
            if (s.websocketSupport) {
            %>
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_http_version 1.1;
            <%
            }
            %>
        <%
        }
        %>
        }

        # Used by Lets Encrypt
        location /.well-known/acme-challenge/ {
            root <%-s.staticWebRoot%>;
        }
        
        # Used by CapRover for health check
        location /.well-known/captain-identifier {
            root <%-s.staticWebRoot%>;
        }

        error_page 502 /captain_502_custom_error_page.html;
        location = /captain_502_custom_error_page.html {
                root <%-s.customErrorPagesDirectory%>;
                internal;
        }
}

I also checked similar topics, but couldn’t find any solution.

Information on your n8n setup

  • n8n version: 1.6.0
  • Database (default: SQLite): Postgres
  • Running n8n via (Docker, npm, n8n cloud, desktop app): CapRover (Docker)
  • Operating system: Ubuntu

Hi @Jane, welcome to the community. I am very sorry you’re having trouble.

Unfortunately I am not able to reproduce this based on the description you have provided and suspect this is related to your specific migration path or environment (I am not familiar with CapRover).

Does this only happen for existing credentials, or does creating new credentials work for you?

If not, perhaps you could simply export your credentials and workflows from your old n8n instance using the CLI, spin up a fresh instance against a clean database (this should rule out any migration issues), and then import the workflows and credentials again?
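For reference, the export/import could look roughly like this (run inside the respective n8n containers; exact flags may vary between versions, so double-check against the n8n CLI documentation for your release):

```shell
# On the OLD instance: export all workflows and credentials to JSON files.
# --decrypted writes credential secrets in plain text, so treat the file
# carefully and delete it once the import is done.
n8n export:workflow --all --output=/tmp/workflows.json
n8n export:credentials --all --decrypted --output=/tmp/credentials.json

# Copy the two files to the NEW instance (e.g. via docker cp), then import:
n8n import:workflow --input=/tmp/workflows.json
n8n import:credentials --input=/tmp/credentials.json
```

Importing into a freshly migrated, clean database avoids carrying over any state from the old instance.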


Installing a clean instance solved the issue, thank you.


Glad to hear, thanks so much for confirming! I’m truly sorry for the trouble you had in the first place.

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.