Repeated 403 errors on all HTTP requests

Describe the problem/error/question

All of my workflows that contain an HTTP Request node are failing, regardless of destination (ERP endpoint, Mistral… anything, really!), and I repeatedly get a 403 status in my console log. Even previously published workflows are now failing because of this problem.

What is the error message (if any)?

Failed to load resource: the server responded with a status of 403 ()

Please share your workflow

ALL nodes that make HTTPS calls are failing

Share the output returned by the last node

Not Applicable

Information on your n8n setup

  • n8n version: 2.6.3
  • Database (default: SQLite): sqlite
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): n8n Cloud
  • Operating system: Windows 10

Hi @MohammadG

Welcome to the n8n community :tada: !

This kind of “everything with HTTP suddenly returns 403” almost never comes from every target API at once; it’s typically something in front of your n8n instance (a proxy, WAF, Cloudflare, etc.) blocking outbound or /rest traffic.

Have you tried restarting the instance?

Hi Tamy, I’m on n8n’s Pro cloud plan. Is it possible to restart the instance myself?

@MohammadG

If you are the owner or workspace sysadmin, yes.
Admin panel access: Cloud admin dashboard

I have restarted the instance but the error is still coming back.

The input payload is this:

{"invoiceNumber":"W12345","invoiceDate":"2025-12-10","shipFromAddress":"123 Google Drive, Sacramento, CA","shipToAddress":"456 Box Drive, San Francisco, CA","lineItemAmount":109.33,"lineNumber":1,"vendorNumber":2335}

It is fed through a Basic LLM Chain that has a system role and a user role; the stringified object above is what the user role sends to the model. The model used is Mistral, and a structured output subnode (JSON schema) is attached.
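For reference, a structured-output schema for this payload might look something like the following (a sketch inferred from the sample object’s field names and types, not the actual subnode configuration):

```javascript
// Hypothetical JSON schema for the structured output subnode,
// derived from the sample payload above. Field names and types
// mirror the stringified object; required fields are assumptions.
const invoiceSchema = {
  type: "object",
  properties: {
    invoiceNumber: { type: "string" },
    invoiceDate: { type: "string", format: "date" },
    shipFromAddress: { type: "string" },
    shipToAddress: { type: "string" },
    lineItemAmount: { type: "number" },
    lineNumber: { type: "integer" },
    vendorNumber: { type: "integer" },
  },
  required: ["invoiceNumber", "invoiceDate", "lineItemAmount"],
};
```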

I’m still getting the 403, but now I am also seeing this (new; I didn’t see it before):
Unexpected HTTP client error: TypeError: Failed to parse URL from [object Request]

Also, the error is coming back ~30 seconds later. Yesterday it was coming back 10+ minutes later.

@MohammadG

From what I understand, n8n is automatically re-executing a workflow that previously failed. During one of the retries, an HTTP-related node ends up receiving a Request object instead of a URL string, which causes the Failed to parse URL error. The recurring 403 appears to be a side effect of the same execution loop, not an authentication issue by itself.

Practical steps to resolve this:

  • Temporarily disable the trigger (Webhook / MCP / Tool) to stop automatic re-executions.
  • Clear any pending or running executions from the Executions view.
  • Run the workflow manually once to validate the behavior.
  • Review any dynamically populated URL fields and ensure they always resolve to a string, not to $json, $request, or a full node output object.
  • If using MCP or Tool nodes, be aware that retries can reuse execution state, so validating inputs on each run is essential.
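To make the URL-validation step concrete: in JavaScript (which n8n expressions and Code nodes use), a non-string value placed in a URL field gets stringified to something like `[object Object]` or `[object Request]`, which cannot be parsed as a URL. A minimal guard might look like this (the `resolveUrl` helper is illustrative, not an n8n built-in):

```javascript
// Illustrative helper (not an n8n API): ensure a dynamically
// populated URL field resolves to a plain string before the
// HTTP client sees it.
function resolveUrl(value) {
  if (typeof value === "string") return value;
  // Common slip: passing a whole item or request object instead
  // of its url field.
  if (value !== null && typeof value === "object" && typeof value.url === "string") {
    return value.url;
  }
  // Anything else stringifies to "[object Object]" (or similar),
  // which is what produces "Failed to parse URL from ..." errors.
  throw new TypeError(
    `Expected a URL string, got ${Object.prototype.toString.call(value)}`
  );
}
```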

If the issue persists after applying the steps above, could you please share a bit more detail so we can narrow it down?

Hi @tamy.santos,

Here are my findings after trying your solution and doing other extensive testing:

  1. The Basic LLM Chain always fails no matter what you give it. It throws the type error even after re-running the workflow, refreshing the page, using a small dataset, etc. I believe it is incompatible with v2 of n8n.

  2. The HTTP node using the same model succeeds for a small list of items (5 items, to be exact) but fails on a larger payload (106 items) due to the default timeout. My guess is that n8n’s WAF is preventing requests beyond some mysterious number from being fulfilled.

I have given up on the Basic LLM Chain because it is clearly not going to work due to incompatibility. However, I have hope for the HTTP node. I have provided the input in my message, what other details would you like me to share?

@MohammadG

It’s not a WAF, nor a hidden limit, nor a problem with Mistral.
It’s concurrency + unstable node + automatic re-execution.

If possible, please review this checklist which follows the official n8n documentation and best practices:

  • Do not use the Basic LLM Chain in this scenario.
  • Use a direct HTTP Request node for the LLM instead.
  • Add Split In Batches before the HTTP Request.
  • Set the batch size to 5 or 10.
  • Increase the HTTP Request timeout.
  • Ensure the URL is always a literal string.
  • Cancel pending executions and disable/re-enable the workflow.
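The batching and timeout steps above can be sketched in plain JavaScript (assumptions: the endpoint URL and payload shape are placeholders; in n8n itself you would use the Split In Batches node and the HTTP Request node’s timeout option rather than hand-rolled code):

```javascript
// Split a list of items into fixed-size batches, as the
// Split In Batches node would (batch size 5 here).
function toBatches(items, batchSize = 5) {
  const batches = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

// POST each batch with an explicit per-request timeout, mirroring
// a raised HTTP Request timeout. Note the URL is a literal string.
async function postInBatches(items, url, timeoutMs = 60000) {
  const results = [];
  for (const batch of toBatches(items)) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), timeoutMs);
    try {
      const res = await fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(batch),
        signal: controller.signal,
      });
      results.push(await res.json());
    } finally {
      clearTimeout(timer);
    }
  }
  return results;
}
```

With a 106-item payload and a batch size of 5, this produces 22 small requests instead of one large one, which keeps each call well under the timeout.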

With this, I’ve reached the end of the possible solutions I know of. I apologize in advance if it doesn’t work, lol. :rofl:


Thank you for the workaround, Tamy!

The n8n mod team should definitely be made aware of the LLM Chain bug, and it would help if they shed some light on the WAF and server limits to explain the hidden cause of these errors.

Not sure how to get their attention to this but I hope they see this (and respond to it!)

You’re welcome, I’m always happy to help.
They are probably aware, but as far as I know, the backlog is big.