OpenAI Chat Model Fetch Error Behind Corporate Proxy

Hello,

I’m integrating OpenAI’s Chat Model into an n8n workflow and have run into an issue I need help with.

Environment Details:

  • The network is set up behind a corporate proxy.
  • I’ve already whitelisted api.openai.com.
  • Credentials have been created successfully, and tests also pass without any issues.
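For reference, here is a quick way to confirm which proxy-related environment variables the n8n process actually sees (a generic Node.js sketch using the conventional variable names, not anything n8n-specific — credential tests can succeed while workflow executions still fail, so it’s worth checking what the runtime itself is configured with):

```javascript
// Print the proxy-related environment variables visible to this process.
// These are the conventional names most HTTP clients look for; an unset
// HTTPS_PROXY here would mean outbound TLS traffic isn't being routed
// through the corporate proxy at all.
const proxyVars = [
  'HTTP_PROXY', 'http_proxy',
  'HTTPS_PROXY', 'https_proxy',
  'NO_PROXY', 'no_proxy',
];

for (const name of proxyVars) {
  const value = process.env[name];
  console.log(`${name}=${value ?? '(not set)'}`);
}
```

Running this inside the same container/pod as n8n (e.g. via `node -e '…'`) shows whether the proxy settings actually reach the process.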

Issue: When I attempt to use the OpenAI Chat Model within a workflow using the Basic LLM Chain node, I receive the following error:

[cause]: TypeError: fetch failed
at node:internal/deps/undici/undici:13185:13
at processTicksAndRejections (node:internal/process/task_queues:95:5) {
[cause]: DOMException [Error]: Request was cancelled.
at new DOMException (node:internal/per_context/domexception:53:5)
at makeAppropriateNetworkError (node:internal/deps/undici/undici:9032:182)
at httpNetworkFetch (node:internal/deps/undici/undici:10742:18)
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at httpNetworkOrCacheFetch (node:internal/deps/undici/undici:10617:33)
at httpFetch (node:internal/deps/undici/undici:10450:37)
at node:internal/deps/undici/undici:10212:20
at mainFetch (node:internal/deps/undici/undici:10202:20) {
cause: [RequestAbortedError]
}
}
}
Error in handler N8nLlmTracing, handleLLMStart: TypeError: fetch failed

Question:

  1. Besides api.openai.com, are there any other domains or endpoints that need to be whitelisted to ensure successful communication with OpenAI’s services?
  2. Has anyone experienced similar issues when using n8n in a proxy-restricted environment, and what additional steps can I take to resolve this?

Any insights or suggestions would be greatly appreciated!

Thank you!

Information on your n8n setup

  • n8n version: 1.72.1
  • Database (default: SQLite): PostgreSQL
  • n8n EXECUTIONS_PROCESS setting (default: own, main): own
  • Running n8n via (Docker, npm, n8n cloud, desktop app): k8s
  • Operating system: Ubuntu 22.04.5 LTS

I’m experiencing an issue with the OpenAI nodes in n8n. The regular OpenAI node works perfectly for my use case, but whenever I use the OpenAI Chat Model node with the Basic LLM Chain node, I hit a timeout error.

Hi @terra_ria,

Thanks for sharing this! It could be some dependency making a call that’s getting blocked by your corporate proxy. Could you ask your IT team to check which outbound connections from the n8n host are being blocked? They should be able to pinpoint exactly what’s causing it.
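One plausible explanation for the node-to-node difference (an assumption on my part, not something confirmed here): the two nodes may use different HTTP clients, and Node’s built-in fetch (undici, which appears in your stack trace) does not read HTTP_PROXY/HTTPS_PROXY on its own, whereas other clients do. It’s also worth checking your NO_PROXY value, since an overly broad entry can silently route api.openai.com around the proxy. A rough, simplified sketch of how conventional NO_PROXY matching works (a hypothetical helper, for illustration only):

```javascript
// Simplified sketch of conventional NO_PROXY matching: entries are
// comma-separated; a leading dot (or, in many implementations, a bare
// domain) also matches subdomains.
function bypassesProxy(hostname, noProxy) {
  if (!noProxy) return false;
  return noProxy
    .split(',')
    .map((entry) => entry.trim().toLowerCase())
    .filter((entry) => entry.length > 0)
    .some((entry) => {
      if (entry === '*') return true; // wildcard: bypass the proxy for everything
      const suffix = entry.startsWith('.') ? entry : `.${entry}`;
      return hostname === entry || hostname.endsWith(suffix);
    });
}

console.log(bypassesProxy('api.openai.com', 'localhost,.internal.corp')); // false
console.log(bypassesProxy('api.openai.com', 'openai.com'));               // true
```

If `bypassesProxy('api.openai.com', process.env.NO_PROXY)` comes back true in your environment, the request would skip the proxy entirely and be blocked at the firewall instead.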


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.