Hello,
I’m currently integrating OpenAI’s Chat Model in my n8n workflow, and I’m encountering an issue that I need your assistance with.
Environment Details:
- The network is set up behind a corporate proxy.
- I’ve already whitelisted api.openai.com.
- Credentials have been created successfully, and the credential test passes without any issues.
Issue: When I attempt to use the OpenAI Chat Model within a workflow (connected to an LLM Chain node), I receive the following error:
[cause]: TypeError: fetch failed
at node:internal/deps/undici/undici:13185:13
at processTicksAndRejections (node:internal/process/task_queues:95:5) {
[cause]: DOMException [Error]: Request was cancelled.
at new DOMException (node:internal/per_context/domexception:53:5)
at makeAppropriateNetworkError (node:internal/deps/undici/undici:9032:182)
at httpNetworkFetch (node:internal/deps/undici/undici:10742:18)
at processTicksAndRejections (node:internal/process/task_queues:95:5)
at httpNetworkOrCacheFetch (node:internal/deps/undici/undici:10617:33)
at httpFetch (node:internal/deps/undici/undici:10450:37)
at node:internal/deps/undici/undici:10212:20
at mainFetch (node:internal/deps/undici/undici:10202:20) {
cause: [RequestAbortedError]
}
}
}
Error in handler N8nLlmTracing, handleLLMStart: TypeError: fetch failed
Questions:
- Besides api.openai.com, are there any other domains or endpoints that need to be whitelisted to ensure successful communication with OpenAI’s services?
- Has anyone experienced similar issues when running n8n in a proxy-restricted environment, and what additional steps can I take to resolve this?
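For context, the proxy variables set on the n8n container in my k8s Deployment look roughly like this (proxy host/port are placeholders, not my real values). I mention them because, as far as I understand, Node’s built-in fetch (undici) does not honor these environment variables by default, which might explain why the credential test succeeds while the workflow call fails:

```yaml
# Hypothetical excerpt from the n8n Deployment spec — host/port are placeholders.
# Note: Node's built-in fetch (undici) does not pick these up by default,
# so they may not be sufficient on their own for the LangChain/OpenAI nodes.
env:
  - name: HTTP_PROXY
    value: "http://proxy.internal:3128"
  - name: HTTPS_PROXY
    value: "http://proxy.internal:3128"
  - name: NO_PROXY
    value: "localhost,127.0.0.1,.svc,.cluster.local"
```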
Any insights or suggestions would be greatly appreciated!
Thank you!
Information on your n8n setup
- n8n version: 1.72.1
- Database (default: SQLite): PostgreSQL
- n8n EXECUTIONS_PROCESS setting (default: own, main): own
- Running n8n via (Docker, npm, n8n cloud, desktop app): k8s
- Operating system: Ubuntu 22.04.5 LTS