AI Agent Error. Process Terminated

Describe the problem/error/question

Hi, I’m using self-hosted n8n. My workflow sends data to ChatGPT and asks for its opinion on each item. I have around 30 items, so I loop 30 times. Everything works perfectly up to item 20, but on the 21st iteration I get this error.

What is the error message (if any)?

TypeError: terminated
    at Fetch.onAborted (node:internal/deps/undici/undici:11132:53)
    at Fetch.emit (node:events:519:28)
    at Fetch.terminate (node:internal/deps/undici/undici:10290:14)
    at Object.onError (node:internal/deps/undici/undici:11253:38)
    at Request.onError (node:internal/deps/undici/undici:2094:31)
    at Object.errorRequest (node:internal/deps/undici/undici:1591:17)
    at TLSSocket.<anonymous> (node:internal/deps/undici/undici:6319:16)
    at TLSSocket.emit (node:events:531:35)
    at node:net:346:12
    at TCP.done (node:_tls_wrap:650:7)

Please share your workflow


Share the output returned by the last node

{
  "errorMessage": "terminated",
  "errorDetails": {},
  "n8nDetails": {
    "n8nVersion": "1.112.6 (Self Hosted)",
    "binaryDataMode": "default",
    "stackTrace": [
      "TypeError: terminated",
      "    at Fetch.onAborted (node:internal/deps/undici/undici:11132:53)",
      "    at Fetch.emit (node:events:519:28)",
      "    at Fetch.terminate (node:internal/deps/undici/undici:10290:14)",
      "    at Object.onError (node:internal/deps/undici/undici:11253:38)",
      "    at Request.onError (node:internal/deps/undici/undici:2094:31)",
      "    at Object.errorRequest (node:internal/deps/undici/undici:1591:17)",
      "    at TLSSocket.<anonymous> (node:internal/deps/undici/undici:6319:16)",
      "    at TLSSocket.emit (node:events:531:35)",
      "    at node:net:346:12",
      "    at TCP.done (node:_tls_wrap:650:7)"
    ]
  }
}

Information on your n8n setup

  • n8n version: 1.112.6
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker self hosted
  • Operating system:

The error is coming from ChatGPT, not n8n.

You need to check your OpenAI logs to see why it shows “terminated”.

Best guess: you hit your OpenAI rate limit.

There are 5 usage tiers for OpenAI API keys, and each tier has RPM (requests per minute) and TPM (tokens per minute) limits.

My guess is you hit the TPM limit, especially if you are on Tier 1 and sending or generating a lot of tokens in a short time.
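If rate limits turn out to be the cause, one common workaround is to add a delay with exponential backoff between retries instead of failing on the 21st call. A minimal sketch, assuming the failing call is just an async function you can wrap (the helpers `delayMs` and `callWithRetry` are illustrative names, not part of n8n or the OpenAI SDK):

```javascript
// Illustrative sketch: retry an async API call with exponential backoff,
// assuming the "terminated" error is caused by an OpenAI RPM/TPM limit.

// Delay grows 1s, 2s, 4s, 8s ... capped at 60s.
function delayMs(attempt, baseMs = 1000, maxMs = 60000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Wrap any async call (e.g. your ChatGPT request) and retry on failure.
async function callWithRetry(fn, maxRetries = 5) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === maxRetries) throw err; // give up after the last retry
      await new Promise((resolve) => setTimeout(resolve, delayMs(attempt)));
    }
  }
}
```

In n8n itself, the simpler equivalent is to put a Wait node inside the loop (a few seconds per iteration), or enable “Retry On Fail” with a wait time on the ChatGPT node, so 30 iterations don’t burst through the per-minute limit.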