My AI agent answers very slowly

Describe the problem/error/question

Hello, my AI agent answers very slowly, and I don't use any proxy.

What is the error message (if any)?

Error in handler N8nLlmTracing, handleLLMEnd: TypeError: fetch failed

Please share your workflow

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 1.78.1
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): npm
  • Operating system: Windows 10

I don't think that's enough information to get help with. Maybe make a screen recording in Loom so we can see what you are talking about.

In my case there is just a timeout of 10-15 seconds, and after that it answers. In the console I see this error:
Error in handler N8nLlmTracing, handleLLMEnd: TypeError: fetch failed

Same here, but with OpenAI. Chat works, but I get a long loading spinner on the AI Agent node (around 20 seconds) and no loading spinner on the OpenAI node. After 20 seconds I do receive a reply. I use OpenAI with an API key, and in other programs it works fast without any delay…

After execution, my OpenAI bubble is not colored green, and inside it there is no output data (although it exists in the AI Agent output).

The issue is probably related to the fact that you're using some "proxy", and that proxy doesn't implement all the required methods, or your network cannot access it for some reason.

The easiest workaround:

just remove N8nLlmTracing from packages/@n8n/nodes-langchain/nodes/llms/LMChatOpenAi/LmChatOpenAi.node.ts.
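For reference, here is a minimal sketch of what that edit amounts to, assuming the node attaches the tracing callback when it constructs the LangChain chat model (the `createModel` helper and its parameters are placeholders for illustration, not the node's actual code):

```typescript
// Simplified sketch of the model construction inside LmChatOpenAi.node.ts.
// The option names here are placeholders; only the callbacks line matters.
import { ChatOpenAI } from '@langchain/openai';

function createModel(apiKey: string, modelName: string): ChatOpenAI {
  return new ChatOpenAI({
    openAIApiKey: apiKey,
    modelName,
    // Before (roughly): callbacks: [new N8nLlmTracing(this)],
    // Workaround: drop the N8nLlmTracing entry (or the whole callbacks
    // option) so handleLLMEnd never runs the fetch that is failing.
    callbacks: [],
  });
}
```

Note that this only disables the built-in LLM tracing for that node; the model calls themselves are unchanged.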


This worked like a charm!

Removing the callbacks sped up every request by about 3 minutes (!).

Why was that causing such a slowdown?

Thank you anyway!

I am using the hosted cloud version of n8n. Can this solution be applied there, or is it only for locally hosted versions?


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.