N8N - Self Hosted - AI Agent Node timeout after 60s

Describe the problem/error/question

Problem in node 'AI Agent1'
The resource you are requesting could not be found

After 60 s, the AI Agent node throws this error. My settings in docker-compose:

  • EXECUTIONS_TIMEOUT=300 # 5 minutes
  • EXECUTIONS_TIMEOUT_MAX=600 # 10 minutes max
  • N8N_DEFAULT_TIMEOUT=120000 # 2 minutes in milliseconds

Setting a higher timeout on the Chat Model node doesn't change anything.
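For context, this is roughly how those variables are set (a sketch of the relevant docker-compose excerpt; the service name and image are assumptions, adjust to your file):

```yaml
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    environment:
      - EXECUTIONS_TIMEOUT=300        # workflow timeout, seconds (5 minutes)
      - EXECUTIONS_TIMEOUT_MAX=600    # upper limit users can set (10 minutes)
      - N8N_DEFAULT_TIMEOUT=120000    # default HTTP timeout, milliseconds
```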

What is the error message (if any)?

Problem in node 'AI Agent1'
The resource you are requesting could not be found

Please share your workflow

Share the output returned by the last node

Problem in node 'AI Agent1'
The resource you are requesting could not be found

Information on your n8n setup

  • n8n version: 2.12.3
  • Database (default: SQLite): Default
  • n8n EXECUTIONS_PROCESS setting (default: own, main): ??
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Debian

Welcome to the n8n community @GudeAndi!
n8n's execution timeout settings control overall workflow runtime, not a provider-specific 60-second request limit, so it does not look like EXECUTIONS_TIMEOUT is what is actually stopping the AI Agent call. I would first upgrade to 2.15 (the current stable version), then check the AI model/provider node and any reverse proxy in front of n8n for a 60 s request timeout and raise it there.

That error is a 404, not a timeout: your local LLM server probably isn't serving the model at the endpoint n8n expects. Check that the base URL in your OpenAI credentials uses the Docker network hostname, and hit /v1/models on your inference server to make sure the model name matches exactly.
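A quick way to check is to query the OpenAI-compatible /v1/models endpoint and compare the returned ids against the model name n8n is sending. A minimal Python sketch (the hostname llamacpp:8080 is an assumed docker-compose service name, not from your setup):

```python
import json
from urllib.request import urlopen

def fetch_model_ids(base_url: str) -> list[str]:
    """Query the OpenAI-compatible /v1/models endpoint and return model ids."""
    with urlopen(f"{base_url}/v1/models") as resp:
        data = json.load(resp)
    return [m["id"] for m in data.get("data", [])]

def model_served(models: list[str], wanted: str) -> bool:
    """True only if the exact model id n8n sends is in the server's list."""
    return wanted in models

# Example (requires the inference server to be reachable from n8n's network):
# ids = fetch_model_ids("http://llamacpp:8080")
# print(model_served(ids, "gemma-4-E4B-it-UD-Q5_K_XL.gguf"))
```

If the exact-match check fails, fix the model name in the credentials/node rather than the timeout settings.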

Hi @GudeAndi, welcome!
I saw your OpenAI node and it says gemma-4-E4B-it-UD-Q5_K_XL.gguf. I think you should consider an inference provider like Groq or OpenRouter, then let me know if the issue persists.


Do you happen to be running task-runners?

Any errors?


That's what n8n says; I'm not interpreting anything. Also, every conversation that completes within 60 s works fine. The model doesn't matter.

I'm sorry, what are task runners? I added a model node to my AI Agent node, pointed it at my llama.cpp server, and let it run. Now that I think about it, maybe the timeout is coming from nginx. Thanks for the idea; I'll take a look tomorrow.

How do you run n8n? Are you using Docker?

What does typing docker ps on your command line show?

As stated in the description, I'm using Docker.

Thanks for the hints. The issue was the default nginx timeout. Problem solved.
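For anyone hitting the same wall: nginx's proxy read timeout defaults to 60 s, which matches the symptom exactly. A sketch of the relevant location block (the upstream name and port are assumptions, adjust to your proxy config):

```nginx
location / {
    proxy_pass http://n8n:5678;   # assumed upstream service name and port
    proxy_read_timeout 300s;      # default is 60s; raise past your longest LLM call
    proxy_send_timeout 300s;
    # n8n's UI uses websockets, so keep the upgrade headers if you proxy it:
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```

Reload nginx after the change (e.g. nginx -s reload) for it to take effect.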
