Problem in node ‘AI Agent1’
The resource you are requesting could not be found
After 60 s the AI Agent node throws this error. Settings in docker-compose:

```
EXECUTIONS_TIMEOUT=300        # 5 minutes
EXECUTIONS_TIMEOUT_MAX=600    # 10 minutes max
N8N_DEFAULT_TIMEOUT=120000    # 2 minutes in milliseconds
```
Setting a higher timeout on the Chat Model node doesn’t change anything.
What is the error message (if any)?
Problem in node ‘AI Agent1’
The resource you are requesting could not be found
Please share your workflow
Share the output returned by the last node
Problem in node ‘AI Agent1’
The resource you are requesting could not be found
Welcome to the n8n community, @GudeAndi!
n8n’s execution timeout settings control overall workflow runtime, not a provider-specific 60-second request limit, so this does not look like EXECUTIONS_TIMEOUT is the setting actually stopping the AI Agent call. I would first upgrade to 2.15 (the stable version), then check the AI model/provider node and any reverse proxy in front of it for a 60 s request timeout and increase it there.
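If there is an nginx reverse proxy in front of the inference server, note that its default `proxy_read_timeout` is 60 s, which matches the cutoff you are seeing. A sketch of what raising it could look like (the upstream name and values are placeholders, adjust to your setup):

```nginx
location / {
    proxy_pass http://llama-cpp:8080;  # placeholder upstream name

    # Allow slow LLM responses; nginx defaults all three to 60s.
    proxy_read_timeout 300s;
    proxy_send_timeout 300s;
    proxy_connect_timeout 75s;
}
```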
That error is a 404, not a timeout: your local LLM server probably isn’t serving the model at the endpoint n8n expects. Check that the base URL in your OpenAI credentials uses the Docker network hostname, and hit /v1/models on your inference server to make sure the model name matches exactly.
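To check the name, you can compare the model id configured in n8n against what the server reports. A minimal sketch of that comparison, using a made-up sample response in the OpenAI-compatible format (query `http://<host>:<port>/v1/models` on your own server for the real list):

```python
import json

# Sample /v1/models response body (illustrative, not from a real server).
sample_response = json.dumps({
    "object": "list",
    "data": [
        {"id": "gemma-4-E4B-it-UD-Q5_K_XL.gguf", "object": "model"},
    ],
})

def model_is_served(response_body: str, model_id: str) -> bool:
    """Return True if model_id appears in a /v1/models response body."""
    models = json.loads(response_body).get("data", [])
    return any(m.get("id") == model_id for m in models)

# The id must match exactly, including case and the .gguf suffix.
print(model_is_served(sample_response, "gemma-4-E4B-it-UD-Q5_K_XL.gguf"))  # True
print(model_is_served(sample_response, "gemma-4-e4b"))                     # False
```

If the configured id is not in the list, the server answers with a 404-style "model not found" response, which n8n surfaces as the error above.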
Hi @GudeAndi Welcome!
I saw your OpenAI node and it says gemma-4-E4B-it-UD-Q5_K_XL.gguf. I think you should consider an inference provider like Groq or OpenRouter, and then let me know if the issue persists.
I’m sorry, what are task runners? I added a model node to my AI Agent node, pointed it at my llama.cpp server, and let it run. Now that I think about it, maybe the timeout is coming from nginx. Thanks for the idea, I’ll take a look tomorrow.