Describe the problem/error/question
I have been experiencing this issue on cloud for a couple of days now. I am not running anything excessive and I do not save any executions. A few days ago I deleted the workflows that downloaded binary files, which I had thought caused this, and I have deleted 80% of the executions on my instance. But a few days later, even running this workflow returns that error. I tried to look into manual data pruning but couldn't find anything I could do. When I run it manually, it works fine.
What is the error message (if any)?
n8n may have run out of memory while running this execution.
My workflow
Information on your n8n setup
- n8n version: [email protected]
- n8n EXECUTIONS_PROCESS setting (default: own, main): do not save
- Running n8n via (Docker, npm, n8n cloud, desktop app): n8n cloud
- Operating system: windows 11
Hello @AutoAge_AI,
You're likely hitting memory issues because the agent loads multiple tools on every execution. Even if the flow looks small, each run is heavy.
Here are some options, IMO, to improve it:
- Use a sub-workflow for vector retrieval: move navigation_kb and content_kb into a separate workflow that just returns cleaned-up text (first Code node sketch below). That way the agent handles less logic directly.
- Avoid recalculating embeddings: if your documents don’t change often, store pre-generated embeddings instead of embedding each time.
- Make Tavily optional: trigger Tavily only when it's needed (e.g. the user query asks for news or external updates), not on every run (second sketch below).
- Skip the agent-with-tools setup: instead of having the AI agent call tools dynamically, do each step manually (third sketch below):
  - run the vector searches and the optional web lookup first,
  - then combine the results and your system instructions into one clean GPT-4 call.
  This gives you full control, uses less memory, and avoids the dynamic tool overhead.
- Watch token limits: some of these tools return huge chunks of text. Add length checks or trim before feeding them into the model (last sketch below).
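
For the retrieval sub-workflow, here is a minimal Code node sketch ("Run Once for All Items") placed right after the vector store node. It assumes the store outputs a LangChain-style `document.pageContent` field; check what your node actually returns and rename accordingly:

```js
// n8n Code node ("Run Once for All Items") inside the retrieval sub-workflow.
// `document.pageContent` is an assumption (LangChain-style output); adjust to
// whatever your vector store node actually returns.
const MAX_CHARS_PER_CHUNK = 1500; // assumed budget, tune for your model

return $input.all().map((item) => {
  const doc = item.json.document ?? item.json;   // fall back to a flat item
  const text = String(doc.pageContent ?? doc.text ?? '')
    .replace(/\s+/g, ' ')                        // collapse whitespace
    .trim()
    .slice(0, MAX_CHARS_PER_CHUNK);              // hard per-chunk cap
  return { json: { text } };
});
```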
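To make Tavily conditional, set a flag in a Code node and route on it with an IF node so the web search branch only runs when the flag is true. The keyword list below is just a placeholder heuristic, and `chatInput` assumes the chat trigger; swap in whatever field your trigger actually provides:

```js
// n8n Code node ("Run Once for Each Item") placed before an IF node.
// The keyword list is a naive placeholder; replace it with whatever signal
// actually means "needs live web data" in your use case.
const query = String($json.chatInput ?? $json.query ?? '').toLowerCase();

const webHints = ['news', 'latest', 'today', 'current', 'price'];
const needsWebSearch = webHints.some((hint) => query.includes(hint));

return { json: { ...$json, needsWebSearch } };
```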
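If you go the manual route, a Code node can merge the retrieval output and the optional web result into one prompt for a single chat-model call afterwards. The node names (`navigation_kb`, `content_kb`, `Tavily`) and field names are assumptions based on the names above, so rename them to match your workflow:

```js
// n8n Code node ("Run Once for All Items") that builds one prompt from the
// earlier retrieval and web-search nodes. Node and field names are assumed.
const kbChunks = [...$('navigation_kb').all(), ...$('content_kb').all()]
  .map((item) => String(item.json.text ?? ''));

let webResult = '';
try {
  webResult = String($('Tavily').first().json.answer ?? '');
} catch (e) {
  // The Tavily branch was skipped; that's fine, it is optional.
}

const systemPrompt =
  'You are a helpful assistant. Answer using only the context below.';
const context = [...kbChunks, webResult].filter(Boolean).join('\n---\n');

return [
  {
    json: {
      systemPrompt,
      prompt: `Context:\n${context}\n\nQuestion: ${$input.first().json.chatInput ?? ''}`,
    },
  },
];
```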
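And for the token-limit point, a simple total character budget before the model call already helps a lot. The 8000-character figure is just an assumption; size it to your model's context window:

```js
// n8n Code node ("Run Once for All Items") that enforces a total character
// budget across all chunks before they reach the model. 8000 is an assumed
// figure; size it to your model's context window.
const TOTAL_BUDGET = 8000;

let used = 0;
const kept = [];

for (const item of $input.all()) {
  const text = String(item.json.text ?? '');
  if (used + text.length > TOTAL_BUDGET) break; // drop everything past the budget
  used += text.length;
  kept.push({ json: { text } });
}

return kept;
```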