Hello!
I’ve noticed that data can get truncated when using an AI Agent. For example, if a tool returns a lot of JSON, the agent might see only a handful of the records, not all of them. This isn’t surprising, and I’ve found ways to work around it.
However, I’d love to know what’s actually causing the truncation. Does the Window Buffer Memory node impose a maximum size on each window of information? Or, when a tool returns JSON to the AI Agent, is there a limit on the size of the data that’s passed back?
This question is not related to a specific workflow. I’m writing a tutorial on how to use document storage (MongoDB) to circumvent such limitations, and I want to get my facts right.
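For context, the pattern I’m documenting in the tutorial boils down to: have the tool persist the full result set to document storage and hand the agent only a reference plus a small sample, so the bulk of the data never passes through the model’s context. A minimal sketch in plain Python (an in-memory dict stands in for the MongoDB collection here, and names like `store_and_summarize` are made up for illustration, not actual n8n or MongoDB APIs):

```python
import uuid

# Stand-in for a MongoDB collection; a real tutorial would use pymongo.
collection = {}

def store_and_summarize(records, sample_size=3):
    """Persist the full result set and return only a compact reference
    plus a small sample for the agent's context window."""
    doc_id = str(uuid.uuid4())
    collection[doc_id] = records
    return {
        "document_id": doc_id,
        "total_records": len(records),
        "sample": records[:sample_size],
    }

def fetch_page(doc_id, offset=0, limit=10):
    """Let the agent page through the stored records on demand."""
    return collection[doc_id][offset:offset + limit]

# A tool returning 500 records would hand the agent only the summary:
records = [{"id": i, "value": i * 2} for i in range(500)]
summary = store_and_summarize(records)
```

The agent then sees a handful of rows plus a `document_id`, and can request more via a paging tool instead of receiving everything at once.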
Thanks!
- n8n version: 1.63.4
- Database (default: SQLite): default
- n8n EXECUTIONS_PROCESS setting (default: own, main): default
- Running n8n via (Docker, npm, n8n cloud, desktop app): n8n cloud
- Operating system: n/a