TypeError: text.trim is not a function
    at N8nStructuredOutputParser.parse (/home/abhay/.nvm/versions/node/v22.16.0/lib/node_modules/n8n/node_modules/@n8n/n8n-nodes-langchain/utils/output_parsers/N8nStructuredOutputParser.ts:69:10)
    at N8nOutputFixingParser.parse (/home/abhay/.nvm/versions/node/v22.16.0/lib/node_modules/n8n/node_modules/@n8n/n8n-nodes-langchain/utils/output_parsers/N8nOutputFixingParser.ts:42:45)
    at RunnableLambda.func (/home/abhay/.nvm/versions/node/v22.16.0/lib/node_modules/n8n/node_modules/@n8n/n8n-nodes-langchain/nodes/agents/Agent/agents/ToolsAgent/common.ts:248:45)
    at /home/abhay/.nvm/versions/node/v22.16.0/lib/node_modules/n8n/node_modules/@langchain/core/dist/runnables/base.cjs:1798:44
    at AsyncLocalStorage.run (node:internal/async_local_storage/async_hooks:91:14)
    at AsyncLocalStorageProvider.runWithConfig (/home/abhay/.nvm/versions/node/v22.16.0/lib/node_modules/n8n/node_modules/@langchain/core/dist/singletons/async_local_storage/index.cjs:60:24)
    at output (/home/abhay/.nvm/versions/node/v22.16.0/lib/node_modules/n8n/node_modules/@langchain/core/dist/runnables/base.cjs:1796:64)
    at new Promise (<anonymous>)
    at RunnableLambda._transform (/home/abhay/.nvm/versions/node/v22.16.0/lib/node_modules/n8n/node_modules/@langchain/core/dist/runnables/base.cjs:1795:30)
    at processTicksAndRejections (node:internal/process/task_queues:105:5)
Information on your n8n setup
n8n version: 1.102.0
Database (default: SQLite): PostgreSQL
n8n EXECUTIONS_PROCESS setting (default: own, main): I don’t know
Running n8n via (Docker, npm, n8n cloud, desktop app): npm
Operating system: Fedora Linux 42 (Workstation Edition)
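The error message itself is plain JavaScript behavior: `.trim()` exists only on strings, so the exact TypeError above appears whenever a non-string value (e.g. already-parsed JSON) reaches code that expects raw text. A minimal illustrative sketch, where `parseLikeN8n` is a hypothetical stand-in for the real parser, not the actual n8n code:

```javascript
// Hypothetical stand-in for the parser: it assumes `text` is a string.
function parseLikeN8n(text) {
  return text.trim(); // works for strings, throws for objects
}

// Fine when the model returns a JSON string:
const ok = parseLikeN8n('  {"answer": 42}  ');

// Throws if the model output was already a parsed object:
let caught = "";
try {
  parseLikeN8n({ answer: 42 });
} catch (err) {
  caught = err.message; // "text.trim is not a function"
}
```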
I’ve been getting the same error. It doesn’t always occur, but I noticed something: it happens with Gemini models when tool calling or a JSON output parser is required.
I get that error using Gemini 2.5 Pro with a RAG connected to it. It looks like the output is being parsed with a JS function that fails, “text.trim”. It’s frustrating that the LLM doesn’t understand how to respond correctly, though maybe it’s more a problem of n8n itself not giving enough context to the model.
Or maybe it’s just the model. Let me know if you’re using Gemini 2.5 models; sincerely, I’ve never seen GPT models fail with this type of thing, nor Anthropic models.
I am using Flash as well, Flash 2.5, and this still happens… I have just decided to use a Basic LLM node after the AI Agent for getting data. I don’t face this bug with the Basic LLM node…
I’m receiving this error as well. I’m pretty sure it’s just a bug in the node’s code. What I think is happening: when you use an output parser alongside Gemini, you’re telling Gemini to switch to JSON mode. Gemini then returns JSON, but the node expects text and calls text.trim to remove excess whitespace, like a space. That obviously fails, because the value being passed is a parsed JSON object rather than a string, and .trim only exists on strings. So the response is being incorrectly handled by the node.

Until this issue gets fixed, the only real solution that I know of, outside of simply not using an AI Agent node and/or Gemini, is to stop using the output parser, let the model output JSON as a string, and then parse that string as JSON later, through a Code node right after the AI Agent node for example.
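A sketch of that workaround, assuming the agent’s response lands in the default `output` field and that Gemini may wrap its JSON in a ```json fence (both are assumptions; adjust to your workflow). `parseAgentOutput` is a hypothetical helper name:

```javascript
// Hypothetical helper for the workaround: let the agent return plain text,
// then parse it yourself in a Code node placed after the AI Agent node.
function parseAgentOutput(raw) {
  const cleaned = String(raw)
    .replace(/^```(?:json)?\s*/i, "") // strip a leading ```json fence, if any
    .replace(/```\s*$/, "")           // strip a trailing fence, if any
    .trim();
  return JSON.parse(cleaned);
}

// Inside an n8n Code node this might look like (assumes the default
// `output` field on the agent item):
//   return $input.all().map((item) => ({ json: parseAgentOutput(item.json.output) }));

const example = parseAgentOutput('```json\n{"city": "Paris", "temp": 21}\n```');
```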
If I’m right about this, it really should be a simple fix: just update the node’s code to properly handle JSON output, or other types of output, instead of assuming text. This is a bug that NEEDS to be fixed sooner rather than later!
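The kind of guard meant here could be as small as coercing non-string output back to text before trimming. This is only a sketch of the idea, with a hypothetical `normalizeModelOutput` name, not the actual n8n or LangChain code:

```javascript
// Hypothetical guard: coerce a parsed object back to a JSON string
// before the existing string handling runs.
function normalizeModelOutput(text) {
  if (typeof text !== "string") {
    // e.g. Gemini in JSON mode returned an already-parsed object
    text = JSON.stringify(text);
  }
  return text.trim();
}

const fromString = normalizeModelOutput('  {"ok": true}  '); // trims as before
const fromObject = normalizeModelOutput({ ok: true });       // no TypeError now
```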
Edit: So actually I’m wrong; it looks like this error might be caused by LangChain itself…