Does n8n Limit the Number of Output Tokens When Using OpenRouter?

Hello! I’m integrating OpenRouter with n8n to work with large language models (LLMs). However, I’m running into what looks like an output token limitation: although the model I’m using supports up to 8,000 output tokens, responses are always capped at around 2,000 tokens when triggered via n8n.

I want to understand:

  • Is there an output token limit imposed by OpenRouter (or by n8n itself) on responses?
  • If so, can this limit be raised or removed (e.g., via API parameters or node settings)?

Additionally, I’d love to know how others structure their prompts/workflows when doing LLM-based research in n8n. I’m especially interested in setting tasks clearly, aligning priorities, and improving collaboration with PMs during LLM-heavy workflows.

Any advice would be greatly appreciated!

Hi, in the OpenRouter Chat Model node you can press Add Option and, in the dropdown, pick the setting called Maximum Number of Tokens. When added, it defaults to -1 (no cap applied by the node), which should hopefully fix it.
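If you'd rather bypass the node and call OpenRouter's chat-completions endpoint directly (e.g. from an HTTP Request node or a script), the equivalent fix is to set `max_tokens` explicitly in the request body. A minimal sketch in Python — the model name, prompt, and API key are placeholders, not values from this thread:

```python
def build_chat_payload(prompt: str, model: str, max_tokens: int) -> dict:
    """Build the JSON body for OpenRouter's /api/v1/chat/completions endpoint.

    Setting max_tokens explicitly avoids falling back to a client- or
    provider-side default output cap.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,  # raise this up to the model's output limit
    }


# Hypothetical usage with any HTTP client, e.g. requests:
# import requests
# resp = requests.post(
#     "https://openrouter.ai/api/v1/chat/completions",
#     headers={"Authorization": "Bearer <YOUR_OPENROUTER_API_KEY>"},
#     json=build_chat_payload("Summarize this paper.", "openai/gpt-4o", 8000),
# )
payload = build_chat_payload("Summarize this paper.", "openai/gpt-4o", 8000)
print(payload["max_tokens"])  # 8000
```

Whether the full 8,000 tokens actually come back still depends on the model's own output ceiling, but at least no lower cap is injected on the client side.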