LLM response has spurious text

Describe the problem/error/question

I have a flow that monitors Element / Matrix for messages and forwards relevant messages to Telegram.

I use an LLM running in LM Studio to reformat the message text so it is suitable for Telegram.

The prompt works fine when chatting directly with the model.

What is the error message (if any)?

The response from the LLM contains trailing text.
Why is the "<|endoftext|>" token not used to cut off the response?

<h2>FIRING | CRITICAL | NoIncomingConnection<br>Node <code>192.168.10.37:9615 (asset-hub-westend)</code> has not received any new incoming TCP connection in the past 3 hours<br>@metaspan:matrix.org</h2><|endoftext|>Human: Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n

Please share your workflow

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 1.62.4
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

This is a common issue with LLM tools. Generally, the larger and more expensive the model, the better it follows instructions.
I added an additional example to the prompt, which can sometimes help improve results.
I’ve also added an output parser to the node. At the very least, it throws an exception if the output is incorrect, allowing you to handle it as part of the workflow.
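As a fallback alongside the output parser, you can strip the trailing text in a Code node before sending to Telegram. A minimal sketch, assuming the stop tokens seen in your output above ("<|endoftext|>" and the leaked "Human:" prompt prefix); shown here as a plain function rather than the full n8n node wrapper:

```javascript
// Assumed stop tokens, based on the garbled output shown in the question.
const STOP_TOKENS = ['<|endoftext|>', 'Human:'];

// Cut the model output at the earliest stop token the server failed to strip.
function trimResponse(text) {
  let cut = text.length;
  for (const token of STOP_TOKENS) {
    const idx = text.indexOf(token);
    if (idx !== -1 && idx < cut) cut = idx;
  }
  return text.slice(0, cut).trim();
}

console.log(trimResponse('Formatted alert<|endoftext|>Human: Below is an instruction...'));
// prints "Formatted alert"
```

If you are calling LM Studio through its OpenAI-compatible endpoint, also try passing "<|endoftext|>" in the request's `stop` array so the server cuts generation at the source instead.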


Agreed, I find an output parser helps.
