Describe the problem/error/question
I have a flow that monitors Element/Matrix for messages and forwards relevant ones to Telegram.
I use an LLM running on LM Studio to reformat the message text so it is suitable for Telegram.
The prompt works fine when I chat directly with the model.
What is the error message (if any)?
The response from the LLM contains trailing text after the expected output.
Why is the "<|endoftext|>" stop token not used to cut off the response?
<h2>FIRING | CRITICAL | NoIncomingConnection<br>Node <code>192.168.10.37:9615 (asset-hub-westend)</code> has not received any new incoming TCP connection in the past 3 hours<br>@metaspan:matrix.org</h2><|endoftext|>Human: Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n
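As a workaround while the stop-token handling is sorted out, the trailing text can be stripped in a Code node after the LLM call. This is a minimal sketch, assuming the response text arrives in a field named `response` (adjust the field name to match your actual node output); the stop-marker list here is taken from the sample output above:

```javascript
// Hedged sketch: cut the LLM response at the first stop marker.
// Markers are based on the trailing text seen in the sample output;
// extend the list if the model emits other continuation prompts.
const STOP_MARKERS = ['<|endoftext|>', '\nHuman:'];

function truncateAtStop(text, markers = STOP_MARKERS) {
  let cut = text.length;
  for (const marker of markers) {
    const idx = text.indexOf(marker);
    if (idx !== -1 && idx < cut) cut = idx; // keep earliest marker position
  }
  return text.slice(0, cut).trim();
}

// In an n8n Code node this would typically look like:
// return items.map(item => ({
//   json: { ...item.json, response: truncateAtStop(item.json.response) }
// }));
```

Separately, if the LLM node lets you set request options, passing `"<|endoftext|>"` in the API's stop/stop-sequences parameter (LM Studio's OpenAI-compatible endpoint accepts a `stop` field) should prevent the trailing text from being generated in the first place.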
Please share your workflow
Share the output returned by the last node
Information on your n8n setup
- n8n version: 1.62.4
- Database (default: SQLite):
- n8n EXECUTIONS_PROCESS setting (default: own, main):
- Running n8n via (Docker, npm, n8n cloud, desktop app):
- Operating system: