This seems like a reply based only on the system prompt. Check the user prompt field on the AI node; I think it gets an empty user prompt.
In the logs you should see one system prompt (which you have) AND a user prompt. In your case it's the chatInput “Quelles sont les 3 operacions de productions à reinsegner?”
To check whether the error is what I'm describing, try hardcoding the user prompt by adding it directly to the AI Agent node in place of the chatInput.
Hey there, I don’t follow you. I can see the chatInput in my last two screenshots: the one of the initial AI Cloud Chat Model and the one that retrieves the information from the database.
I’m using the mistral-small-latest model, which supposedly supports tool calling.
I will try another model tomorrow.
It works with pixtral-12b… I don’t get why. There must be a bug in n8n causing the model not to use the answer appropriately, because it receives exactly the same input, and it works fine according to Mistral’s documentation.
I have the exact same issue. A working workaround is to append this to your system prompt:
IMPORTANT: Your full answer needs to be prefixed with "AI:" - otherwise it will be rejected.
My thought process was that the LLM runs multiple times — on the last execution, the input is your prompt plus the tool output appended (e.g. “Tool: ”).
The LLM then decides this is already a sufficient answer to the question and that it’s unlikely more information needs to be added on top of it, but it is also forced to generate more than 0 tokens, so it adds filler like “let me know if there is anything else I can do to help”.
It still feels a bit like a bug in n8n that it supplies the input to Mistral in this way, since it is bound to lead to errors; apparently it works more often than not, unless you specifically circumvent it in your prompt.
The disadvantage of the workaround is that you need to remove the "AI:" prefix from the output afterwards, for example with a regex replace.
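The regex cleanup could look like this — a minimal sketch as a plain JavaScript function (in n8n you would run something like it in a Code node; the field names there depend on your workflow):

```javascript
// Strip the leading "AI:" prefix that the system-prompt workaround forces
// the model to emit. Written as a plain function so it runs anywhere;
// inside an n8n Code node you'd apply it to the agent's output field.
function stripAiPrefix(text) {
  // ^\s* tolerates leading whitespace; the prefix is removed only once,
  // at the very start of the answer.
  return text.replace(/^\s*AI:\s*/, "");
}

console.log(stripAiPrefix("AI: The three production operations are A, B and C."));
```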
I assume it might work without the workaround in more complex calls, where the RAG call isn’t the only necessary tool use and the RAG response isn’t the entire answer to the user query.
Is there different behavior with ChatGPT? Maybe they should route the AI output the same way it’s done with ChatGPT, if that works better.
I’ll take a look, but if anyone finds an issue or pull request on the n8n repository, let’s link it here. Otherwise it might be useful to create one ourselves!