Hello, I’ve been working with AI agents a bit and noticed they can be prone to hallucinating or taking too many liberties with the data I input. Is there a good way to adjust temperature for my datasets and AI prompts? I don’t think I saw this clearly listed anywhere in the documentation, but I rely on this heavily in my other AI projects when dealing with large amounts of RAG data to get the bots behaving properly.
I’m currently using the cloud-based n8n if it makes a difference.
Set the temperature parameter on the AI model node (e.g., OpenAI or Anthropic) that is connected to your agent. Lower values (e.g., 0.2–0.4) make the model more deterministic and less likely to hallucinate, while higher values increase creativity and, with it, the risk of hallucination.
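To make the setting concrete, here is a minimal sketch of where temperature sits in an OpenAI-style chat-completion request payload. The model name is just a placeholder, and in n8n cloud you would set this value in the model node's options panel rather than in code — this only illustrates what the node sends under the hood:

```python
import json

def build_request(prompt: str, temperature: float = 0.2) -> dict:
    """Build a chat-completion payload with an explicit temperature.

    Hypothetical example payload; field names follow the common
    OpenAI-style chat-completions shape.
    """
    return {
        "model": "gpt-4o-mini",      # placeholder model name
        "temperature": temperature,   # lower (0.0-0.4) = more deterministic
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Summarize the attached document.", temperature=0.2)
print(json.dumps(payload, indent=2))
```

The same numeric range applies regardless of provider: the node just forwards the value in the request.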
You can also use prompt engineering to instruct the agent to be more factual or to admit when it doesn’t know the answer.
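A sketch of the prompt-engineering side, combining a grounding system prompt with a low temperature. Again, the wording and model name are illustrative assumptions — in n8n you would paste the system text into the agent's system-message field:

```python
def build_grounded_request(question: str) -> dict:
    """Pair a factuality-focused system prompt with a low temperature.

    Hypothetical example; adapt the system text to your own use case.
    """
    system = (
        "Answer only from the provided context. "
        "If the answer is not in the context, say you don't know "
        "rather than guessing."
    )
    return {
        "model": "gpt-4o-mini",  # placeholder model name
        "temperature": 0.2,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
    }
```

In practice the system prompt does most of the work against hallucination in RAG setups, with temperature acting as a second dial.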
Thank you! I adjusted this setting and I’m getting somewhat more accurate results now. Next I need to figure out why the AI struggles to find things in the vector databases even when it’s expressly given the current datetime with each message.