Diminished results with AI + a simple fix

Has anyone ever noticed that conversational AI produces different outputs depending on the environment it’s built in?

This isn’t an n8n-specific issue, but I have run into it with n8n when setting up conversational AI agents.

And my suspicion is that it’s something to do with how LangChain manages chat memory.

Specifically what I’ve noticed:

  • Less likely to follow rules

  • Will occasionally output as the user

  • Outputs can be less conversational (think GPT-3.5-quality outputs even when using GPT-4 Turbo)

I’m not a novice when it comes to building AI agents and have been responsible for over 200k AI chats across various tools. My guess is that those previous tools used hidden prompts to tell the AI how to use chat memory.
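To make that guess concrete, here’s roughly what I imagine such a hidden preamble could look like when the stored history is stitched into the messages array. The wording and the helper below are my own sketch, not anything I’ve confirmed from those tools:

```python
# Hypothetical "hidden" preamble a tool might prepend before the chat history.
# The wording here is my own guess, not taken from any specific product.
MEMORY_PREAMBLE = (
    "The messages below are the earlier conversation with this user. "
    "Treat them as context only: always answer as the assistant, keep a "
    "conversational tone, and never write a message on the user's behalf."
)

def build_messages(system_prompt: str, history: list[dict], user_input: str) -> list[dict]:
    """Stitch stored chat memory into a messages array with an explicit instruction."""
    return (
        [{"role": "system", "content": f"{system_prompt}\n\n{MEMORY_PREAMBLE}"}]
        + history  # e.g. [{"role": "user", ...}, {"role": "assistant", ...}]
        + [{"role": "user", "content": user_input}]
    )
```

If the memory is dropped in without that kind of instruction, it would at least be consistent with the symptoms above (ignoring rules, replying as the user).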

And n8n is by far my favourite platform for building AI agents. A simple fix is to just use OpenAI Assistants, and results return to normal.
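For anyone who wants to see why that sidesteps the issue: the Assistants API keeps the conversation history in a server-side thread, so no client-side memory stitching happens at all. A rough sketch with the OpenAI Python SDK (the name and instructions are just placeholders, and the exact methods may vary between SDK versions):

```python
from openai import OpenAI

client = OpenAI()

# Assistants keep the chat history in a server-side thread,
# so there is no client-side memory/prompt stitching involved.
assistant = client.beta.assistants.create(
    name="Support agent",                             # placeholder name
    instructions="You are a helpful support agent.",  # placeholder rules
    model="gpt-4-turbo",
)
thread = client.beta.threads.create()

client.beta.threads.messages.create(thread_id=thread.id, role="user", content="Hi there")
run = client.beta.threads.runs.create_and_poll(thread_id=thread.id, assistant_id=assistant.id)

if run.status == "completed":
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    print(messages.data[0].content[0].text.value)  # latest assistant reply
```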

But I’d love to hear if anyone knows why different environments can produce different results.

Settings:

  • Memory: Xata
  • Model: OpenAI GPT-4 Turbo
  • Temperature: 0.5
  • Top P: 0.5
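For reference, this is how I’d reproduce those same settings in a plain API call outside n8n to compare outputs side by side (a rough sketch; the system prompt is just a placeholder):

```python
from openai import OpenAI

client = OpenAI()

# Same model and sampling settings as the n8n agent, called directly,
# so any difference in tone should come from how the history/prompt is assembled.
response = client.chat.completions.create(
    model="gpt-4-turbo",
    temperature=0.5,
    top_p=0.5,
    messages=[
        {"role": "system", "content": "You are a helpful support agent."},  # placeholder
        {"role": "user", "content": "Hi there"},
    ],
)
print(response.choices[0].message.content)
```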

It looks like your topic is missing some important information. Could you provide the following if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Hey @JamieW,

I think this is just one of the quirks of the current state of AI and LLMs in general. There are some options you can tweak, but getting your prompt nailed down is, I think, the most important part. That, and using your own data where possible.