Memory Node Interfering with AI Output in n8n

I am experiencing an issue where connecting the Postgres Chat Memory node in my n8n workflow leads to an incorrect output from the CheckerAI (LangChain AI Agent). When I remove the memory node, the AI provides the expected and correct output.

Expected Behavior:

  • The AI should correctly process the compliance checks and return accurate warnings.
  • Memory should enhance context retention without modifying the logic of the AI’s response.

Actual Behavior:

  • With memory enabled, the AI incorrectly adds extra warnings (e.g., "Truck weight exceeds maximum allowed limit" even when the weight is within the limit).
  • With memory removed, the AI returns the correct set of warnings.

Outputs:

Incorrect Output (with Memory)

{
  "output": {
    "warnings": [
      "Restricted commodity: alcohol",
      "Missing required permit: hazmat",
      "Truck weight exceeds maximum allowed limit"
    ]
  }
}

Correct Output (without Memory)

{
  "output": {
    "warnings": [
      "Restricted commodity: alcohol",
      "Missing required permit: hazmat"
    ]
  }
}
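For reference, the expected behavior is a stateless check: each shipment is evaluated only against the current input, so the weight warning should appear only when the weight actually exceeds the limit. The sketch below illustrates that logic in plain Python — the rule values, field names, and thresholds are assumptions for illustration, not taken from the actual CheckerAI prompt or workflow:

```python
# Hypothetical stateless compliance check. Rule sets, field names, and the
# weight limit are illustrative assumptions, not the real workflow config.
RESTRICTED_COMMODITIES = {"alcohol", "tobacco"}
REQUIRED_PERMITS = {"hazmat"}
MAX_TRUCK_WEIGHT_KG = 36_000

def check_compliance(shipment: dict) -> dict:
    warnings = []
    if shipment.get("commodity") in RESTRICTED_COMMODITIES:
        warnings.append(f"Restricted commodity: {shipment['commodity']}")
    # Any required permit not present on the shipment produces a warning.
    for permit in sorted(REQUIRED_PERMITS - set(shipment.get("permits", []))):
        warnings.append(f"Missing required permit: {permit}")
    # The weight warning depends only on the current shipment's weight.
    if shipment.get("weight_kg", 0) > MAX_TRUCK_WEIGHT_KG:
        warnings.append("Truck weight exceeds maximum allowed limit")
    return {"output": {"warnings": warnings}}

# A shipment under the weight limit never triggers the weight warning,
# no matter what earlier conversation turns contained.
result = check_compliance(
    {"commodity": "alcohol", "permits": [], "weight_kg": 30_000}
)
print(result)
```

With memory enabled, the agent appears to blend prior turns into this evaluation, which a deterministic check like the above would never do.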

Environment Details:

  • n8n Version: 1.73
  • Database Used for Memory: Postgres
  • Deployment: Self-hosted

Have you tried explicitly describing how memory should be used within the system prompt?
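For example, a system-prompt addition along these lines (the exact wording is a sketch, not tested against this workflow) can instruct the agent to treat memory as context only:

```
You have access to prior conversation history for context only.
Evaluate each compliance check independently, using ONLY the data
in the current message. Do not carry over warnings or shipment
details from previous turns.
```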
