Chat memory manager integration

I am confused about how to add the Chat Memory Manager to an AI node such as the Question and Answer Chain, since that node does not support memory. I do not want to switch to an AI Tools Agent; I need to use the Question and Answer Chain.

There is no error message; it is just not giving me an output, as I am confused about how to integrate memory properly.

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.) 

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 1.89
  • Database (default: SQLite): OneDrive
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): cloud
  • Operating system: macOS

Hello klrlN8N
I hope you are well!

Is the Simple Memory node linked to the “Question and Answer Chain”?
If not, configure the node to store and retrieve the interaction history (user messages and AI responses) and pass this history back to the AI model.

Possible adjustments include, in the “Simple Memory” node, configuring keys/values that store the messages:

{
  "user_message": {{$json["inputMessage"]}},
  "ai_message": {{$json["aiMessage"]}}
}

Inside the “Question and Answer Chain” node, link the memory output using inputs like

{
  "memory": {{$memory.previous_conversations}}
}

Define a flow to retrieve the history from the Simple Memory node and concatenate it with the current interaction.
Set this to “Insert Messages from AI/User” to maintain ongoing context.

In the first step of the Webhook flow, load the history from memory.
After the interaction with the AI model, insert the new message into the history.
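As a minimal sketch, an insert-mode Chat Memory Manager node for that second step could look like this (the parameter names are based on a recent n8n version; double-check them against your node's UI):

{
  "parameters": {
    "mode": "insert",
    "messages": {
      "messageValues": [
        { "type": "user", "message": "={{ $json.user_message }}" },
        { "type": "ai", "message": "={{ $json.ai_message }}" }
      ]
    }
  },
  "type": "@n8n/n8n-nodes-langchain.memoryManager",
  "typeVersion": 1.1,
  "name": "Insert Messages from AI/User"
}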

The Vector Store Retriever (highlighted in the flow) can be used to fetch relevant information from previous context.

Add the full history retrieved to the Vector Store Retriever, formatting the input like this:

{
  "input": "Contexto anterior: {{$memory.previous_conversations}}. Nova entrada: {{$json["user_message"]}}"
}

Make sure the following nodes are connected correctly:
“Simple Memory” receives previous messages and returns the history to Q&A.
“Insert Messages from AI/User” updates the memory with the new context.

Final Configuration Checklist
Memory Enabled: Make sure the “Simple Memory” node is configured to store the history.
Q&A Connection: Verify that the “Question and Answer Chain” is using the memory history.
Continuous Update: Confirm that each interaction updates the storage correctly in the “Insert Messages from AI/User” nodes.
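For reference, in the exported workflow JSON the memory wiring should look roughly like this (node names as used above; the key point is that both Memory Manager nodes share the same Simple Memory over the ai_memory connection):

"Simple Memory": {
  "ai_memory": [
    [
      { "node": "Chat Memory Manager", "type": "ai_memory", "index": 0 },
      { "node": "Insert Messages from AI/User", "type": "ai_memory", "index": 0 }
    ]
  ]
}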

I hope I have helped in some way.

What I am having trouble with is that it’s not able to insert messages into the memory storage.

How am I supposed to edit this field properly?


You sent the image above in SCHEMA format.
Send it again in JSON format

{
  "name": "Test",
  "nodes": [
    {
      "parameters": {},
      "type": "n8n-nodes-base.manualTrigger",
      "typeVersion": 1,
      "position": [-900, -240],
      "id": "b3d5f8ec-161c-4b05-b1a8-d553635e6bf6",
      "name": "When clicking ‘Test workflow’"
    },
    {
      "parameters": {
        "options": {
          "reset": false
        }
      },
      "type": "n8n-nodes-base.splitInBatches",
      "typeVersion": 3,
      "position": [-240, -240],
      "id": "8812bc64-48b2-477c-9324-1ece64113d58",
      "name": "Loop Over Items"
    },
    {
      "parameters": {
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.embeddingsOpenAi",
      "typeVersion": 1.2,
      "position": [-20, -20],
      "id": "8db28cfa-237c-48cc-b88e-22d92256c576",
      "name": "Embeddings OpenAI",
      "credentials": {
        "openAiApi": { "id": "JOpHgYldTT1RHOaE", "name": "OpenAi account" }
      }
    },
    {
      "parameters": {
        "dataType": "binary",
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.documentDefaultDataLoader",
      "typeVersion": 1,
      "position": [100, -17.5],
      "id": "a3becf09-5601-4c7e-8453-f42b131bd909",
      "name": "Default Data Loader"
    },
    {
      "parameters": {
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.textSplitterRecursiveCharacterTextSplitter",
      "typeVersion": 1,
      "position": [188, 180],
      "id": "e800e65d-f166-4492-909d-96f9a7dc3925",
      "name": "Recursive Character Text Splitter"
    },
    {
      "parameters": {
        "mode": "insert"
      },
      "type": "@n8n/n8n-nodes-langchain.vectorStoreInMemory",
      "typeVersion": 1.1,
      "position": [-4, -240],
      "id": "0cfc57e2-d976-49e6-9273-78eb2aab1b9a",
      "name": "Simple Vector Store"
    },
    {
      "parameters": {
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.chatTrigger",
      "typeVersion": 1.1,
      "position": [-900, 1037.5],
      "id": "fdf4ca07-fbab-4987-9ba5-8a4deb0bbbb1",
      "name": "When chat message received",
      "webhookId": "37a9cbf9-7c5a-44cd-bb68-405472400b75"
    },
    {
      "parameters": {},
      "type": "@n8n/n8n-nodes-langchain.vectorStoreInMemory",
      "typeVersion": 1.1,
      "position": [-184, 1457.5],
      "id": "cf89051b-251b-4b59-97e0-f0740ff9e996",
      "name": "Simple Vector Store1"
    },
    {
      "parameters": {
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.embeddingsOpenAi",
      "typeVersion": 1.2,
      "position": [-96, 1652.5],
      "id": "a325aa03-8183-43ba-abb1-21be663d9450",
      "name": "Embeddings OpenAI1",
      "credentials": {
        "openAiApi": { "id": "JOpHgYldTT1RHOaE", "name": "OpenAi account" }
      }
    },
    {
      "parameters": {
        "resource": "folder",
        "folderId": "01C7YEKCBKQVRTFOEMANFICAR27K3CWD5A"
      },
      "type": "n8n-nodes-base.microsoftOneDrive",
      "typeVersion": 1,
      "position": [-680, -240],
      "id": "97a99239-913b-4a99-b6c1-a0bf0b274e8a",
      "name": "Microsoft OneDrive4",
      "credentials": {
        "microsoftOneDriveOAuth2Api": { "id": "YKrv9ZdRbB2Ep1Us", "name": "Microsoft Drive account" }
      }
    },
    {
      "parameters": {
        "operation": "download",
        "fileId": "={{ $json[\"id\"] }}\n"
      },
      "type": "n8n-nodes-base.microsoftOneDrive",
      "typeVersion": 1,
      "position": [-460, -240],
      "id": "d85c6b0f-8e03-4040-87ab-e36e771c2b83",
      "name": "Microsoft OneDrive5",
      "credentials": {
        "microsoftOneDriveOAuth2Api": { "id": "YKrv9ZdRbB2Ep1Us", "name": "Microsoft Drive account" }
      }
    },
    {
      "parameters": {
        "promptType": "define",
        "text": "={{ $('When chat message received').item.json.chatInput }}",
        "options": {
          "systemPromptTemplate": "=You are a question-answering assistant. Use both the conversation history below and any retrieved documents to craft your answer.\n\n── Conversation history ──\n{{ $json[\"Chat Memory Manager\"].messages\n  .map(m => m.ai)\n  .join(\"\\n\\n\") }}\n\n── Retrieved context ──\n{context}\n\nUser's question:\n{{ $json[\"When chat message received\"].chatInput }}\n"
        }
      },
      "type": "@n8n/n8n-nodes-langchain.chainRetrievalQa",
      "typeVersion": 1.5,
      "position": [-288, 1037.5],
      "id": "ba6cc6f4-5997-4e52-bd06-9e838f61aa21",
      "name": "Question and Answer Chain",
      "notesInFlow": false
    },
    {
      "parameters": {},
      "type": "@n8n/n8n-nodes-langchain.retrieverVectorStore",
      "typeVersion": 1,
      "position": [-184, 1260],
      "id": "9d14085a-5423-48f2-adfb-b862beae3948",
      "name": "Vector Store Retriever"
    },
    {
      "parameters": {
        "model": { "__rl": true, "mode": "list", "value": "gpt-4o-mini" },
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
      "typeVersion": 1.2,
      "position": [-380, 1220],
      "id": "b57dec36-934d-44f7-a22c-64b344d509bd",
      "name": "OpenAI Chat Model2",
      "credentials": {
        "openAiApi": { "id": "JOpHgYldTT1RHOaE", "name": "OpenAi account" }
      }
    },
    {
      "parameters": {
        "respondWith": "text",
        "responseBody": "={{$json[\"body\"]}}\n",
        "options": {
          "responseCode": 200,
          "responseHeaders": {
            "entries": [
              {
                "name": "Name: Content-Type",
                "value": "Value: text/plain; charset=utf-8"
              }
            ]
          }
        }
      },
      "type": "n8n-nodes-base.respondToWebhook",
      "typeVersion": 1.1,
      "position": [-900, 480],
      "id": "a0604cb8-349e-405f-a000-cc6114b0f385",
      "name": "Respond to Webhook1"
    },
    {
      "parameters": {
        "assignments": {
          "assignments": [
            {
              "id": "2a08d48f-876b-45d2-b505-9a7ea4d04408",
              "name": "output",
              "value": "={{ $json.response }}",
              "type": "string"
            }
          ]
        },
        "options": {}
      },
      "type": "n8n-nodes-base.set",
      "typeVersion": 3.4,
      "position": [270, 1187.5],
      "id": "e7b3cda8-94e1-43ce-98e3-0b26f4ac86c2",
      "name": "Edit Fields"
    },
    {
      "parameters": {
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.memoryManager",
      "typeVersion": 1.1,
      "position": [-680, 1037.5],
      "id": "a3dc3e69-1640-4a49-9024-773676a73c1b",
      "name": "Chat Memory Manager"
    },
    {
      "parameters": {
        "sessionIdType": "customKey",
        "sessionKey": "345"
      },
      "type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
      "typeVersion": 1.3,
      "position": [-200, 680],
      "id": "80215ac1-6950-466d-9a28-ab6a7a203c2d",
      "name": "Simple Memory"
    },
    {
      "parameters": {
        "mode": "insert"
      },
      "type": "@n8n/n8n-nodes-langchain.memoryManager",
      "typeVersion": 1.1,
      "position": [260, 700],
      "id": "35dd13e3-f9df-4204-8054-81886943ec53",
      "name": "Chat Memory Manager1"
    },
    {
      "parameters": {
        "sessionIdType": "customKey",
        "sessionKey": "42069"
      },
      "type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
      "typeVersion": 1.3,
      "position": [-2280, 1440],
      "id": "b86bf32c-1019-4ed0-b5d8-1d95894ada22",
      "name": "Simple Memory1"
    },
    {
      "parameters": {},
      "type": "n8n-nodes-base.merge",
      "typeVersion": 3.1,
      "position": [-900, 220],
      "id": "fbfe37cf-f72f-4953-be96-91e7939a6f8f",
      "name": "Merge"
    }
  ],
  "pinData": {},
  "connections": {
    "When clicking ‘Test workflow’": {
      "main": [[{ "node": "Microsoft OneDrive4", "type": "main", "index": 0 }]]
    },
    "Loop Over Items": {
      "main": [[], [{ "node": "Simple Vector Store", "type": "main", "index": 0 }]]
    },
    "Embeddings OpenAI": {
      "ai_embedding": [[{ "node": "Simple Vector Store", "type": "ai_embedding", "index": 0 }]]
    },
    "Default Data Loader": {
      "ai_document": [[{ "node": "Simple Vector Store", "type": "ai_document", "index": 0 }]]
    },
    "Recursive Character Text Splitter": {
      "ai_textSplitter": [[{ "node": "Default Data Loader", "type": "ai_textSplitter", "index": 0 }]]
    },
    "Simple Vector Store": {
      "main": [[{ "node": "Loop Over Items", "type": "main", "index": 0 }]]
    },
    "When chat message received": {
      "main": [[{ "node": "Chat Memory Manager", "type": "main", "index": 0 }]]
    },
    "Simple Vector Store1": {
      "ai_vectorStore": [[{ "node": "Vector Store Retriever", "type": "ai_vectorStore", "index": 0 }]]
    },
    "Embeddings OpenAI1": {
      "ai_embedding": [[{ "node": "Simple Vector Store1", "type": "ai_embedding", "index": 0 }]]
    },
    "Microsoft OneDrive4": {
      "main": [[{ "node": "Microsoft OneDrive5", "type": "main", "index": 0 }]]
    },
    "Microsoft OneDrive5": {
      "main": [[{ "node": "Loop Over Items", "type": "main", "index": 0 }]]
    },
    "Vector Store Retriever": {
      "ai_retriever": [[{ "node": "Question and Answer Chain", "type": "ai_retriever", "index": 0 }]]
    },
    "OpenAI Chat Model2": {
      "ai_languageModel": [[{ "node": "Question and Answer Chain", "type": "ai_languageModel", "index": 0 }]]
    },
    "Question and Answer Chain": {
      "main": [
        [
          { "node": "Edit Fields", "type": "main", "index": 0 },
          { "node": "Chat Memory Manager1", "type": "main", "index": 0 }
        ]
      ]
    },
    "Edit Fields": {
      "main": []
    },
    "Simple Memory": {
      "ai_memory": [
        [
          { "node": "Chat Memory Manager", "type": "ai_memory", "index": 0 },
          { "node": "Chat Memory Manager1", "type": "ai_memory", "index": 0 }
        ]
      ]
    },
    "Chat Memory Manager": {
      "main": [[{ "node": "Question and Answer Chain", "type": "main", "index": 0 }]]
    },
    "Simple Memory1": {
      "ai_memory": []
    },
    "Respond to Webhook1": {
      "main": []
    },
    "Chat Memory Manager1": {
      "main": []
    }
  },
  "active": false,
  "settings": {
    "executionOrder": "v1"
  },
  "versionId": "4fd292a9-16cb-4f49-ac0e-3a53aa6c582e",
  "meta": {
    "templateCredsSetupCompleted": true,
    "instanceId": "274690a662ddcf878f1a5082b9b826fa420de465650222dc51b254869704139c"
  },
  "id": "1iWDWVmfpoPqFldI",
  "tags":

Does this work?

Please test this version in a separate workflow. Read carefully.

This workflow uses vectorStoreInMemory, which is volatile (data is lost when N8N restarts). I will upload another workflow with higher persistence after this one.

The version below fixes formatting issues (such as curly quotes preventing import), and adjusts the expressions and configuration of Langchain nodes, including memory management.

Replaced curly quotes (“, ”) with standard double quotes (") to ensure JSON validity and proper import into N8N.

Fixed the OneDrive document upload and vectorization flow. It now lists the files, downloads each one, processes them (data loader, splitter), and inserts them into the Vector Store. The original SplitInBatches node seemed misplaced and was removed in favor of N8N’s implicit iteration over OneDrive items.
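Expressed as n8n connections, the ingestion chain is now roughly the following (node names as in the workflow below; “Vector Store (Upsert)” is the renamed insert node):

"Microsoft OneDrive4": {
  "main": [[{ "node": "Microsoft OneDrive5", "type": "main", "index": 0 }]]
},
"Microsoft OneDrive5": {
  "main": [[{ "node": "Vector Store (Upsert)", "type": "main", "index": 0 }]]
}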

Memory is now fetched before the QA chain.

Set the memory’s sessionKey to use the Chat Trigger node’s sessionId (assuming it provides one), making the memory specific to each chat session.
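Something like this in the Simple Memory node (assuming the Chat Trigger provides a sessionId field, which the built-in chat UI does by default):

{
  "sessionIdType": "customKey",
  "sessionKey": "={{ $('When chat message received').item.json.sessionId }}"
}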

Fixed accessing memory history from the QA chain prompt to correctly map to role and content.
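As a sketch, assuming each stored message carries either a human or an ai text (which your earlier .map(m => m.ai) expression suggests), the prompt can render the history like this:

{{ $('Chat Memory Manager').first().json.messages
    .map(m => (m.human !== undefined ? 'user: ' + m.human : 'assistant: ' + m.ai))
    .join('\n') }}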

Added a Set node (“Prepare Turn for Memory”) to correctly format the user’s question and AI response before saving to memory.
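A minimal sketch of that Set node (the response field name follows your Edit Fields node, which reads $json.response):

{
  "parameters": {
    "assignments": {
      "assignments": [
        { "name": "user_message", "value": "={{ $('When chat message received').item.json.chatInput }}", "type": "string" },
        { "name": "ai_message", "value": "={{ $json.response }}", "type": "string" }
      ]
    }
  },
  "type": "n8n-nodes-base.set",
  "typeVersion": 3.4,
  "name": "Prepare Turn for Memory"
}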

The Chat Memory Manager node that saves memory (upsert) now uses the formatted input from the Set node.

The Respond To Webhook node is now connected to send the AI response back.

Nodes like Merge and Simple Memory1 that were not connected have been removed.

Expressions in nodes like Microsoft OneDrive5 (fileId), Retrieval QA Chain (prompt), Simple Memory (sessionKey), Respond To Webhook (responseBody) have been fixed.

Node Names: Some names have been adjusted for clarity (e.g. “Vector Store (Upsert)”).

Important Notes…

Credentials: Make sure the Credentials IDs are correct and that the credentials are active on your N8N.

This workflow uses vectorStoreInMemory, which is volatile (data is lost when N8N restarts).

In the case of InMemory, they implicitly use the same in-memory storage during workflow execution.

Session ID: The memory now depends on the sessionId provided by Chat Trigger. Make sure your frontend or the way you call the Chat Trigger webhook sends a unique session ID for each conversation.
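For example, the JSON body your frontend posts to the Chat Trigger webhook could look like this (the field names are an assumption; match whatever your trigger actually emits):

{
  "sessionId": "user-123-conversation-42",
  "chatInput": "What does the contract say about renewal terms?"
}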

The OpenAI embedding model is text-embedding-ada-002 and the chat model is gpt-4o-mini. Make sure these are the models you want and that your OpenAI account has access to them.

Run the ingestion part (using Manual Trigger) first to load the documents into Vector Store. Then, test the chat flow by sending messages to the Chat Trigger endpoint.

Ok thanks so much bro. I will test it and let you know. Again thank you so much for your help

Hello, could you kindly mark my previous post as the solution (blue box with check mark) so that this ongoing discussion does not distract others who want to find out the answer to the original question? Thanks.

Unfortunately it didn’t work. I think you need to load the history into the AI system prompt somehow.

It seems that the Chat Memory Manager is not returning the history in the format that the Question and Answer Chain expects (a list of messages with role and content).

In the Chat Memory Manager, saved messages should be in the following format (as a list of objects)

[
{ "role": "user", "content": "How to configure N8N?" },
{ "role": "assistant", "content": "You can configure N8N using Docker." }
]

If the data in memory is not in this format, the Question and Answer Chain node will not be able to use it correctly.

Test:

Open the Chat Memory Manager (Upsert or Get).

Go to the “Execution Data” tab after executing it.

Check the return and see how the message data is structured.
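For reference, a retrieve-mode Chat Memory Manager often returns a structure along these lines (this shape is an assumption based on your earlier .messages.map(m => m.ai) expression; trust what the Execution Data tab actually shows):

{
  "messages": [
    { "human": "How to configure N8N?" },
    { "ai": "You can configure N8N using Docker." }
  ]
}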

Fix the expression in the Question and Answer Chain node.
The current expression seems to be trying to access data in a confusing way. Replace the expression in the System Prompt Template with something more robust; use this updated example:


You are a question-answering assistant. Use both the conversation history below and any retrieved documents to craft your answer.

── Conversation History ──
{{ $json["memoryMessages"]?.map(m => `${m.role}: ${m.content}`).join("\n") || "No history available" }}

── Retrieved Content ──
{{ $json["documents"] || "No relevant documents retrieved" }}

User question:
{{ $json["chatInput"] }}

I replaced direct accesses ($json["Chat Memory Manager"]) with references to the memoryMessages field, which is what the system expects. If something is empty or missing, a fallback ("No history available") will be displayed, preventing errors.

Make sure of the following:

The Chat Memory Manager node is returning the correct data and is connected to the Question and Answer Chain node.
History is being saved to the Chat Memory Manager node (Upsert) after interaction.

I think the expression is correct, but I’m missing something.

To find the root of the problem, I am trying something super simple like this, without the vector storage:
{
  "nodes": [
    {
      "parameters": {
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.chatTrigger",
      "typeVersion": 1.1,
      "position": [-1380, 940],
      "id": "fdf4ca07-fbab-4987-9ba5-8a4deb0bbbb1",
      "name": "When chat message received",
      "webhookId": "37a9cbf9-7c5a-44cd-bb68-405472400b75"
    },
    {
      "parameters": {
        "model": { "__rl": true, "mode": "list", "value": "gpt-4o-mini" },
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
      "typeVersion": 1.2,
      "position": [-720, 1380],
      "id": "b57dec36-934d-44f7-a22c-64b344d509bd",
      "name": "OpenAI Chat Model2",
      "credentials": {
        "openAiApi": { "id": "JOpHgYldTT1RHOaE", "name": "OpenAi account" }
      }
    },
    {
      "parameters": {
        "mode": "insert"
      },
      "type": "@n8n/n8n-nodes-langchain.memoryManager",
      "typeVersion": 1.1,
      "position": [-700, 760],
      "id": "a3dc3e69-1640-4a49-9024-773676a73c1b",
      "name": "Chat Memory Manager"
    },
    {
      "parameters": {
        "sessionIdType": "customKey",
        "sessionKey": "345"
      },
      "type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
      "typeVersion": 1.3,
      "position": [-980, 1300],
      "id": "80215ac1-6950-466d-9a28-ab6a7a203c2d",
      "name": "Simple Memory"
    },
    {
      "parameters": {
        "mode": "insert"
      },
      "type": "@n8n/n8n-nodes-langchain.memoryManager",
      "typeVersion": 1.1,
      "position": [-1100, 940],
      "id": "35dd13e3-f9df-4204-8054-81886943ec53",
      "name": "Chat Memory Manager1"
    },
    {
      "parameters": {
        "promptType": "define",
        "text": "=You are a friendly chatbot. refer to {{ $json[\"memoryMessages\"]?.map(m => `${m.role}: ${m.content}`).join(\"\\n\") || \"No history available\" }}\n"
      },
      "type": "@n8n/n8n-nodes-langchain.chainLlm",
      "typeVersion": 1.6,
      "position": [-480, 900],
      "id": "11ea61bf-7dab-4249-ba3c-d850b3162f4f",
      "name": "Basic LLM Chain"
    }
  ],
  "connections": {
    "When chat message received": {
      "main": [[{ "node": "Chat Memory Manager1", "type": "main", "index": 0 }]]
    },
    "OpenAI Chat Model2": {
      "ai_languageModel": [[{ "node": "Basic LLM Chain", "type": "ai_languageModel", "index": 0 }]]
    },
    "Chat Memory Manager": {
      "main": []
    },
    "Simple Memory": {
      "ai_memory": [
        [
          { "node": "Chat Memory Manager1", "type": "ai_memory", "index": 0 },
          { "node": "Chat Memory Manager", "type": "ai_memory", "index": 0 }
        ]
      ]
    },
    "Chat Memory Manager1": {
      "main": [[{ "node": "Basic LLM Chain", "type": "main", "index": 0 }]]
    },
    "Basic LLM Chain": {
      "main": [[{ "node": "Chat Memory Manager", "type": "main", "index": 0 }]]
    }
  },
  "pinData": {},
  "meta": {
    "templateCredsSetupCompleted": true,
    "instanceId": "274690a662ddcf878f1a5082b9b826fa420de465650222dc51b254869704139c"
  }
}

If any part is in Portuguese, please excuse me. Remember that I am in Brazil and do not speak English, so I need to copy the user’s request and run it through a translator.
After translating from English to Portuguese, I take the user’s need and study the case to suggest possible corrections. This takes a lot of time, because I need to dedicate myself and look for ways to help those in need.
I am also a user and have no connection with N8N.
I am here dedicating my time to help and also to learn.

See if the suggestions below are useful to you.

You have two “Memory Manager” nodes that do not appear to be configured correctly in the flow.

The prompt formatting in the “Basic LLM Chain” may cause rendering issues.

Some connections between nodes are not configured.

There are no mechanisms to handle failures or errors.

You are using a fixed session key (“345”) instead of a dynamic one.

Simplify the Memory Flow

Add an “Error Trigger” node to catch and handle errors…

{
  "parameters": {},
  "type": "n8n-nodes-base.errorTrigger",
  "typeVersion": 1,
  "position": [
    -1380,
    1100
  ],
  "id": "error-handler",
  "name": "Tratamento de Erros"
},
{
  "parameters": {
    "content": "=## Processing Error\n\nAn error occurred while processing your request: {{$json.error.message}}\n\nPlease try again.",
    "options": {}
  },
  "type": "n8n-nodes-base.set",
  "typeVersion": 1,
  "position": [
    -1100,
    1100
  ],
  "id": "format-error",
  "name": "Formatar Erro"
}

Use a dynamic approach to session identification

// In the "Memória de Conversa" node
"sessionIdType": "fromInput",
"sessionKey": "={{$json.sessionId || $json.chatId || $json.userId || $execution.id}}"


Add a "Function" node for logging

{
  "parameters": {
    "functionCode": "// Log the current conversation\nconst memoryMessages = $input.item.json.memoryMessages || [];\nconst currentInput = $input.item.json.input;\n\nconsole.log(`[${new Date().toISOString()}] Session: ${$input.item.json.sessionId || 'unknown'} | Input: ${currentInput}`);\n\nreturn $input.item;"
  },
  "type": "n8n-nodes-base.function",
  "typeVersion": 1,
  "position": [-600, 840],
  "id": "debug-logger",
  "name": "Log de Depuração"
}


Enhance your prompt with clear instructions, for example: “You are a friendly and helpful chatbot. Your job is to provide useful and accurate information.”

Guidelines:
1. Answer clearly and concisely
2. If you don't know the answer, admit it honestly
3. Keep your tone conversational and friendly
4. Avoid long answers

Conversation history:
{{$json.memoryMessages}}

Current question: {{$json.input}}

You can also implement Rate Limiting by adding a node to control the frequency of requests to the OpenAI API.
You can implement a mechanism to collect feedback on responses.
You can add input validation to prevent malicious prompts.
You can configure fallback responses in case the OpenAI API fails.
And you can add nodes to track metrics like response time and success rate.
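As a minimal sketch of the metrics idea (the _t0 field is hypothetical and would have to be set by an earlier Set or Function node when the request arrives):

{
  "parameters": {
    "functionCode": "// Hypothetical metrics logger: compute the response time from a timestamp\n// stored in the item as _t0 before the model call\nconst t0 = $input.item.json._t0 || Date.now();\nconst elapsedMs = Date.now() - t0;\nconsole.log(`[metrics] response time: ${elapsedMs} ms`);\nreturn $input.item;"
  },
  "type": "n8n-nodes-base.function",
  "typeVersion": 1,
  "name": "Metrics Logger"
}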

Below is a suggested improvement. Note that the nodes are disconnected so that you can do as you wish; this is just a suggestion.


{
  "nodes": [
    {
      "parameters": {
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.chatTrigger",
      "typeVersion": 1.1,
      "position": [-1380, 940],
      "id": "fdf4ca07-fbab-4987-9ba5-8a4deb0bbbb1",
      "name": "Quando a mensagem de bate-papo foi recebida",
      "webhookId": "37a9cbf9-7c5a-44cd-bb68-405472400b75"
    },
    {
      "parameters": {
        "sessionIdType": "fromInput",
        "sessionKey": "={{$json.sessionId || $json.chatId || $json.userId || $execution.id}}"
      },
      "type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
      "typeVersion": 1.3,
      "position": [-1120, 940],
      "id": "80215ac1-6950-466d-9a28-ab6a7a203c2d",
      "name": "Memória de Conversa"
    },
    {
      "parameters": {
        "model": { "__rl": true, "mode": "list", "value": "gpt-4o-mini" },
        "options": {
          "temperature": 0.7,
          "maxTokens": 2000
        }
      },
      "type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
      "typeVersion": 1.2,
      "position": [-860, 940],
      "id": "b57dec36-934d-44f7-a22c-64b344d509bd",
      "name": "Modelo de bate-papo OpenAI",
      "credentials": {
        "openAiApi": { "id": "JOpHgYldTT1RHOaE", "name": "Conta OpenAi" }
      }
    },
    {
      "parameters": {
        "functionCode": "// Log the current conversation\nconst memoryMessages = $input.item.json.memoryMessages || [];\nconst currentInput = $input.item.json.input;\n\nconsole.log(`[${new Date().toISOString()}] Message received: ${currentInput}`);\n\nreturn $input.item;"
      },
      "type": "n8n-nodes-base.function",
      "typeVersion": 1,
      "position": [-720, 1060],
      "id": "debug-logger",
      "name": "Log de Depuração"
    },
    {
      "parameters": {
        "promptType": "define",
        "text": "You are a friendly and efficient chatbot. Answer the user's question based on the conversation history:\n\nHistory:\n{{$json.memoryMessages}}\n\nCurrent question: {{$json.input}}"
      },
      "type": "@n8n/n8n-nodes-langchain.chainLlm",
      "typeVersion": 1.6,
      "position": [-480, 940],
      "id": "11ea61bf-7dab-4249-ba3c-d850b3162f4f",
      "name": "Processamento de Prompt"
    },
    {
      "parameters": {
        "mode": "insert"
      },
      "type": "@n8n/n8n-nodes-langchain.memoryManager",
      "typeVersion": 1.1,
      "position": [-240, 940],
      "id": "a3dc3e69-1640-4a49-9024-773676a73c1b",
      "name": "Atualizar Memória"
    },
    {
      "parameters": {
        "content": "=## Chatbot Response\n\n{{$json.output}}",
        "options": {}
      },
      "type": "n8n-nodes-base.set",
      "typeVersion": 1,
      "position": [0, 940],
      "id": "format-response",
      "name": "Formatar Resposta"
    },
    {
      "parameters": {},
      "type": "n8n-nodes-base.errorTrigger",
      "typeVersion": 1,
      "position": [-1380, 1140],
      "id": "error-handler",
      "name": "Tratamento de Erros"
    },
    {
      "parameters": {
        "content": "=## Processing Error\n\nAn error occurred while processing your request: {{$json.error.message}}",
        "options": {}
      },
      "type": "n8n-nodes-base.set",
      "typeVersion": 1,
      "position": [-1120, 1140],
      "id": "format-error",
      "name": "Formatar Erro"
    }
  ],
  "connections": {
    "Quando a mensagem de bate-papo foi recebida": {
      "main": [[{ "node": "Memória de Conversa", "type": "main", "index": 0 }]]
    },
    "Memória de Conversa": {
      "main": [[{ "node": "Modelo de bate-papo OpenAI", "type": "main", "index": 0 }]]
    },
    "Modelo de bate-papo OpenAI": {
      "main": [[{ "node": "Log de Depuração", "type": "main", "index": 0 }]]
    },
    "Log de Depuração": {
      "main": [[{ "node": "Processamento de Prompt", "type": "main", "index": 0 }]]
    },
    "Processamento de Prompt": {
      "main": [[{ "node": "Atualizar Memória", "type": "main", "index": 0 }]]
    },
    "Atualizar Memória": {
      "main": [[{ "node": "Formatar Resposta", "type": "main", "index": 0 }]]
    },
    "Tratamento de Erros": {
      "main": [[{ "node": "Formatar Erro", "type": "main", "index": 0 }]]
    }
  }
}

If this solution does not correct your current situation, I strongly recommend that you make a backup of your old workflow, and create a new workflow following the guidelines of your need.

I hope I have helped in some way.

Big hug

Hi, thank you again. Your effort is greatly appreciated, as you are the only one who helped me. Again, thank you so much.


I am very happy with your recognition and it motivates me to continue studying and being useful to people. Could you please mark my previous post as the solution (blue box with check mark) so that this ongoing discussion does not distract others who want to find the answer to the original question? Thank you.