OpenAI node requires a thread ID even when using a memory connector

I’m using the n8n Assistant OpenAI node in a chatbot flow.

To handle conversational memory, I connected a Window Buffer Memory node (memoryBufferWindow) with a custom sessionKey based on the user’s contact info (e.g., WhatsApp number + platform ID). I then connected its ai_memory output to the Assistant node.
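For context, a session key like that could be built in a Code node along these lines; this is a sketch, and the field names `phone` and `platform` are assumptions about the incoming payload:

```javascript
// Hypothetical sketch: build a per-contact session key for Window Buffer Memory.
// Field names `phone` and `platform` are assumptions; adapt to your payload.
function buildSessionKey(contact) {
  // Normalize the phone number so "+1 (555)..." and "1555..." map to one session.
  const phone = String(contact.phone).replace(/\D/g, "");
  return `${contact.platform}:${phone}`;
}

console.log(buildSessionKey({ phone: "+1 (555) 010-9999", platform: "whatsapp" }));
// → "whatsapp:15550109999"
```

Normalizing the number matters because the same contact can arrive formatted differently, and a different sessionKey means a different (empty) memory buffer.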

My expectation was that the Assistant node would use the local memory and not require any OpenAI thread_id, since I’m trying to manage conversation context entirely within n8n — not in the OpenAI cloud.


Issue:

Despite having memory connected, the Assistant node throws this error when triggered:

```
400 Invalid 'thread_id': 'undefined'. Expected an ID that begins with 'thread'.
```

This seems to indicate that the node requires a valid thread_id no matter what, even when memory is already being handled via the built-in buffer.



My questions:

  1. Is this expected behavior or a possible bug?
  2. Is there a way to use the Assistant node without thread_id, relying only on local memory?
  3. If not, should I switch to the standard OpenAI Chat node (n8n-nodes-base.openAi.chat), which works perfectly with Window Buffer Memory and doesn’t depend on threads?

Note:
Avoiding thread_id is intentional in my case, since I want full control over conversation state within n8n, without relying on OpenAI’s thread storage.

I can share a JSON export of the workflow if needed.

Thanks in advance!


I have the same problem. I don’t know how to solve it.

Actually, n8n was updated and this bug appeared. The only way I found to work around it is to use a thread_id as memory.

Should we wait for the error to be fixed? Because I have to use MongoDB chat memory.

I’m using thread_id now.

Did you leave the thread_id blank? When I fill it in, it doesn’t accept it.

Hi, you can either pass `undefined`, or, before the “Message Assistant” node, send a POST request to “https://api.openai.com/v1/threads”; OpenAI will then create a thread_id. You can send the thread_id generated by OpenAI instead of `undefined`.
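As a hedged sketch of that pre-step (in n8n you would typically use an HTTP Request node instead; the `OpenAI-Beta: assistants=v2` header is how the Assistants API is currently versioned):

```javascript
// Sketch: create an OpenAI thread before the "Message Assistant" node runs.
// Split into a pure request builder and a caller so the builder is easy to test.
function buildThreadRequest(apiKey) {
  return {
    url: "https://api.openai.com/v1/threads",
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
        "OpenAI-Beta": "assistants=v2", // required for the Assistants API
      },
    },
  };
}

async function createThread(apiKey, fetchImpl = fetch) {
  const { url, options } = buildThreadRequest(apiKey);
  const res = await fetchImpl(url, options);
  if (!res.ok) throw new Error(`Thread creation failed: ${res.status}`);
  const data = await res.json();
  return data.id; // e.g. "thread_abc123" — pass this to the Assistant node
}
```

The returned `id` is what the Assistant node expects in its thread ID field.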

I had the same problem and I did something like this.

If you need to save the data in your own database, you’ll have to add an extra node and store the most important information there; but if that isn’t necessary, just send the thread_id and OpenAI will take care of the data.

PS: excuse my English, I’m from Latin America, hehe.


Same problem here. This workaround just doesn’t make sense when there is a memory connector whose whole role is to handle this in core… :frowning:


The same happened to me, even though the workflow was working perfectly before. So it’s either some new change in OpenAI Assistants or in the n8n nodes.

My solution was just to enable the thread ID option and leave it empty, but only because I don’t need “memory”.

Hi guys, I’m having the same problem. When I use Simple Memory, the thread ID field is not shown and I can’t define it. But when I switch to thread ID, I can’t add a pre-built memory. I’m really not sure what to do; I’ve spent several hours on this now. I’d appreciate your help.

Solved it! I spent hours on research, and with the help of Cursor and Gemini 2.5 I re-created my workflow, and now my agent has memory again. Find attached an image of what the workflow looks like. Also: I had to implement a script in my frontend, which stores and sends a thread ID.

Here is what I did on my frontend (website with chatwindow):

1. Thread ID Storage Variable

```javascript
// A variable to hold the thread ID. This acts as the "memory" on the browser.
// It starts as null because we don't have a thread ID yet.
let currentThreadId = null;
```

2. Sending Thread ID with Requests

```javascript
// Prepare the data to send.
const bodyData = {
    message: message
};

// THIS IS THE CRITICAL LOGIC FOR FOLLOW-UP MESSAGES
// If we have stored a thread ID from a previous message, add it to the request.
if (currentThreadId) {
    bodyData.thread_id = currentThreadId;
}
```

3. Storing Thread ID from Response

```javascript
const data = await response.json();

// THIS IS THE MOST IMPORTANT STEP
// After getting a response from n8n, check if it contains a thread_id
// and save it in our variable for the next message.
if (data.thread_id) {
    currentThreadId = data.thread_id;
}
```

How it works:

  1. Initial state: currentThreadId starts as null
  2. First message: Only sends the message (no thread_id)
  3. Response handling: If the n8n response contains a thread_id, it’s stored in currentThreadId
  4. Follow-up messages: All subsequent messages include the stored thread_id in the request body
  5. Thread continuity: This allows the conversation to maintain context across multiple messages

The key insight is that the thread ID acts as a “memory” on the client side, persisting the conversation context between requests to your n8n webhook.
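Putting the three snippets together, the whole client-side flow might look like this; `WEBHOOK_URL` is a placeholder and the `{ reply, thread_id }` response shape is an assumption about what the n8n webhook returns:

```javascript
// Sketch of the full client-side flow: send a message, carry the thread_id forward.
// WEBHOOK_URL is a placeholder; the response shape { reply, thread_id } is assumed.
const WEBHOOK_URL = "https://example.com/webhook/chat";
let threadId = null;

function buildBody(message, currentThreadId) {
  const bodyData = { message };
  // First message goes out without a thread_id; follow-ups include the stored one.
  if (currentThreadId) bodyData.thread_id = currentThreadId;
  return bodyData;
}

async function sendMessage(message, fetchImpl = fetch) {
  const response = await fetchImpl(WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildBody(message, threadId)),
  });
  const data = await response.json();
  if (data.thread_id) threadId = data.thread_id; // remember for follow-ups
  return data.reply;
}
```

With this shape, the browser tab holds the conversation state; refreshing the page resets `threadId` and starts a fresh thread.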

hope this helps!


The same happened to me, even though the workflow was working perfectly before. So it’s either some new change in OpenAI Assistants or in the n8n nodes. I need to use Simple Memory as well, and so far no solution is working for me.
Please, can anyone help me?

This is the fastest way I found to solve the problem:

Using Redis as thread memory, I did the following.
If “get thread_id” (keyed, for instance, on the lead’s phone number) returns null, the assistant creates one. A checkpoint after the assistant decides whether the thread_id is new (by checking the answer of “get thread_id”) and stores it if necessary.
If you have any doubt, just let me know.
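A sketch of that get-or-create pattern, with a plain `Map` standing in for Redis so the example runs self-contained; the `thread:<phone>` key scheme and the `createThread` callback are assumptions:

```javascript
// Sketch of the Redis get-or-create pattern described above. A Map stands in
// for Redis here; in n8n this maps to Redis GET / SET nodes around the assistant.
const store = new Map(); // stand-in for Redis

async function getOrCreateThreadId(phone, createThread) {
  const key = `thread:${phone}`;
  const existing = store.get(key);       // Redis: GET thread:<phone>
  if (existing) return existing;         // reuse the stored thread
  const threadId = await createThread(); // e.g. POST /v1/threads
  store.set(key, threadId);              // Redis: SET thread:<phone> <id> (the "checkpoint")
  return threadId;
}
```

The second lookup for the same phone number returns the stored ID without creating a new thread, which is exactly what keeps the conversation context intact.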