OpenAI Assistant chat with Thread

Hey, so I’ve got this workflow where questions from a chat are sent to my OpenAI assistant, which then does a search in Pinecone for some data, etc. That’s all good, but I’m having an issue with the HTTP requests to OpenAI.

I wanted it to always use the same thread to keep the conversation context for each lead that sends stuff in the chat, but it seems like it's mixing up the conversations. The JSON that arrives at the first OpenAI node doesn't have a sessionId: it's a POST from my backend, which doesn't generate one. Could that be the problem? Do I need both the threadId and the sessionId? I don't get the difference. Can someone shed some light on this?
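For context, what I'm trying to achieve is roughly this: one thread per lead, reused across every message from that lead. A minimal Python sketch of the mapping logic (the `lead_id` would come from my backend; `create_thread` is a stand-in for the real OpenAI call, and the dict store is only for illustration):

```python
# Sketch: keep one conversation thread per lead so contexts never mix.
# create_thread() stands in for the real OpenAI SDK call
# (client.beta.threads.create()); here it just mints a unique id
# so the mapping logic can be shown without network access.

import itertools

_counter = itertools.count(1)

def create_thread() -> str:
    """Stand-in for client.beta.threads.create().id."""
    return f"thread_{next(_counter)}"

def get_or_create_thread(lead_id: str, store: dict) -> str:
    """Return the thread id for this lead, creating one on first contact.

    `store` must be persistent in a real workflow (database, workflow
    static data, ...); an in-memory dict only survives one process.
    """
    if lead_id not in store:
        store[lead_id] = create_thread()
    return store[lead_id]

store = {}
t1 = get_or_create_thread("lead_42", store)
t2 = get_or_create_thread("lead_42", store)  # same lead -> same thread
t3 = get_or_create_thread("lead_99", store)  # different lead -> new thread
```

The key point is that the lookup key must be something stable per lead (phone number, backend ID), not something regenerated per request.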


It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

I'm dealing with the same issue. Did you manage to solve it somehow? Did you find anything that improves thread management?

Also want to figure this out. Any progress?

Also curious why you are using a Pinecone vector store instead of handling that through OpenAI directly. If you log into their platform, you can upload documents there directly. I've found the results to be pretty good for this.


Initially, I addressed the issue by having my backend assign a unique ID to each phone number, so each client could be identified by that ID and the conversation context maintained with a memory buffer. Later, n8n updates added automatic inclusion of the session_id, which let me store chat histories, first in the memory buffer and now with a PostgreSQL node. This has worked really well.
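If it helps anyone: the unique ID doesn't need a stored mapping at all; it can be derived deterministically from the phone number, so the same lead always gets the same sessionId. A rough sketch (hypothetical helper, not actual n8n code):

```python
import hashlib

def session_id_for_phone(phone: str) -> str:
    """Derive a stable sessionId from a phone number.

    Normalizing first (digits only) means "+55 11 91234-5678" and
    "5511912345678" map to the same session. Hashing avoids using
    the raw phone number as a key in logs or the memory store.
    """
    digits = "".join(ch for ch in phone if ch.isdigit())
    return hashlib.sha256(digits.encode()).hexdigest()[:16]

a = session_id_for_phone("+55 11 91234-5678")
b = session_id_for_phone("5511912345678")  # same number, same sessionId
```

Truncating the digest to 16 hex characters is just for readability; use the full digest if you're worried about collisions at scale.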
