Hello.
I have enabled the Responses API toggle in the OpenAI node, expecting it to automatically keep track of the conversation context, but every request seems to be processed on its own regardless.
The node documentation is very opaque about how conversation_id is meant to be handled. After poking around, I suspect I am expected to request the id from the OpenAI API first and manage the conversation context by hand. Can anybody confirm before I go in and start building the thing?
Can I see what requests the node actually performs? It certainly would be useful when setting up flows for complex APIs, but I can’t seem to find any way to do so.
Hey @Rpahut, welcome! Good question - the Responses API toggle in the OpenAI node is a bit confusing at first.
Your suspicion is correct: you do need to manage the previous_response_id yourself. The toggle doesn’t auto-chain conversations behind the scenes. Here’s roughly how it works:
- Make the first request normally
- From the response, grab the id field (that’s your response_id)
- On the next request, pass that id as previous_response_id in the additional parameters
- Store it somewhere between executions (a Google Sheets cell, a database, or even a static data node) if your conversation spans multiple workflow runs
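The chaining described above is easy to get wrong, so here’s a minimal sketch of the request payloads as plain Python dicts (field names follow the public Responses API; no network calls are made, and the ids shown are made up):

```python
# Sketch of chaining Responses API calls via previous_response_id.
# These functions only build the JSON bodies you would POST to
# https://api.openai.com/v1/responses.

def first_request(user_text):
    # First turn: no previous_response_id yet
    return {"model": "gpt-4o", "input": user_text}

def follow_up_request(user_text, previous_response_id):
    # Later turns: pass the id returned by the previous response
    return {
        "model": "gpt-4o",
        "input": user_text,
        "previous_response_id": previous_response_id,
    }

# Simulated first response (the real one comes back from the API)
fake_response = {"id": "resp_abc123", "output": []}

# Store fake_response["id"] between executions, then build the next turn:
payload = follow_up_request("And what about step two?", fake_response["id"])
```

In an n8n expression you’d do the same thing by referencing the id field of the previous OpenAI node’s output.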
For the request inspection question - you can’t see raw requests directly in the node UI, but you can use a proxy like ngrok + a local HTTP inspector, or just check the OpenAI API logs in your dashboard. Another option is to use the HTTP Request node instead of the OpenAI node - gives you full visibility and control.
Hope that helps! Let me know if you run into more issues.
@Rpahut
Conversation ID: The node handles it automatically. Leave it blank for the first message, capture the returned conversation_id from the output, then pass it back into the node’s “Conversation ID” field for subsequent messages. You don’t need to request it manually from the API.
Inspecting requests: The native node doesn’t show raw API calls. Swap it for an HTTP Request node (POST to https://api.openai.com/v1/responses) to see exactly what’s being sent and received. Once you understand the payload, you can switch back if you prefer.
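If you go the HTTP Request node route, this is roughly the body you’d put in it (a sketch: field names follow the public Responses API, and the conversation id value here is made up):

```python
import json

# Sketch of the JSON body an HTTP Request node would POST to
# https://api.openai.com/v1/responses when reusing a stored conversation.

body = {
    "model": "gpt-4o",
    "input": "What did I just ask you?",
    "conversation": "conv_123",  # ties this call to a stored conversation
}

# This is what you'd paste into the node's JSON body field:
print(json.dumps(body, indent=2))
```

Seeing the raw body like this also makes it obvious what the native node is (or isn’t) sending on your behalf.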
Note: Feed the ID back in yourself. Use the HTTP Request node to debug.
That sounds a lot like what AIs have been telling me for the past couple of days.
You’re not wrong, @Rpahut. I’ll test the actual solution and get back.
Turns out the Responses API has a separate set of endpoints for managing conversations, and there is a matching set of nodes in n8n. Kind of odd that it is not integrated or automated even though the API toggle is there, but once you know that, creating and referencing a conversation is doable.
Specifically, I enabled the Conversation ID option in the OpenAI Chat Model node and am putting the id into it. The id is also stored in a Data Table so I can later restore it from my internal user id. I also ended up using the HTTP Request node for creating the conversation, as the OpenAI API implementation I’m communicating with is way too picky and I couldn’t configure the dedicated node to work with it.
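For anyone landing here later, the two-step flow above can be sketched as payload builders (a sketch, assuming the public Conversations API endpoints; the ids are made up, and in n8n each POST would live in its own HTTP Request node):

```python
# Sketch of the two-step flow: create a conversation once, then
# reference its id on every subsequent Responses API call.

CREATE_URL = "https://api.openai.com/v1/conversations"
RESPONSES_URL = "https://api.openai.com/v1/responses"

def create_conversation_payload(metadata=None):
    # POST this to CREATE_URL; the response contains an "id" like "conv_..."
    return {"metadata": metadata or {}}

def chat_payload(conversation_id, user_text):
    # POST this to RESPONSES_URL for every turn in that conversation
    return {
        "model": "gpt-4o",
        "input": user_text,
        "conversation": conversation_id,
    }

# Store the conversation id (e.g. in a Data Table keyed by your internal
# user id), then look it up on later workflow runs:
payload = chat_payload("conv_abc123", "Hello again")
```

The Data Table lookup replaces the automatic chaining the toggle seems to promise but doesn’t deliver.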