Support for OpenAI’s new /v1/responses endpoint in n8n

OpenAI has introduced a new Responses API (/v1/responses) that unifies chat and assistant capabilities and allows direct file inputs (e.g. PDFs) to the model. In practice, you first upload a file (PDF, image, etc.) with the Files API using purpose="user_data", then call /v1/responses with the model and an input that includes an input_file reference alongside the prompt text. For example, OpenAI’s documentation shows:

curl https://api.openai.com/v1/files \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -F purpose="user_data" \
  -F file="@document.pdf"

curl https://api.openai.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4.1",
    "input": [
      {
        "role": "user",
        "content": [
          { "type": "input_file", "file_id": "file-XXXXXXXXXXXXXXXX" },
          { "type": "input_text", "text": "Summarize the content of this document." }
        ]
      }
    ]
}'

This example (adapted from OpenAI’s PDF guide) shows uploading document.pdf with purpose="user_data" and then querying it via /v1/responses. Notably, the docs recommend using the user_data purpose for files that will be fed as model inputs, since it tells OpenAI that the file is user-provided data to reference (rather than a fine-tuning file or assistant configuration). The new Responses endpoint handles all the “plumbing” (chat turns, image/text fusion, etc.), so a single API call is enough to ask questions about the file’s content.

Limitations of n8n’s current OpenAI integration

The built‑in OpenAI File node in n8n does not yet support the user_data purpose, nor is there an operation for /v1/responses. The official n8n docs (as of mid-2025) show that the file operations have a Purpose dropdown with only Assistants or Fine-Tune as options. For example, the Upload-a-File operation states: “Use Assistants for files associated with Assistants and Message operations. Use Fine-Tune for Fine-tuning.” There is no mention of user_data, so you cannot currently select it in n8n’s UI. Likewise, there is no “Response” or “Create Response” action in the OpenAI node that would call the new /v1/responses endpoint.

In practice, this means n8n users must work around the limitation. Some have resorted to using an HTTP Request node manually pointed at https://api.openai.com/v1/responses. For instance, one user reported needing to hand‑configure the HTTP headers (Authorization: Bearer YOUR_API_KEY) and JSON body to call /v1/responses from n8n, which is cumbersome. (Setting up the Bearer token auth properly was non‑trivial, as shown in the community forum discussion.)

Proposed changes to n8n’s OpenAI nodes

To fully support the new workflow, n8n should be updated in two ways:

  • Add user_data to the File node’s purposes. In OpenAi.node.ts (or the corresponding file-operations code), include user_data as a valid Purpose value for uploads. This aligns with OpenAI’s docs, which explicitly say to use purpose="user_data" when the file is going to be input to the model. With this change, users could upload PDFs (or other documents) via n8n’s OpenAI File node and mark them as user_data, just as they do for assistant or fine-tune files.
  • Create a “Create Response” operation for /v1/responses. The OpenAI node should gain a new operation (perhaps under a “Response” resource) that issues POST /v1/responses. This operation would take input fields such as:
    • model – e.g. gpt-4.1 or gpt-4o-mini, a model that supports vision (required for PDF).
    • file_id – the ID of a previously uploaded file (via user_data) to reference.
    • prompt (or similar) – the user’s question or instruction about the file.
    • max_tokens, temperature, etc. – any standard completion parameters.

    The node’s logic would then assemble the JSON input array as required by the API (combining an input_file element and an input_text element in a single user message). For example, it would build something like:
{
  "model": "gpt-4.1",
  "input": [
    {
      "role": "user",
      "content": [
        { "type": "input_file",  "file_id": "<uploaded-file-id>" },
        { "type": "input_text",  "text": "<user prompt>" }
      ]
    }
  ]
}
  • Then it would call POST /v1/responses with that body. This mirrors the examples in OpenAI’s documentation. The node should also map the response JSON (the generated text, etc.) back into the workflow data.
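The body-assembly step above could look roughly like the helper below. This is a hypothetical sketch, not n8n code: the function and type names (buildResponsesBody, ResponsesBody) are invented for illustration, and note that in the actual Responses API the token-limit parameter is named max_output_tokens rather than max_tokens.

```typescript
// Shapes for the /v1/responses request body, matching the JSON example above.
interface ResponsesBody {
  model: string;
  input: Array<{
    role: "user";
    content: Array<
      | { type: "input_file"; file_id: string }
      | { type: "input_text"; text: string }
    >;
  }>;
  max_output_tokens?: number;
  temperature?: number;
}

// Assemble one user message combining an input_file and an input_text part,
// adding optional completion parameters only when the user sets them.
function buildResponsesBody(
  model: string,
  fileId: string,
  prompt: string,
  options: { maxOutputTokens?: number; temperature?: number } = {},
): ResponsesBody {
  return {
    model,
    input: [
      {
        role: "user",
        content: [
          { type: "input_file", file_id: fileId },
          { type: "input_text", text: prompt },
        ],
      },
    ],
    ...(options.maxOutputTokens !== undefined && {
      max_output_tokens: options.maxOutputTokens,
    }),
    ...(options.temperature !== undefined && {
      temperature: options.temperature,
    }),
  };
}
```

For example, buildResponsesBody("gpt-4.1", "file-123", "Summarize this.") yields exactly the JSON structure shown above, ready to be posted to /v1/responses.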

Implementing these changes would bring n8n’s OpenAI integration up to date with OpenAI’s latest API. n8n would then directly support common “document Q&A” tasks like PDF summarization or chat over documents, without requiring a complex multi-step assistant setup. As one community member noted, the new /v1/responses endpoint is extremely powerful and simplifies workflows by merging file input and chat into one call.

How to use /v1/responses in n8n (workaround)

Until native support is added, you can still use the /v1/responses endpoint by using n8n’s HTTP Request node with manual settings:

  • Set the URL to https://api.openai.com/v1/responses.
  • Under Authentication, configure a Bearer Token with your OpenAI API key (or manually add a header Authorization: Bearer YOUR_API_KEY).
  • In the Body (raw JSON), include the fields model and input as shown above. For example:
{
  "model": "gpt-4.1",
  "input": [
    {
      "role": "user",
      "content": [
        { "type": "input_file", "file_id": "{{ $json.file_id }}" },
        { "type": "input_text", "text": "{{ $json.question }}" }
      ]
    }
  ]
}
  • (Here, $json.file_id and $json.question could be data from earlier nodes – e.g. the file upload result and the prompt text.)

However, using the HTTP node requires care to format the headers correctly. As noted in the n8n community, you must use Authorization as the header name (not “Authentication”) and include the word “Bearer ” before the key. Once configured, the HTTP node can fetch responses from the API, but a built-in Responses operation would make this much simpler.
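To make the header pitfall concrete, the snippet below spells out the exact two headers the HTTP Request node must send. The helper name (openAiHeaders) is illustrative only:

```typescript
// Build the headers for a manual /v1/responses call. The header NAME must be
// "Authorization" (not "Authentication"), and the value must start with the
// literal word "Bearer " followed by a space and the API key.
function openAiHeaders(apiKey: string): Record<string, string> {
  return {
    "Content-Type": "application/json",
    Authorization: `Bearer ${apiKey}`,
  };
}
```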

Conclusion

Adding user_data as a purpose and a /v1/responses operation to n8n’s OpenAI node would align n8n with OpenAI’s official PDF/files workflow. This would enable one-step document queries, rather than the multi-step assistant workarounds previously required. Given that OpenAI now provides direct support for PDF Q&A and even structured output via /v1/responses, updating n8n accordingly would greatly simplify automation tasks like summarizing or chatting over PDF content.

Sources: OpenAI documentation and examples (the PDF guide for file inputs and responses); n8n official docs (File node options); and n8n community discussions on using /v1/responses, which together confirm the above descriptions.