Why does “Continue (using error output)” on OpenAI v2 not pass the full error body to the next node?

Describe the problem/error/question

I’m using the OpenAI v2 node with On Error → Continue (using error output).
In the execution UI I can see the detailed error under Error details → From OpenAI as:

{
  "errorMessage": "Bad request - please check your parameters",
  "errorDescription": "Your input exceeds the context window of this model. Please adjust your input and try again.",
  "errorDetails": {
    "rawErrorMessage": [
      "400 - {\"error\":{\"message\":\"Your input exceeds the context window of this model. Please adjust your input and try again.\",\"type\":\"invalid_request_error\",\"param\":\"input\",\"code\":\"context_length_exceeded\"}}"
    ],
    "httpCode": "400"
  }
}

However, what actually reaches the next node as {{ $json }} in my workflow is only:

{
  "error": "Bad request - please check your parameters"
}

There are no other fields like errorMessage, errorDescription, errorDetails, or the original OpenAI error object. This means I can only do:

{{ $json.error === 'Bad request - please check your parameters' }}

but I cannot check for the OpenAI error code (for example context_length_exceeded), because that information is not present in the item that is passed on.

I would like to have the full OpenAI error body (including the code field) available in the error item that is sent to the next node, so that I can do something like:

{{ $json.errorDetails.rawErrorMessage.code === 'context_length_exceeded' }}

Is there a way to have the full OpenAI error body (including the code field) available in $json of the error item, instead of just a single error string?

Share the output returned by the last node

{
  "errorMessage": "Bad request - please check your parameters",
  "errorDescription": "Your input exceeds the context window of this model. Please adjust your input and try again.",
  "errorDetails": {
    "rawErrorMessage": [
      "400 - {\"error\":{\"message\":\"Your input exceeds the context window of this model. Please adjust your input and try again.\",\"type\":\"invalid_request_error\",\"param\":\"input\",\"code\":\"context_length_exceeded\"}}"
    ],
    "httpCode": "400"
  },
  "n8nDetails": {
    "nodeName": "Text1",
    "nodeType": "@n8n/n8n-nodes-langchain.openAi",
    "nodeVersion": 2,
    "resource": "text",
    "operation": "response",
    "itemIndex": 0,
    "time": "17-11-2025, 11:21:32",
    "n8nVersion": "1.119.2 (Self Hosted)",
    "binaryDataMode": "default",
    "stackTrace": [
      "NodeApiError: Bad request - please check your parameters",
      "    at ExecuteContext.requestWithAuthentication (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_08b575bec2313d5d8a4cc75358971443/node_modules/n8n-core/src/execution-engine/node-execution-context/utils/request-helper-functions.ts:1498:10)",
      "    at processTicksAndRejections (node:internal/process/task_queues:105:5)",
      "    at ExecuteContext.requestWithAuthentication (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_08b575bec2313d5d8a4cc75358971443/node_modules/n8n-core/src/execution-engine/node-execution-context/utils/request-helper-functions.ts:1798:11)",
      "    at ExecuteContext.apiRequest (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@n8n+n8n-nodes-langchain@file+packages+@n8n+nodes-langchain_ec7fbe0da3d2dc5c86e61be805f9ba74/node_modules/@n8n/n8n-nodes-langchain/nodes/vendors/OpenAi/transport/index.ts:56:9)",
      "    at ExecuteContext.execute (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@n8n+n8n-nodes-langchain@file+packages+@n8n+nodes-langchain_ec7fbe0da3d2dc5c86e61be805f9ba74/node_modules/@n8n/n8n-nodes-langchain/nodes/vendors/OpenAi/v2/actions/text/response.operation.ts:607:18)",
      "    at ExecuteContext.router (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@n8n+n8n-nodes-langchain@file+packages+@n8n+nodes-langchain_ec7fbe0da3d2dc5c86e61be805f9ba74/node_modules/@n8n/n8n-nodes-langchain/nodes/vendors/OpenAi/v2/actions/router.ts:58:25)",
      "    at ExecuteContext.execute (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@n8n+n8n-nodes-langchain@file+packages+@n8n+nodes-langchain_ec7fbe0da3d2dc5c86e61be805f9ba74/node_modules/@n8n/n8n-nodes-langchain/nodes/vendors/OpenAi/v2/OpenAiV2.node.ts:89:10)",
      "    at WorkflowExecute.executeNode (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_08b575bec2313d5d8a4cc75358971443/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1093:8)",
      "    at WorkflowExecute.runNode (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_08b575bec2313d5d8a4cc75358971443/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1274:11)",
      "    at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_08b575bec2313d5d8a4cc75358971443/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1708:27"
    ]
  }
}

Information on your n8n setup

  • n8n version: 1.119.2 (Self Hosted)
  • Running n8n via: Docker

Hello @Jelle

In this case, you’ll probably want to use On Error → Continue, then use an IF/Switch node to check whether an error occurred and parse the complete error message afterward.

Hello @mohamed3nan,

Both On Error → Continue and On Error → Continue (using error output) give the same minimal output:

[
  {
    "error": "Bad request - please check your parameters"
  }
]

Hi @Jelle

Yes, indeed. I thought this would work as well, until I tested it myself.

As far as I can tell, there are two possible workarounds right now:

1. Use the HTTP Request node to call the OpenAI API, enable the “Never Error” option, then parse the response as described here:

2. Continue using the OpenAI node but use it in a sub-workflow, and set its On Error setting to “Stop Workflow”:

Then, in the main workflow, set the Execute Sub-workflow node's On Error → Continue (using error output), get the execution ID of the sub-workflow, use it to fetch the execution data, and extract the full error object so that you can parse it as needed.
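Whichever workaround you use, you still have to pull the code field out of the raw error string once you have the full error object. A minimal sketch for an n8n Code node, assuming the `errorDetails.rawErrorMessage` shape shown in the output above (the function name is my own):

```javascript
// Extract the OpenAI error code from an error item like the one posted above.
// rawErrorMessage is an array of strings of the form:
//   400 - {"error":{"message":"...","code":"context_length_exceeded", ...}}
function extractOpenAiErrorCode(errorItem) {
  const raw = (errorItem
    && errorItem.errorDetails
    && errorItem.errorDetails.rawErrorMessage
    && errorItem.errorDetails.rawErrorMessage[0]) || '';

  // Strip the leading "400 - " prefix by finding the first "{"
  const jsonStart = raw.indexOf('{');
  if (jsonStart === -1) return null;

  try {
    const body = JSON.parse(raw.slice(jsonStart));
    return (body.error && body.error.code) || null;
  } catch (e) {
    return null; // raw string was not valid JSON
  }
}

// In a Code node you could then route on the result, e.g.:
// return items.map(item => ({
//   json: { ...item.json, openAiErrorCode: extractOpenAiErrorCode(item.json) }
// }));
```

With that in place, a downstream IF node can check `{{ $json.openAiErrorCode === 'context_length_exceeded' }}` instead of matching the generic error string.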

Here is an example:

I know it seems like a complicated workaround, but this is what came to mind until there is a simpler built-in feature for this.