Describe the problem/error/question
I’m using the OpenAI v2 node with On Error → Continue (using error output).
In the execution UI I can see the detailed error under Error details → From OpenAI as:
{
  "errorMessage": "Bad request - please check your parameters",
  "errorDescription": "Your input exceeds the context window of this model. Please adjust your input and try again.",
  "errorDetails": {
    "rawErrorMessage": [
      "400 - {\"error\":{\"message\":\"Your input exceeds the context window of this model. Please adjust your input and try again.\",\"type\":\"invalid_request_error\",\"param\":\"input\",\"code\":\"context_length_exceeded\"}}"
    ],
    "httpCode": "400"
  }
}
However, what actually reaches the next node as {{ $json }} in my workflow is only:
{
  "error": "Bad request - please check your parameters"
}
None of the other fields are present: no errorMessage, no errorDescription, no errorDetails, and no original OpenAI error object. This means the only check I can write is:
{{ $json.error === 'Bad request - please check your parameters' }}
but I cannot match on the OpenAI error code (for example context_length_exceeded), because that information never reaches the item passed on to the next node.
I would like the full OpenAI error body (including the code field) to be available in the error item that is sent to the next node, so that I can do something like:
{{ $json.errorDetails.rawErrorMessage[0].includes('context_length_exceeded') }}
or, ideally, check a parsed code field directly. Is there a way to have the full OpenAI error body (including the code field) available in $json of the error item, instead of just a single error string?
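If the full error body did reach the error output, extracting the code would just be a matter of parsing the rawErrorMessage string. A plain JavaScript sketch of what I have in mind (the input format is copied from the errorDetails above; the function name is my own):

```javascript
// Sketch: pull the OpenAI error code out of a rawErrorMessage string of the
// form "HTTP_CODE - {json body}", as shown in errorDetails above.
function extractOpenAiErrorCode(rawErrorMessage) {
  // Find the start of the embedded JSON body after the "400 - " prefix.
  const jsonStart = rawErrorMessage.indexOf('{');
  if (jsonStart === -1) return null;
  try {
    const body = JSON.parse(rawErrorMessage.slice(jsonStart));
    return body.error?.code ?? null;
  } catch {
    // Not valid JSON after the prefix.
    return null;
  }
}

const raw = '400 - {"error":{"message":"Your input exceeds the context window of this model. Please adjust your input and try again.","type":"invalid_request_error","param":"input","code":"context_length_exceeded"}}';
console.log(extractOpenAiErrorCode(raw)); // → context_length_exceeded
```

But this only works if errorDetails is actually part of the item, which is exactly what is missing today.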
Share the output returned by the last node
{
  "errorMessage": "Bad request - please check your parameters",
  "errorDescription": "Your input exceeds the context window of this model. Please adjust your input and try again.",
  "errorDetails": {
    "rawErrorMessage": [
      "400 - {\"error\":{\"message\":\"Your input exceeds the context window of this model. Please adjust your input and try again.\",\"type\":\"invalid_request_error\",\"param\":\"input\",\"code\":\"context_length_exceeded\"}}"
    ],
    "httpCode": "400"
  },
  "n8nDetails": {
    "nodeName": "Text1",
    "nodeType": "@n8n/n8n-nodes-langchain.openAi",
    "nodeVersion": 2,
    "resource": "text",
    "operation": "response",
    "itemIndex": 0,
    "time": "17-11-2025, 11:21:32",
    "n8nVersion": "1.119.2 (Self Hosted)",
    "binaryDataMode": "default",
    "stackTrace": [
      "NodeApiError: Bad request - please check your parameters",
      " at ExecuteContext.requestWithAuthentication (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_08b575bec2313d5d8a4cc75358971443/node_modules/n8n-core/src/execution-engine/node-execution-context/utils/request-helper-functions.ts:1498:10)",
      " at processTicksAndRejections (node:internal/process/task_queues:105:5)",
      " at ExecuteContext.requestWithAuthentication (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_08b575bec2313d5d8a4cc75358971443/node_modules/n8n-core/src/execution-engine/node-execution-context/utils/request-helper-functions.ts:1798:11)",
      " at ExecuteContext.apiRequest (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@n8n+n8n-nodes-langchain@file+packages+@n8n+nodes-langchain_ec7fbe0da3d2dc5c86e61be805f9ba74/node_modules/@n8n/n8n-nodes-langchain/nodes/vendors/OpenAi/transport/index.ts:56:9)",
      " at ExecuteContext.execute (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@n8n+n8n-nodes-langchain@file+packages+@n8n+nodes-langchain_ec7fbe0da3d2dc5c86e61be805f9ba74/node_modules/@n8n/n8n-nodes-langchain/nodes/vendors/OpenAi/v2/actions/text/response.operation.ts:607:18)",
      " at ExecuteContext.router (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@n8n+n8n-nodes-langchain@file+packages+@n8n+nodes-langchain_ec7fbe0da3d2dc5c86e61be805f9ba74/node_modules/@n8n/n8n-nodes-langchain/nodes/vendors/OpenAi/v2/actions/router.ts:58:25)",
      " at ExecuteContext.execute (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/@n8n+n8n-nodes-langchain@file+packages+@n8n+nodes-langchain_ec7fbe0da3d2dc5c86e61be805f9ba74/node_modules/@n8n/n8n-nodes-langchain/nodes/vendors/OpenAi/v2/OpenAiV2.node.ts:89:10)",
      " at WorkflowExecute.executeNode (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_08b575bec2313d5d8a4cc75358971443/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1093:8)",
      " at WorkflowExecute.runNode (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_08b575bec2313d5d8a4cc75358971443/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1274:11)",
      " at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@[email protected]_08b575bec2313d5d8a4cc75358971443/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1708:27"
    ]
  }
}
Information on your n8n setup
- n8n version: 1.119.2 (Self Hosted)
- Running n8n via: Docker
