How to handle AI agent tool errors

Hi community! Thanks in advance for your help!

Describe the problem/error/question

It is about error handling in an AI agent when one of its tools (here, an HTTP Request tool) does not behave as expected. I made a very simple workflow to illustrate the situation (see below).

I’m using an AI agent that has an HTTP Request tool.

When the agent uses the tool and the request fails (403 Forbidden, 404 Not Found, or 500 Server Error), the workflow stops. If you look at the execution, the error is clear; if you click on the tool, the error can be read in the output section.

My problem is that I have found no way to continue the execution…
Since the workflow is initiated by a user via a webhook, I would like at least to respond with a technical error message. Currently, the response sent to the client calling the workflow is HTTP 200 with an empty body.

Here is what I tried:

  • In the settings of the AI agent, set “On error” to “Continue” or “Continue with error output” → the workflow still stops abruptly.
  • In the workflow settings, configure an “Error Workflow” → it is never executed.
  • Add guidance on handling errors to the AI agent’s “Options → System Message” → the workflow still stops abruptly.

It should be doable; I must be missing something…
A bad solution would be to rewrite the service the tool calls so that it always answers HTTP 200 and provides an error code and message in the body, but that is ugly and would not cover every situation (the service being down, for example).
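For illustration, that error-envelope workaround would look roughly like this. This is only a sketch in plain JavaScript; `toEnvelope` and its field names are hypothetical, not part of any real service:

```javascript
// Hypothetical "always HTTP 200" envelope: failures are signalled in the
// response body instead of the status code. Field names are made up.
function toEnvelope(result) {
  if (result.ok) {
    return { httpStatus: 200, body: { status: 'ok', data: result.data } };
  }
  // The transport still says 200; only the body reveals the failure.
  return {
    httpStatus: 200,
    body: { status: 'error', code: result.code, message: result.message },
  };
}
```

As noted, this hides real transport failures (a service that is down never gets to build this body), which is why it is only a last resort.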

I searched the internet, went through the documentation, and asked Mistral about it, and I haven’t found a good answer. If anyone has an idea, I would happily try it!

Thanks !

What is the error message (if any)?

The connection cannot be established, this usually occurs due to an incorrect host (domain) value

Please share your workflow

Share the output returned by the last node

{
  "errorMessage": "The connection cannot be established, this usually occurs due to an incorrect host (domain) value",
  "errorDetails": {
    "rawErrorMessage": [
      "getaddrinfo ENOTFOUND unknown_address_in_th_web.com",
      "getaddrinfo ENOTFOUND unknown_address_in_th_web.com"
    ],
    "httpCode": "ENOTFOUND"
  },
  "n8nDetails": {
    "nodeName": "Find a friend",
    "nodeType": "n8n-nodes-base.httpRequestTool",
    "nodeVersion": 4.4,
    "itemIndex": 0,
    "time": "02/04/2026 00:35:18",
    "n8nVersion": "2.13.3 (Cloud)",
    "binaryDataMode": "filesystem",
    "stackTrace": [
      "NodeApiError: The connection cannot be established, this usually occurs due to an incorrect host (domain) value",
      "    at ExecuteContext.execute (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-nodes-base@file+packages+nodes-base_@[email protected]_asn1.js@5_8da18263ca0574b0db58d4fefd8173ce/node_modules/n8n-nodes-base/nodes/HttpRequest/V3/HttpRequestV3.node.ts:809:16)",
      "    at processTicksAndRejections (node:internal/process/task_queues:103:5)",
      "    at WorkflowExecute.executeNode (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@opentelemetry+exporter-trace-otlp_9f358c3eeaef0d2736f54ac9757ada43/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1043:8)",
      "    at WorkflowExecute.runNode (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@opentelemetry+exporter-trace-otlp_9f358c3eeaef0d2736f54ac9757ada43/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1222:11)",
      "    at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@opentelemetry+exporter-trace-otlp_9f358c3eeaef0d2736f54ac9757ada43/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:1668:27",
      "    at /usr/local/lib/node_modules/n8n/node_modules/.pnpm/n8n-core@file+packages+core_@[email protected]_@opentelemetry+exporter-trace-otlp_9f358c3eeaef0d2736f54ac9757ada43/node_modules/n8n-core/src/execution-engine/workflow-execute.ts:2313:11"
    ]
  }
}

Information on your n8n setup

  • n8n version: 2.13.3
  • Database (default: SQLite): None
  • n8n EXECUTIONS_PROCESS setting (default: own, main): Unknown
  • Running n8n via (Docker, npm, n8n cloud, desktop app): n8n cloud
  • Operating system: Windows 11

I also tried the “Response → Never Error” option on the HTTP Request tool, but it behaves the same…

You can try instructing the AI Agent to ignore all 400, 403, 404, and 500 errors generated by the “Find a friend” tool.

Make sure your HTTP Request tool contains a valid URL and is not behind a paywall.

Welcome to the n8n community @Pierre_Haderer
What I would recommend is moving that HTTP call outside the Agent, handling the failure in the main workflow with Continue On Fail, and only then passing the result into the Agent. That way, even DNS or connection failures become regular workflow data that you can return through Respond to Webhook.

This is actually a known bug in how AI Agent tool nodes handle errors: the “Continue on error” settings on the Agent don’t catch errors thrown by sub-nodes like tools. The workaround is to wrap your HTTP call in a separate sub-workflow and use an “Execute Workflow Tool” instead of the HTTP Request Tool directly. That way the sub-workflow catches the error with Continue On Fail on the HTTP node and returns a clean response to the agent instead of crashing everything.

So you’d make a small workflow like this that your agent calls as a tool:

Then in your main workflow, replace the HTTP Request Tool with a “Call n8n Workflow Tool” node that points to that sub-workflow. The agent still gets to decide when to call the tool, but errors come back as data instead of blowing up the whole execution. There’s an open GitHub issue tracking this so hopefully it gets a proper fix soon.
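To make the idea concrete, the last node of that wrapper sub-workflow only has to reshape both outcomes into one predictable object. A minimal sketch in plain JavaScript, assuming the HTTP node runs with Continue On Fail so a failed call arrives as an item carrying an `error` property (the `status`/`message` field names are my own choice, not an n8n convention):

```javascript
// Sketch of the normalization step at the end of the wrapper sub-workflow.
// With "Continue On Fail", a failed HTTP call becomes an item with an
// `error` object instead of aborting the execution.
function normalize(item) {
  if (item.error) {
    // Failed call: hand the agent readable error data instead of a crash.
    return { status: 'error', message: item.error.message || 'request failed' };
  }
  // Successful call: pass the payload through unchanged.
  return { status: 'ok', data: item };
}
```

Because both branches produce the same shape, the agent can always inspect `status` and decide what to tell the user.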

Thanks for the suggestion @kjooleng, I tried your proposition with the following prompt:
“If the tool returns an HTTP error (403, 404, 500), simply answer that there are no friends available.”
And I configured the URL of a real service that requires authentication, so the call gets a 403 error.
The behaviour is the same: the workflow stops abruptly.
It did not work…

Thanks for the suggestion @tamy.santos, but I cannot execute the HTTP request before the agent. In my specific situation the agent handles a calendar and has tools to get, create, update, and delete events.
I need an agent to decide what to do and which request should be done.

Thanks @achamm for your ideas.
Yes, I thought about this workaround to manage the error, and I also think it should work.
But I was really hoping for a better solution, since I don’t think I’m doing anything crazy.
As much as possible I would have liked to keep all my logic in a single workflow.
Also, I haven’t tried sub-workflow tools yet, or how to send them parameters constructed by an AI agent. I suppose it should work fine. I will try and share my results here.

If there is a known issue on that, maybe there is an issue ID that could be mentioned here. Do you have it?


Enable “Continue On Fail” in the HTTP Request Tool

If I am understanding your problem correctly, you can enable “Continue On Fail” on the HTTP Request tool, and when the error happens the execution will still flow, but the node will give a JSON output like "error": { "statusCode": 404, …

You can add an IF node to detect that the error happened and send a response to the user.

Or you can use a global error workflow that sends you a message as a response (I personally get an email).

Here is the to-the-point solution for your n8n AI Agent error handling issue:

The Solution: Node-Level Error Handling

The reason your workflow stops abruptly is that the AI Agent treats a Tool’s 400/500 error as a fatal execution stop. You need to handle the error inside the Tool node settings, not the Agent’s global settings.

1. Enable ‘Ignore Errors’ on the HTTP Tool

  • Open your HTTP Request Tool node.

  • Go to the Settings tab.

  • Toggle ‘Ignore Errors’ to On.

  • Why? This prevents the node from crashing the workflow. Instead, it passes the error data (403, 404, 500) as an output that the Agent can actually ‘read’ and process.

2. Update Agent System Instructions

Add this specific instruction to your AI Agent’s System Message:

‘If a tool returns an error status code or a failure message, do not stop execution. Instead, analyze the error and explain the technical issue to the user in the response.’

Why your previous attempts failed:

  • On Error (Continue): This applies to the Agent’s own logic, not the internal crash of a connected Tool.

  • Error Workflow: This only triggers if the entire execution fails globally, but Agents often ‘hang’ on tool errors before the global error handler can catch them.

Result: Now, instead of an empty HTTP 200 response, the Agent will catch the 403/500 error from the tool and describe it to the user.

Yeah the sub-workflow approach is the way to go for now, for passing parameters from the agent just set up input fields in the sub-workflow and the Execute Workflow Tool will let the agent fill them in automatically. Pretty painless once you set it up.

Hi, the option “Continue On Fail” does not exist on the HTTP Request Tool, only on the classic HTTP Request node. Sadly.

Hi, thanks @Muhammad_Uzair_AI, but there is no such option to “Toggle ‘Ignore Errors’ to On”.

The only settings I have on this tool are:

  • “SSL Certificates” On/Off
  • “Notes” (textarea) and “Display Note in Flow?” On/Off

I’m using HTTP Request Tool node version 4.4 (latest); maybe you have an older version?

This one is actually not your fault; you just hit how the AI agent works in n8n right now.

The problem is not the HTTP request itself. It’s the agent.

When the tool fails (403, 404, ENOTFOUND, etc.), the agent treats it as a hard error and stops everything. That’s why:

  • “continue on error” doesn’t work

  • error workflow doesn’t run

  • system message doesn’t help

The error happens inside the agent, so it never reaches the mechanisms that handle failures in normal nodes.


What you can do (best fix):

Don’t let the agent call the HTTP tool directly

Do it like this instead:

  1. Let the agent only decide what to do (no HTTP tool)

  2. Pass the data to a normal HTTP Request node

  3. Turn on “continue on fail”

  4. Use an IF node to check if there is an error

Now your workflow won’t stop, and you can still send a response back


For your webhook response:

At the end, add a response like this

  • if success → return data

  • if error → return message

Example:

  • success → { status: "ok" }

  • error → { status: "error", message: "service failed" }


small tip:

In n8n right now, it’s better to use
agent = thinking
workflow = execution + error handling


Great explanation from @Blessing - that’s exactly the root cause.

One more approach I’ve used in production that complements this: use a Code node as your HTTP wrapper inside the agent tool instead of the native HTTP Request tool. This gives you full try/catch control.

Example Code node (JavaScript):

try {
  // n8n's built-in request helper: failures surface as catchable exceptions
  const response = await this.helpers.httpRequest({
    method: 'GET',
    url: $input.item.json.url,
  });
  return [{ json: { status: 'ok', data: response } }];
} catch (error) {
  // Turn the failure into regular data the agent can reason about
  return [{ json: { status: 'error', message: error.message, code: error.httpCode || 'UNKNOWN' } }];
}

The agent receives { status: 'error', message: '...' } instead of a hard stop, so it can reason about the failure and still send a graceful reply to the user.

I’ve been running this pattern in a FB Messenger chatbot in production for months - the agent handles API failures cleanly without ever leaving the user hanging. Works great for external APIs where you can’t control the uptime.


Hi @nguyenthieutoan,

you’re right: if I used code, I could handle the errors myself and avoid the abrupt stop.
But once again, it would make it very difficult to configure the inputs of the request,
especially to give a description of each input so that the agent can fill them with the appropriate values.

Edit:
The suggestion below could be a nice way to describe the input to the agent.
Thanks @nguyenthieutoan for the quick response.

Good point @Pierre_Haderer - the input description challenge is real, but there’s actually a clean way to handle it.

When using a Code node as the tool, the agent still passes parameters via the tool call. The trick is to describe the expected inputs clearly in the tool’s description field (the “Description” you set on the Code node tool). The agent reads that description to know what to pass.

Your Code node then receives them via $input.item.json:

// Tool description you set: "Fetch friend data. Input: { friendId: string, includeStatus: boolean }"
const { friendId, includeStatus } = $input.item.json;

try {
  const response = await this.helpers.httpRequest({
    method: 'GET',
    url: `https://your-api.com/friends/${friendId}`,
    qs: { includeStatus }
  });
  return [{ json: { status: 'ok', data: response } }];
} catch (error) {
  return [{ json: { status: 'error', message: error.message, code: error.httpCode || 'UNKNOWN' } }];
}

The agent is quite good at filling parameters correctly when the description is clear. I’ve been using this pattern for months in production - the key is writing a precise tool description so the LLM knows exactly what format to send.

Hope that helps resolve the input config concern!

I finished trying to use sub-workflows as tools, and it is not an appropriate solution for this situation.
As for Code tools, I cannot provide a description for each input; the only way is to provide all the information for the agent in the main prompt, and it becomes a bit messy.

BUT I think I finally found something that I’m satisfied with.
The solution I implemented was to:

  • Move the whole agent and all its tools into a sub-workflow
  • Call that sub-workflow with an Execute Workflow node configured with “On error”: “Continue”

This way:

  • I was able to keep the HTTP Request tools with all the nice features they have to guide the AI agent.
  • I was able to catch errors from the agent’s execution in the main workflow.
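For anyone copying this pattern: after the sub-workflow call with “On error”: “Continue”, the main workflow just has to branch on whether the sub-workflow errored before responding to the webhook. A rough sketch of that decision in plain JavaScript (the field names `output`, `statusCode`, and `body` are hypothetical; adapt them to whatever your sub-workflow and Respond to Webhook node actually use):

```javascript
// Hypothetical post-check after calling the agent sub-workflow with
// "On error": "Continue": map the outcome to the webhook reply.
function webhookReply(item) {
  if (item.error) {
    // The agent (or one of its tools) failed: answer the caller with a
    // technical error instead of an empty HTTP 200.
    return { statusCode: 502, body: { status: 'error', message: item.error.message } };
  }
  // Normal case: forward the agent's answer.
  return { statusCode: 200, body: { status: 'ok', output: item.output } };
}
```

The important part is that even a DNS failure like ENOTFOUND now arrives here as data, so the caller always gets a meaningful response.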

Thanks for your help everyone, it really helps to brainstorm on the subject and build from your suggestions. See you next time.


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.