Node error not detected

Describe the problem/error/question

I’m trying to intercept an error thrown by a node. I selected the option “Continue (using error output)”, and I also tried to read the error property inside an IF node, but despite the error thrown by the Redis node (the server was simply not running), I can’t intercept and recognize the error.

I don’t know if it’s a bug or the expected behaviour, but I’d expect the Redis node to either:

  • Take the error branch on failure
  • Pass an error property on the successful branch, containing the relevant message
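
For example (purely illustrative; the field name and item shape are my assumption), an item like this arriving on either branch would be enough to branch on in an IF node:

   [
      {
         "error": "Redis connection to localhost:6379 failed - connect ECONNREFUSED 127.0.0.1:6379"
      }
   ]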

Please share your workflow

In the above workflow, the flow always reaches the OK node despite the error thrown by the Redis node.

Information on your n8n setup

  • n8n version: 1.29.1
  • Database (default: SQLite): default
  • n8n EXECUTIONS_PROCESS setting (default: own, main): own
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Linux (Synology NAS)

Can you share what the input JSON going into the IF node looks like? You should be able to see it in the execution history.

Is your Redis node set to “Always Output Data” in the node settings?

The first thing I thought is that your IF node is set to pass if {{ $input.error }} is empty under the string comparisons. I’m a couple of weeks behind 1.29.1, but I don’t even have that option. I would use the “does not exist” string comparison so it goes to false when {{ $input.error }} does actually exist.
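
As a sketch (assuming the error actually arrived as a field on the input item), the condition would look something like:

   Value 1:   {{ $input.error }}
   Operation: does not exist

so the true branch means no error and the false branch means an error is present.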

Hi Liam, that’s the point: it seems I don’t have an input on the IF node. Or rather, it seems that $input is just an empty object, with no error or anything else, so I don’t have a string/object I can apply a comparison to.

Got it, just reproduced it. It’s definitely a bug.

I checked it out on GitHub and the Redis node is missing the code used for continuing on error, so it isn’t passing any of the data but it is still continuing.

I submitted a GitHub issue. This should be really easy to fix, so I think they will have it out in the next patch or the one after.

Workaround

For now, as a workaround, you can use an Error Trigger in another workflow and then add an Execute Workflow Trigger right after the Redis node. To make that work you will need to set the Redis node’s “On Error” setting to “Stop Workflow”. You will also need to go into the workflow settings and set the “Error Workflow” option to the error workflow you want to run.

See below for an example

Error workflow

Main Workflow

Then the Execute Workflow node will give you a nice error object, just like this:

{
   "execution":{
      ...
      "error":{
         "errno":-111,
         "code":"ECONNREFUSED",
         "syscall":"connect",
         "address":"127.0.0.1",
         "port":6379,
         "message":"Redis connection to localhost:6379 failed - connect ECONNREFUSED 127.0.0.1:6379",
         "stack":"Error: connect ECONNREFUSED 127.0.0.1:6379 at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1555:16)"
      },
      "lastNodeExecuted":"Redis",
      "mode":"webhook"
   },
   "workflow":{
      ...
   }
}

So you can use {{ $json.execution.error }} to access the error info in an IF node or wherever else you need it.
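
For example, building on the object above, you could branch on expressions like these (paths taken straight from that example output):

   {{ $json.execution.error.code }}          → “ECONNREFUSED” in this case
   {{ $json.execution.error.message }}       → the full error message
   {{ $json.execution.lastNodeExecuted }}    → “Redis”, the node that failed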

Or you can just handle the error in the error workflow before calling back into the main workflow.

Hope that helps!


Thank you for reporting the bug, and thanks for the workaround. I was going to try the Error Trigger approach but wanted to be sure the steps for catching node errors were right! :grinning:

It may be worth noting for future issues that creating duplicate issues here and on GitHub is more work than just keeping it to one place. The internal dev ticket for this is NODE-1177; we will pop a note on here once it has been implemented.


Got it, @Jon! I’ll know for next time.


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.