Execution history entry shows an error status when a node receives a large amount of data

I have a couple of workflows with HTTP nodes that fetch flat files around 300-400 KB in size. When opened, these nodes always show a warning that the data is too large to be displayed. But the workflows themselves execute successfully, and all further nodes in the flow are able to process the data correctly.


However, in the execution history, this workflow always shows up with an error status. On further investigation, I realized that if I remove the node that fetches this large flat file, the error status goes away. I understand this may be intentional, to keep the execution history from blowing up in size by storing every large file that is fetched, but it would be much better if the status accurately reflected whether the workflow succeeded. Marking the workflow with an error status seems wrong, since the workflow did execute correctly, despite one of the nodes fetching a large amount of data.

Is that something that can be fixed? I don’t expect the execution history to store the data that was fetched (although it would be nice to leave that choice to the user). Just being able to see the accurate outcome of the executed workflow would be nice.

Hello @ajayjohn

Thank you for reporting this issue. That is really not the expected behavior. When an execution succeeds, we want it to display as a success.

Can I ask you for some more information, please? If you open the execution, does it show that all nodes have run correctly? Do they display data?

Also, is it an issue with recent versions, or is it something you have been experiencing for a while?

This information can help us understand what the problem might be. In any case, I have already started investigating this issue.

Thanks!

Of course!

When I open the execution, none of the nodes have the green execution badge on them. None of them show any data either. However, in reality, all of them have executed fine and I can see all the changes accurately reflected in the third-party systems I am dealing with.

As for your second question, I created this workflow only about a week ago, so I am not sure if this is an issue introduced by one of the recent releases. All I can confirm is that all my remaining workflows continue to execute successfully and show up fine in the execution history. I even tried exporting this particular workflow and creating it again, assuming something was messed up in the workflow metadata, but that didn’t fix the issue either.

Hi @ajayjohn

Thank you for your feedback. I was still unable to reproduce this issue; all my tests, even with large volumes of data, always saved correctly.

Can you provide me with a reproducible example? If your workflow does not deal with private data, feel free to share the one you mentioned.

You can select all nodes and press CTRL + C to copy the workflow and paste it here.

You are right; I noticed the issue on some of my other nodes too, so it probably has nothing to do with the large volume fetched by the HTTP node. I apologize for sending you down a rabbit hole :frowning:

Let me try to share a simpler version of my flow where I face the same issues.

Hi @ajayjohn

I am investigating some similar issues and found a problem when using MySQL as the database.

Is this the case for you? Did you change any of n8n’s default settings?

This can help us track down the problem =)

Thanks!

Oh, that’s a good lead!
I am using MariaDB as the database. However, I haven’t changed any of n8n’s default settings.
Is there anything I can fix on the database side to resolve this issue?

The issue has already been fixed by @krynble. The fix will be released with the next version, today or tomorrow. As soon as you start the new version, it will make the required database change.
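For anyone wondering what that database change involves: the symptoms in this thread are consistent with MySQL/MariaDB truncating the serialized execution data once it exceeds the roughly 64 KB limit of a TEXT column, leaving the saved execution unreadable so it shows up as an error. That is my reading of the thread rather than a confirmed description of the fix. Below is a minimal, hypothetical TypeORM migration sketching that kind of change; the table and column names are assumptions for illustration, not n8n’s actual schema.

```typescript
// Hypothetical example only: a TypeORM migration that widens the column holding
// serialized execution data so MySQL/MariaDB stop truncating payloads larger
// than the ~64 KB TEXT limit. Table and column names are assumed, not n8n's
// actual schema.
import { MigrationInterface, QueryRunner } from "typeorm";

export class WidenExecutionDataColumn1620000000000 implements MigrationInterface {
  async up(queryRunner: QueryRunner): Promise<void> {
    // MEDIUMTEXT raises the size limit from ~64 KB to ~16 MB.
    await queryRunner.query(
      "ALTER TABLE execution_entity MODIFY COLUMN data MEDIUMTEXT NOT NULL",
    );
  }

  async down(queryRunner: QueryRunner): Promise<void> {
    // Revert to the original, smaller column type.
    await queryRunner.query(
      "ALTER TABLE execution_entity MODIFY COLUMN data TEXT NOT NULL",
    );
  }
}
```

n8n applies pending database migrations when it starts, which matches the note above that simply starting the new version makes the required change for you.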

Got released with [email protected]
