Hi all! New user of n8n here.
Today I was setting up my first workflow, which included downloading an image from Nextcloud and sending it to another service via HTTP (POSTing binary data).
I kept getting the following error from the UI when I was running the workflow:
Workflow execution process did crash for an unknown reason!
After trying multiple things, I found that memory usage was spiking and was only released after the workflow failed, so I tried allocating more memory to the container (from 512 MB to 2048 MB).
After increasing the memory, the workflow finally finished.
Now I am intrigued by this memory spike: my file is a 5 MB image, but n8n goes up to 1.1 GB.
Any ideas why?
What is the error message (if any)?
Workflow execution process did crash for an unknown reason!
Please share the workflow
Share the output returned by the last node
Information on your n8n setup
n8n version: 0.158.0
Database you’re using (default: SQLite): SQLite
Running n8n with the execution process [own(default), main]:
Running n8n via [Docker, npm, n8n.cloud, desktop app]: Docker
Hi @jcalonso, first of all, welcome to the community
By default, n8n keeps all binary data in memory, which I am afraid can lead to the behavior you have seen (there are a couple of threads on the forum about this as well).
We have recently released another approach to handling binary files which uses your file system instead. This is currently being tested, but you can already enable it by setting the following environment variable on n8n version 0.157.1 or later:
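For anyone finding this thread later: as the reply below mentions, the variable in question is N8N_DEFAULT_BINARY_DATA_MODE, and setting it to `filesystem` makes n8n write binary data to disk instead of holding it in memory. With docker-compose this could look roughly like the following (a sketch only; the image tag and the rest of the service definition are placeholders to adjust to your own setup):

```yaml
# Sketch: switch binary data handling from in-memory (the default)
# to the filesystem (available since n8n 0.157.1).
services:
  n8n:
    image: n8nio/n8n:0.158.0
    environment:
      - N8N_DEFAULT_BINARY_DATA_MODE=filesystem
```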
Hi @MutedJam, thank you for your answer. I followed your recommendation and enabled the N8N_DEFAULT_BINARY_DATA_MODE env var, but didn’t see any difference (running n8n v0.158.0).
Actually, more than the memory spike, my concern is the time it takes to load the image into memory before calling my HTTP service: around 40 seconds to 1 minute for the same 5 MB image.
I actually ran into a similar problem after sharing my suggestion here. I couldn’t sort this out myself, so I have asked @kik00 for help on this one and will, of course, report back as soon as I have any feedback.
Hi @jcalonso, we tested a bit further in the meantime, and memory consumption during uploads seems considerably lower when setting the N8N_USE_DEPRECATED_REQUEST_LIB environment variable to true, suggesting this might be a problem coming down to our migration to axios.
While we look into this, you might also want to try setting N8N_USE_DEPRECATED_REQUEST_LIB=true on your end.
Tbh, I don’t know enough about axios to judge whether this is a problem with axios in general or specifically with our implementation. I have documented this in our internal issue tracker and assigned it to our resident axios expert @krynble (sorry) to take a closer look.
Apologies @jan, I didn’t see your reply before. Did you mean EXECUTIONS_PROCESS=main?
I tried it and noticed an increase in the overall workflow speed, but the memory spike was just as high.
Then I tried adding @MutedJam’s suggestion N8N_USE_DEPRECATED_REQUEST_LIB=true on top of that. I think this combination gives the best results; memory usage was also a bit lower than in the previous test with only N8N_USE_DEPRECATED_REQUEST_LIB=true, spiking at just 245 MB:
For now I will leave it like this, understanding the pros and cons of each execution process type.
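For reference, the combination that gave the best result above could be expressed in docker-compose roughly like this (a sketch only; the image tag and the rest of the service definition are placeholders):

```yaml
# Sketch: run executions in the main process and fall back to the
# deprecated request library, as tested above.
services:
  n8n:
    image: n8nio/n8n:0.158.0
    environment:
      - EXECUTIONS_PROCESS=main
      - N8N_USE_DEPRECATED_REQUEST_LIB=true
```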
Hi @jcalonso, just wanted to leave a quick note to let you know that @bearbobs managed to trace the issue back to the debug logging introduced with axios.
A temporary fix has been released with n8n version 0.159.1:
So you should be able to switch back to axios again after upgrading.
Hi @MutedJam I’m sorry to bring this topic back, but I’m having the exact same problem with the docker-compose version.
What I don’t understand is that I’m using version 0.198.2.
I’m not able to make it work even after setting these env variables: