n8n Docker container crashing

Hi there, I’ve got a workflow for parsing email messages. For this I installed the extra package mailparser in my Docker container on macOS Monterey with an M1 chip. Everything works fine when I’m parsing messages with relatively small attachments. But as soon as I parse a test message with a 3 MB PDF file, the n8n Docker container stops approx. 10-15 seconds later with EXITED(UNDEFINED). Any idea what might be wrong and how to fix this? Thanks in advance.

Screenshot 2022-02-22 at 22.16.32

… some additional info … the log keeps spitting out gibberish …

That sounds like n8n is running out of memory. I would try to do the same again and keep an eye on the memory usage.

I changed Docker’s memory from 2 to 4 GB, created a new n8n container and added nodemailer. I started testing with some small files, but as soon as I sent my test with the ~3 MB PDF file, gibberish appeared on my screen again. This looks like the buffer data which I get from parsing a MIME message via simpleParser.

A few seconds later, Docker Desktop became unresponsive, shortly after that my whole macOS started to slow down, and then I got the following message:

Screenshot 2022-02-23 at 07.24.02

I did not get the chance to force quit Docker, because my system shut down …

Screenshot 2022-02-23 at 07.27.12

I’m not an expert in this, but it looks like n8n is pushing the data that is passed through the workflow from node to node into the container’s console …

Anyhow, this is the first time that I got my Mac mini M1 development machine to crash …

And some last-minute info: after I start up my n8n container again, the gibberish reappears. Looks like it’s working through some backlog … I’m going to trash this container and create a new one …

Hi @dickhoning, I wonder if having these massive logs open in the Docker Desktop UI might contribute to the overall problem.

Could you avoid opening it and run docker stats in your terminal? This should give you a good view of the memory consumption for your docker containers without processing potentially huge logs:
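Something like this in a terminal on the host should be enough (the container name is just an example, use whatever yours is called):

docker stats          # live CPU/memory view for all running containers
docker stats n8n      # or limit the output to a single container by name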

On the problem itself: the gibberish looks like the contents of a file buffer. From our DM I understand that the binary data you’re processing is actually coming through inside the JSON body of a webhook. While we have recently introduced a new approach to processing binary data that avoids keeping it in memory, this doesn’t apply to JSON data.

This means that your data is passed on from node to node and constantly kept in memory in the process (meaning more nodes = more memory being eaten up). There are other factors too (e.g. executing a workflow manually will drive up the memory consumption as data is kept available for the UI).

So to reduce memory consumption in this rather unusual scenario you might want to look at whether you can avoid processing the binary data as JSON data. When using the Webhook node, you can enable the Binary Data toggle:

Now when sending through a file as part of a multipart form request, it should appear in n8n as binary data rather than JSON data. Combined with the approach linked above (setting N8N_DEFAULT_BINARY_DATA_MODE=filesystem) this should significantly reduce the memory consumption.
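If you start n8n via docker run, a rough sketch of how that could look (image, names, port and the webhook path are just examples, adjust them to your setup):

# start n8n with binary data written to disk instead of kept in memory
docker run -d --name n8n -p 5678:5678 \
  -v ~/.n8n:/home/node/.n8n \
  -e N8N_DEFAULT_BINARY_DATA_MODE=filesystem \
  n8nio/n8n

# send the file as multipart form data so the Webhook node can treat it as binary
curl -F "file=@/path/to/message.eml" http://localhost:5678/webhook/your-webhook-path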

You can still read the file in your custom code as needed, but keep it out of the JSON data. Here’s an example workflow showing the basic idea:

Example Workflow

Hope this provided some pointers as to where to go next.

Thanks! I’m going to try this out later today. One thing that I noticed when testing with a small dataset …

Editor is now accessible via:

http://localhost:5678/

Failed saving execution data to DB on execution ID 7563

Failed saving execution data to DB on execution ID 7564

Failed saving execution data to DB on execution ID 7565

Failed saving execution data to DB on execution ID 7566

ERROR RESPONSE

Error: There was a problem executing the workflow.

    at Object.executeWebhook (/usr/local/lib/node_modules/n8n/dist/src/WebhookHelpers.js:367:30)

(node:9) UnhandledPromiseRejectionWarning: ResponseError: SQLITE_CORRUPT: database disk image is malformed

    at Object.executeWebhook (/usr/local/lib/node_modules/n8n/dist/src/WebhookHelpers.js:369:15)

(Use `node --trace-warnings ...` to show where the warning was created)

(node:9) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)

(node:9) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

ERROR RESPONSE

Error: There was a problem executing the workflow.

    at Object.executeWebhook (/usr/local/lib/node_modules/n8n/dist/src/WebhookHelpers.js:367:30)

(node:9) UnhandledPromiseRejectionWarning: ResponseError: SQLITE_CORRUPT: database disk image is malformed

    at Object.executeWebhook (/usr/local/lib/node_modules/n8n/dist/src/WebhookHelpers.js:369:15)

(node:9) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 2)

ERROR RESPONSE

Error: There was a problem executing the workflow.

Hey @dickhoning, this looks like your n8n database is corrupted, possibly because it was interrupted during a previous write action.

You could try to repair it, but I wouldn’t have any pointers for that unfortunately (though Google does return some indicators). If you’re simply testing, it’s much easier to just delete your database file and have n8n create a new one.

Hi @MutedJam, how do I delete the n8n database in a Docker container? Or can I just delete the container and create a new one? Maybe that’s easier, then.

Hi @dickhoning, this depends on whether you have used the --volume or -v option (when using docker run) or specified a volume under the volumes: section of your docker-compose file when using docker compose. If so, you’d need to delete the respective volume.

If you haven’t specified a volume, no data would be persisted and deleting your container would indeed delete all its data (and you can then simply re-create it).
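If a named volume is involved, roughly the following would get rid of it (container and volume names are placeholders):

docker volume ls                    # find the volume your n8n container is using
docker stop n8n && docker rm n8n    # stop and remove the container first
docker volume rm n8n_data           # then remove the volume itself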

Hi @MutedJam, looks like I’ve assigned a volume (~/.n8n:/home/node/.n8n). One question before I go ahead and delete the database.sqlite: are my workflows and credentials stored in there? And if so, is it possible to first export these settings?

It looks like you have made your default n8n database the database used by your docker container. So definitely worth keeping that and not deleting it, good catch.

A very simple way for a fresh start would be to get rid of your Docker container (this would leave the database in place), then re-create your container with another path (e.g. instead of ~/.n8n before the :, simply use a path to a different folder where a database for experimenting would then be created). This way you can mess around with the Docker instance of n8n without affecting your main data.

Or for very short tests/experiments just don’t define a volume at all. Data would then only be stored while the container exists.
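For example, roughly (names, port and folder are placeholders; you’d want to stop or re-map your existing container first to avoid a clash on port 5678):

# fresh container with a separate folder for experiments, leaving ~/.n8n untouched
docker run -d --name n8n-test -p 5678:5678 \
  -v ~/.n8n-test:/home/node/.n8n \
  n8nio/n8n

# or, for throwaway tests, no volume at all - data only lives as long as the container
docker run -d --name n8n-throwaway -p 5678:5678 n8nio/n8n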

Okay, and uh … how do I easily export all the workflows and credentials from the current database?

This could be done using our CLI, for example by running n8n export:workflow --all --output=/home/tom/foo.json to export all your workflows into /home/tom/foo.json.

You could also run for example docker exec -it n8n-n8n-1 n8n export:workflow --all to execute the n8n export command inside your docker container (where n8n-n8n-1 is the name of your container, you might need to adjust it if you’re using a different name).

and how about credentials?

The commands are explained in more detail on the CLI page linked in my last post. But the tl;dr for exporting readable credentials is n8n export:credentials --all --decrypted.
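Putting the two together, something along these lines should drop both export files into the mounted ~/.n8n folder and therefore onto your host (container name and file names are just examples):

# export workflows and credentials into the mounted folder so they end up on the host
docker exec -it n8n-n8n-1 n8n export:workflow --all --output=/home/node/.n8n/workflows-backup.json
docker exec -it n8n-n8n-1 n8n export:credentials --all --decrypted --output=/home/node/.n8n/credentials-backup.json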

Hi @MutedJam, thanks again for taking the time to help me. Really appreciated! And although my ‘Email Parsing’ project is taking way too much time, and while Murphy’s first law seems to apply to every little aspect, I’m actually learning a lot about n8n’s landscape, the ins and outs and dos and don’ts.

I’m first going to clean up the mess I made on my development machine, and then I’m going to have another look at it. However, it’s not going to be easy to change this workflow. I’m not feeding in an object, but the raw text of a mail/MIME message. Then simpleParser converts this to JSON and all the attachments are actually buffers. The next step is deciding whether there are any attachments, and then the flow is split into branches that process messages with and without attachments. Only then are the buffers converted to base64. I’m going to see if I can change the logic to process the attachments first, so that they do not get passed on from node to node. But I’m not sure whether this is going to be a viable solution.

So I might need to revert to my initial plan, which was to somehow (ab)use the IMAP node and convert it into something that would accept a MIME message via a webhook.

And perhaps somebody else has a brilliant idea …

I managed to repair the corrupt database. First I created a backup via the CLI …

sqlite3 database.sqlite ".backup database.back"

Then I replaced the corrupted database with the backup. I’m not sure whether to trust this ‘repaired’ database, but I guess it’s OK for further testing purposes.
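For anyone wanting to do the same, the steps look roughly like this (run them while the n8n container is stopped; the integrity check at the end is just a sanity check I added):

cd ~/.n8n
sqlite3 database.sqlite ".backup database.back"    # dump whatever sqlite can still read
mv database.sqlite database.sqlite.corrupt         # keep the broken file around, just in case
mv database.back database.sqlite                   # use the backup as the new database
sqlite3 database.sqlite "PRAGMA integrity_check;"  # should print "ok" if the copy is sound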

One thing I noticed is that if you’re using the Desktop App and also assign ~/.n8n:/home/node/.n8n to your Docker container, you’re essentially using the same database. Now I’m just wondering whether this is a good idea or not.

I also took a closer look at the sqlite3 database and noticed that you store each request in the data field of the execution_entity table. This means that when I’m sending a 4 MB file to my workflow, my database size increases by 4 MB. Do you happen to have a setting to exclude the body from being stored?

It also hit me that all my work(flows) and credentials are stored in this database, so I’d better start working out a proper backup schedule for this :sunglasses: … any suggestions and/or recommendations are of course very welcome!
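My first thought is something simple like a nightly cron job on the host (paths are placeholders for my setup, and the backup folder has to exist):

# crontab entry: copy the live database to a dated backup every night at 02:00
0 2 * * * sqlite3 /Users/me/.n8n/database.sqlite ".backup /Users/me/backups/n8n-$(date +\%F).sqlite"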


And here’s a link to a screen recording that shows you how I crash my n8n Docker container. I send a MIME message which contains a 3.3 MB PDF file. I get the structured JSON back in a little over 2 seconds. You can see the container’s CPU go up, then go down, and shortly after go up again. And then the container crashes …

link to the crash video
Screenshot 2022-02-23 at 22.06.03

I am afraid I don’t have any great ideas on how to improve the situation if the binary data has to live inside your JSON data :frowning:

The easiest approach I can think of here would be to store the emails in an inbox that is supported by our IMAP or Gmail nodes - this would save you the trouble of having to parse them manually.

I have tried a few times but can’t get to that video to take a look.