n8n Docker container crashes with high CPU during bulk MongoDB inserts

Describe the problem/error/question

What is the error message (if any)?

A workflow was built to read 10,000 records from a JSON file and insert them into MongoDB in batches of 100 via a loop. During execution, the CPU usage of the n8n Docker container kept climbing after roughly 7,000 records had been inserted, eventually exceeding 300%, and the container crashed.

Please share your workflow

[workflow screenshot]

[workflow screenshot]

Batch Size is 100

JSON input example:

[
  {
    "id": "f2ecc731-5d31-4c4e-82aa-ddc88efb5254",
    "name": "Christopher Reynolds",
    "age": 68,
    "gender": "female",
    "email": "[email protected]",
    "phone": "(963)510-1110"
  },
  {
    "id": "bc27fe0f-2ea3-4b8d-966a-08c2540a3e68",
    "name": "Edwin Williams",
    "age": 49,
    "gender": "female",
    "email": "[email protected]",
    "phone": "500.561.6528x122"
  },
  {
    "id": "f4d7c257-b232-4fd1-bcb9-347672a68497",
    "name": "Nicholas Randall",
    "age": 89,
    "gender": "male",
    "email": "[email protected]",
    "phone": "001-444-801-0194x030"
  }
]

Share the output returned by the last node

Information on your n8n setup

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Try the following:

Allow running external modules in the Code node with this env var:

  - NODE_FUNCTION_ALLOW_EXTERNAL=*

Then test the following flow:

In the code you will need to set some values:

The connection string/uri:
const uri = 'mongodb://<your_user>:<your_pass>@mongo:27017';

The collection:
const col = client.db('<your_db>').collection('<your_collection>');
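Putting those pieces together, a Code node along these lines performs the bulk upsert. This is a sketch, not the exact code from the flow: it assumes each record carries an `id` field to match on, and the helper name `buildBulkOps` is mine.

```javascript
// Sketch of an n8n Code node that bulk-upserts incoming items into MongoDB.
// Assumes NODE_FUNCTION_ALLOW_EXTERNAL is set so require('mongodb') works.

// Pure helper: turn plain records into bulkWrite operations (one upsert each).
function buildBulkOps(records) {
  return records.map((doc) => ({
    updateOne: {
      filter: { id: doc.id }, // assumes records have their own "id" field
      update: { $set: doc },
      upsert: true,
    },
  }));
}

// The Code node body (needs a live MongoDB; shown for illustration only).
async function run(items) {
  const { MongoClient } = require('mongodb'); // external module
  const uri = $env.MONGODB_URI; // e.g. mongodb://<your_user>:<your_pass>@mongo:27017
  const client = new MongoClient(uri);
  try {
    await client.connect();
    const col = client.db('<your_db>').collection('<your_collection>');
    // One bulkWrite call instead of 10k individual inserts/upserts.
    const result = await col.bulkWrite(
      buildBulkOps(items.map((i) => i.json)),
      { ordered: false }
    );
    return [{ json: { upserted: result.upsertedCount, modified: result.modifiedCount } }];
  } finally {
    await client.close();
  }
}
```

The key point is that all 10,000 operations travel to the server in a handful of wire messages, instead of one round trip per record.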

I used this file to test the flow.

10,000 records, 2 MB.

This is how long it takes to upsert the records (23s, which is 434 records/s):

This is not as pretty, but it does use bulk writing, instead of writing records one by one.

Let me know if this works for you.

Why does my workflow experience such high CPU usage? Is the MongoDB node not recommended for this scenario? When I insert the 10,000 items directly with the MongoDB node, the CPU also surges and the container hangs. What is the specific reason?

The current approach requires putting the password in plain text in the code. Is there a better way?

If I had to guess, for the reason:

  • the MongoDB node executed one find+update (upsert) per incoming item. That's 10k individual TCP request/response round trips plus JSON (de)serialization for each.
  • 10k executions also mean 10k node outputs, one for each upsert.
  • any additional node likely clones the data, so the looping and waiting added even more overhead.

For the password in plain text: you can pass these values to the Docker container via env variables and pick them up in the code with something like

const uri = $env.MONGODB_URI;

allow running external modules in code node with env var:
Where is this variable set?

Accordingly, I executed the command "export NODE_FUNCTION_ALLOW_EXTERNAL=*" inside the container and ran the workflow, but it still reports the error: Cannot find module 'mongodb' [line 1]

Another question: why do we need to pull in the mongodb module when n8n already has a MongoDB node?

It was supposed to go into your docker-compose file or docker run command, not the container CLI. Setting it the way you did won't accomplish anything, because the n8n process was already started without that variable.
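For example, in a docker-compose file it would look roughly like this (a sketch; the service name, image tag, and credentials are placeholders to adapt to your setup):

```yaml
# docker-compose.yml fragment (hypothetical service definition)
services:
  n8n:
    image: n8nio/n8n
    environment:
      - NODE_FUNCTION_ALLOW_EXTERNAL=*
      - MONGODB_URI=mongodb://<your_user>:<your_pass>@mongo:27017
```

Or equivalently with `docker run`, pass each variable with `-e`. Either way, the variable is present when the n8n process starts, which is what the Code node checks.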

Because the native MongoDB node doesn't support bulk operations; I thought I explained this earlier.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.