Hi! I’m having trouble reading very large JSON files, and I would like to know if it’s possible to read them using streams. Alternatively, is there an ideal technique for reading large files efficiently?
Information on your n8n setup
n8n version: 1.50.1
Database (default: SQLite): PostgreSQL
n8n EXECUTIONS_PROCESS setting (default: own, main): own
Running n8n via (Docker, npm, n8n cloud, desktop app): Kubernetes
You can also try setting the environment variable N8N_DEFAULT_BINARY_DATA_MODE=filesystem, which stores binary data on disk rather than in memory and the database. See our docs for more details.
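Since you're running n8n on Kubernetes, one place to set this is the container's env block. Here is a minimal sketch, assuming a standard Deployment for n8n; the metadata/container names and the image tag are placeholders you'd adjust to your own manifest:

```
# Hypothetical fragment of an n8n Deployment - names and image tag are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n
spec:
  template:
    spec:
      containers:
        - name: n8n
          image: n8nio/n8n:1.50.1
          env:
            # Write binary data to disk instead of keeping it in memory / the database
            - name: N8N_DEFAULT_BINARY_DATA_MODE
              value: "filesystem"
```

With filesystem mode the binary files land in n8n's data directory, so on Kubernetes you'll usually want that directory backed by a persistent volume so the data survives pod restarts.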
If you’re stuck, you can share your workflow in the post so the community can better understand your use case.
Tip for sharing your workflow in the forum
Pasting your n8n workflow
Copy your n8n workflow and paste it into the code block below, i.e. between the pairs of triple backticks. You can also click </> (preformatted text) in the editor and paste your workflow there.
```
<your workflow>
```
Make sure you’ve removed any sensitive information from your workflow, and include dummy data or pinned data wherever you can!
Thank you so much, aya!
I was already breaking it down into sub-workflows, but the issue was with reading the file itself.
Your tip to use N8N_DEFAULT_BINARY_DATA_MODE=filesystem was incredibly helpful!