Reading big files

Hi! I’m having trouble reading very large JSON files, and I would like to know if it’s possible to read them using streams. Alternatively, is there an ideal technique for reading large files efficiently?

Information on your n8n setup

  • n8n version: 1.50.1
  • Database (default: SQLite): PostgreSQL
  • n8n EXECUTIONS_PROCESS setting (default: own, main): own
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Kubernetes
  • Operating system: Linux

Hi @eduardomoya,

Welcome to the community! There are several ways you can go about this:

  • Set N8N_DEFAULT_BINARY_DATA_MODE=filesystem so binary data is written to disk instead of being held in memory. This usually makes a big difference when workflows handle large files.
  • Split the processing into sub-workflows and work through the file in smaller batches, so a single execution never has to hold everything at once.
  • If you need to parse the JSON content itself, you can stream-parse it from a Code node (see the sketch below this list).
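For the streaming option, here is a minimal sketch (not from the original thread): it assumes the file is readable from disk inside the container, that the stream-json package is installed in your n8n image, and that fs and stream-json are allowed via NODE_FUNCTION_ALLOW_BUILTIN / NODE_FUNCTION_ALLOW_EXTERNAL. The path and field names are placeholders, adjust them to your setup.

```js
// n8n Code node (Run Once for All Items), JavaScript.
// Streams a large top-level JSON array element by element instead of
// calling JSON.parse on the whole file at once.
const fs = require('fs');
const { parser } = require('stream-json');
const { streamArray } = require('stream-json/streamers/StreamArray');

const items = [];

await new Promise((resolve, reject) => {
  fs.createReadStream('/data/big-file.json') // placeholder path
    .pipe(parser())
    .pipe(streamArray()) // emits one { key, value } pair per array element
    .on('data', ({ value }) => {
      // value.id and value.name are placeholder fields. Pushing every
      // element still accumulates results in memory; for very large files,
      // filter or aggregate inside this callback instead of keeping everything.
      items.push({ json: { id: value.id, name: value.name } });
    })
    .on('end', resolve)
    .on('error', reject);
});

return items;
```

Combined with filesystem binary mode and batching, this keeps the peak memory footprint closer to a single element rather than the whole file.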

If you’re stuck, you can share your workflow in the post so the community can better understand your use case :sun_with_face:

Tip for sharing your workflow in the forum

Make sure to copy your n8n workflow and paste it inside a code block, i.e. between a pair of triple backticks. You can also click </> (preformatted text) in the editor and paste your workflow there.

```
<your workflow>
```

Make sure you’ve removed any sensitive information from your workflow, and include dummy data or pinned data wherever you can!


Hope that helps!

Thank you so much, aya!
I was already breaking it down into sub-workflows, but the issue was with reading the file itself.
Your tip to use N8N_DEFAULT_BINARY_DATA_MODE=filesystem was incredibly helpful! :slight_smile:
