Read a big .txt file and save data to PostgreSQL

Hello everyone!

I’m new to n8n but already love it… can someone help me with some technique tips?

I read the n8n documentation and analyzed some examples, and I know this is possible in n8n using “Read Binary File”, “Spreadsheet File”, and some Function nodes…

Roadmap:

  • Open the .txt file
  • Read one row at a time
  • Grab the value at a specific position in the row
  • Save it to PostgreSQL

.txt structure example:

01: XXXname01YY
02: XXXname02YY
03: XXXname03YY
04: XXXname04YY

The value I need to grab is:

  • Position: 04
  • Size: 06

But my question is: what is the best way to do this when I have 5 million rows to read and slice? For example, the workflow must read line 01, grab the 06 characters starting at position 04 (result: name01), save it to PostgreSQL, and continue the execution…
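Just to make the slice concrete, on a shell it would be something like this (cut -c4-9 takes the 6 characters starting at position 04):

  echo "XXXname01YY" | cut -c4-9
  # prints: name01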

Thanks in advance!

Hi @erickamoedo, welcome to the community!

That’s a tough one, as n8n’s built-in methods typically process files as a whole. Reading 5 million rows at once using the respective node and then applying filtering might result in memory problems.

You could consider reading this file using logic outside of n8n, for example by using sed through the Execute Command node. So something like this:
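A minimal sketch (the file path is a placeholder):

  # -n suppresses automatic printing; '4p' prints only line 4
  sed -n '4p' /path/to/your/file.txt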

This would print only the fourth line of your file (to the stdout field), which you can then further process as needed (and use in the Postgres node, for example).
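To also slice out the value in the same step, you could pipe the line through cut (a sketch, assuming the fixed positions from your example):

  # line 1 of the sample file is XXXname01YY; -c4-9 keeps characters 4 through 9
  sed -n '1p' /path/to/your/file.txt | cut -c4-9
  # prints: name01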

This is assuming you’re using Linux, the n8n Docker image, or another system where sed is available.