Unable to read CSV file that contains multiple languages

Describe the problem/error/question

I have a CSV file that contains multi-line fields in multiple languages. n8n has a problem reading the first column: I get undefined when referencing that field's data. How do I fix this?

CSV: https://www.dropbox.com/scl/fi/iriz4wqog0zi9txn54djh/manga-info.csv?rlkey=iola4dugy1uwz6lbwc0e5ztzp&st=tx74kwi1&dl=0

What is the error message (if any)?

Please share your workflow

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 2.2.3
  • Database (default: SQLite): default
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): npm
  • Operating system: Ubuntu 24.04 LTS

Hi @Ruriko, welcome!

Where exactly is the problem?

I tested it on my end and it’s working, but here are a few clarifications:

  • The first row is treated as the header by default and is used to name the JSON properties.
  • Enable Include Empty Cells to see all columns (even the empty ones).

If you still want to access the first row as data, disable the header row:

Let me know if this isn’t the issue so I can understand better and help you more.


Hmm, I can see that there is indeed an issue in the Set node:

{{ $json.alt_title }} isn't working, but {{ $json['alt_title'] }} is working.

That's odd; it's probably due to hidden or special characters in the column name.

Edit: yes, it's because of a hidden character; the key is not actually alt_title.

For now, just use {{ $json['alt_title'] }}.

Another fix to make {{ $json.alt_title }} work is to enable the Exclude Byte Order Mark (BOM) option in the Extract from File node:
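To illustrate what's going on, here is a minimal sketch (with made-up sample data, not your actual file) of why a UTF-8 BOM at the start of the file breaks dot notation: a naive parser keeps the invisible BOM character attached to the first header name, so the key isn't really `alt_title`.

```javascript
// A CSV that starts with the UTF-8 BOM (bytes EF BB BF → '\uFEFF').
const raw = '\uFEFFalt_title,title\nNaruto,NARUTO\n';

// Split the header row into keys, as a simple CSV parser would.
const lines = raw.split('\n');
const keys = lines[0].split(',');
// keys[0] is '\uFEFFalt_title' — it only *looks* like 'alt_title'.

// Build the first data row as an object, like Extract from File does.
const row = Object.fromEntries(
  keys.map((k, i) => [k, lines[1].split(',')[i]])
);
// row.alt_title            → undefined (the real key has a hidden BOM)
// row['\uFEFFalt_title']   → 'Naruto'

// Stripping the BOM first (what "Exclude Byte Order Mark" does) fixes it.
const clean = raw.replace(/^\uFEFF/, '');
const cleanKeys = clean.split('\n')[0].split(',');
// cleanKeys[0] → 'alt_title'
```

With the BOM stripped before parsing, `{{ $json.alt_title }}` resolves normally.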


The CSV was created in n8n, so I tried resaving it using Google Docs, which did fix the Set node problem. Since saving a CSV file in n8n adds weird encoding, are there other methods to save a CSV file with the correct encoding?

Resaving the file in Google Docs likely rewrote it without the BOM, which is why that fixed it.

If you're using Convert to File to create the CSV, it should work as expected; it's also possible that the data from the source itself contains those special characters.

Either way, we know the issue and the fix now. As mentioned in the previous reply, you can simply enable Exclude Byte Order Mark (BOM) in the Extract from File node, or use the bracket-notation workaround in the Set node.
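If you'd rather clean the data in the workflow itself, a Code node can strip a leading BOM from every property name before the Set node runs. This is a hedged sketch with hypothetical field names, assuming items arrive in the standard n8n item shape (`{ json: { ... } }`):

```javascript
// Strip a leading BOM ('\uFEFF') from every key of every item.
const stripBom = (s) => s.replace(/^\uFEFF/, '');

// In an n8n Code node you would use the built-in `items` input;
// here we use a hypothetical sample item to keep the sketch runnable.
const items = [{ json: { '\uFEFFalt_title': 'Naruto', title: 'NARUTO' } }];

const cleaned = items.map((item) => ({
  json: Object.fromEntries(
    Object.entries(item.json).map(([key, value]) => [stripBom(key), value])
  ),
}));
// cleaned[0].json.alt_title → 'Naruto', and dot notation works again.
```

In a real workflow you would `return cleaned;` from the Code node, but enabling the built-in BOM option is the simpler fix.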

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.