Hi there!
We often use n8n to generate CSV files, and thus a final node for us is the “Convert to File” node, with the CSV option selected.
Problem: the generated CSV files are always encoded as UTF-8 with a BOM header, and there is no way to control the file encoding or the presence/absence of the BOM bytes when UTF-8 is selected.
Current workaround: We re-convert the file to text using the Extract From Text File node (which has a dedicated option to strip the BOM), then store the file directly.
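For anyone who wants to do this step in a Code node instead of the Extract From Text File detour, the core logic is tiny. Below is a plain JavaScript sketch of a hypothetical `stripUtf8Bom` helper (not an n8n built-in); the n8n plumbing for reading and writing the binary property is omitted and would depend on your workflow.

```javascript
// Hypothetical helper: remove the UTF-8 BOM (bytes EF BB BF) from the
// start of a Buffer if present, leaving other buffers untouched.
function stripUtf8Bom(buffer) {
  const hasBom =
    buffer.length >= 3 &&
    buffer[0] === 0xef &&
    buffer[1] === 0xbb &&
    buffer[2] === 0xbf;
  return hasBom ? buffer.subarray(3) : buffer;
}

// Example: a CSV payload prefixed with a BOM, as Convert to File emits it.
const csvWithBom = Buffer.concat([
  Buffer.from([0xef, 0xbb, 0xbf]),
  Buffer.from('a,b\n1,2\n', 'utf8'),
]);
console.log(stripUtf8Bom(csvWithBom).toString('utf8')); // "a,b\n1,2\n"
```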
Suggested solution: Add an Encoding parameter, as well as a Strip BOM parameter to the Convert to File node, as the Extract From Text File node already does.
Thanks a lot for your attention and consideration of our request!
I created an account just to upvote this 
Super annoying!
I’m facing this right now as well. An advanced option to change the default encoding would be amazing.
Will try out your workaround for now @Paul_Mougel . Thanks!
I only see this happening when I use a different delimiter such as ';'. With the default comma delimiter, I have no issues.
I came here for exactly the same reason. 
But I am unable to implement the workaround. I have a decent amount of programming knowledge, but something as simple as manipulating a text file seems to be beyond my grasp.
I ended up cheating and having an AI agent do it for me (Claude).
It was simple, and now I have Claude ready to go for other workflows.
No shame in that. I finally managed to get it working. I made a mistake in one of the nodes further down the stream where I accidentally mapped some of the fields into the output name. Hard to spot when you are a complete n00b like I am. Anyway, the solution/workaround outlined in this post does indeed do the trick.