Problems uploading CSV files via the HTTP Method node (corruption/incorrect formatting)


I was wondering if anybody had any experience sending CSV files as binary with the HTTP Method node. When I upload them to a SharePoint site, the file is corrupted/incorrectly formatted whatever MIME type I use.

The workflow downloads the file from S3, writes it to binary to rename it, and then reads it back in to send to SharePoint. When I check the file written to disk it's fine, so it must be the upload that is causing the issue.

HTTP Request node code:

```json
"nodes": [
  {
    "parameters": {
      "requestMethod": "PUT",
      "url": "={{$node[\"Set2\"].json[\"SharepointSiteId\"]}}/drive/items/{{$node[\"Set2\"].json[\"SharepointDocumentOutputsId\"]}}:/{{$binary.datanewfilename.fileName}}:/content",
      "jsonParameters": true,
      "options": {
        "bodyContentType": "multipart-form-data",
        "bodyContentCustomMimeType": "application/octet-stream"
      },
      "sendBinaryData": true,
      "binaryPropertyName": "datanewfilename",
      "headerParametersJson": "={\n\"Content-Type\": \"application/octet-stream\",\n\"Authorization\": \"{{$node[\"HTTP Request\"].json[\"token_type\"]}} {{$node[\"HTTP Request\"].json[\"access_token\"]}}\"\n}"
    },
    "name": "HTTP Request3",
    "type": "n8n-nodes-base.httpRequest",
    "typeVersion": 1,
    "position": [],
    "continueOnFail": true
  }
],
"connections": {}
```

When I upload PDF files they work perfectly, so I thought it was just CSVs, but I have tried XLS files and they do the same.

Any advice would be appreciated.

Our set-up is N8N (1 node) running in Docker on a t4g.small instance with the default container resource limits.

Database is Postgres running in Docker


That is quite weird that it works with PDFs and not with CSVs. Can you try not setting the content type? When it's not set, the node can guess the content type instead of defaulting to application/octet-stream.
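The guessing mentioned above can be illustrated with Python's standard-library `mimetypes` module — a sketch of content-type detection from a filename, not n8n's actual implementation:

```python
import mimetypes

# Guess a Content-Type from the file extension, similar in spirit to what an
# HTTP client can do when no explicit content type is configured.
# (This is Python's stdlib mimetypes module, not n8n's own logic.)
for name in ("report.csv", "report.pdf", "report.xls"):
    mime, _encoding = mimetypes.guess_type(name)
    print(name, "->", mime)
```

For `.csv` this yields `text/csv` and for `.pdf` it yields `application/pdf`, which is why leaving the content type unset can produce a more appropriate value than a hard-coded `application/octet-stream`.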

Hi @RicardoE105

Thanks for the reply. I have made the change you suggested, but the file is still the same when it's sent to SharePoint.

Do you think having multipart/form-data could be causing it?


Hey @messi198310,

One question I have on this one: when you say it is corrupted, what do you mean? I can see in your screenshot it is showing 1.93E+23, but the value of the cell that is cut off at the top looks to be 1.92xxxxxxx.

If this is the issue, I don't think it is actually any kind of corruption; it is more than likely just how Excel is displaying the data. If you were to download that file from SharePoint and open it in Notepad (other text editors are available), does it show the correct value?
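The display effect described here can be reproduced in a few lines of Python. A float keeps only about 15–16 significant digits, so a long numeric ID rendered in scientific notation silently loses its trailing digits even though the CSV text itself is intact (the 24-digit ID below is made up for illustration):

```python
# Why a long numeric ID can look "corrupted" in Excel: converting the text to
# a float (roughly what Excel does on open) keeps only ~15-16 significant
# digits, and the scientific-notation display hides the rest.
raw = "192837465019283746501234"   # value as plain text in the CSV (made up)
as_float = float(raw)              # lossy numeric conversion
print(f"{as_float:.2E}")           # scientific display, e.g. 1.93E+23
print(int(as_float) == int(raw))   # False: trailing digits were lost
```

Opening the file in a plain text editor skips the numeric conversion entirely, which is why that is the right way to check whether the stored bytes are actually damaged.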

Thanks for the message @jon

It appears that a header and footer are being inserted into the file as part of the upload process. I have opened a CSV file in TextEdit and it's in the data (see below). I'm not sure if there is anything I can pass in the request that would stop this for CSVs, and I'm also not sure why PDFs are unaffected.
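A header and footer wrapped around the file contents is exactly what a multipart/form-data body looks like when the server stores it verbatim instead of parsing it. A minimal sketch of such a body (the boundary string and field name here are illustrative, not what n8n actually emits):

```python
# Build a multipart/form-data body by hand to show the boundary "header" and
# "footer" that wrap the file contents. If the receiving endpoint expects a
# raw body and saves the request verbatim, these lines end up inside the file.
csv_data = b"id,value\n1,42\n"
boundary = "----example-boundary"  # illustrative boundary string

body = (
    f"--{boundary}\r\n"
    'Content-Disposition: form-data; name="file"; filename="data.csv"\r\n'
    "Content-Type: text/csv\r\n"
    "\r\n"
).encode() + csv_data + f"\r\n--{boundary}--\r\n".encode()

print(body.decode())
```

The CSV rows sit between the `--boundary` header block and the closing `--boundary--` line, matching the symptom described above.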

Hey @messi198310,

That is odd; I had assumed that they were just headers from whatever was generating the CSV.

I had a quick look online and I can see others have had the same issue; it looked to be around the content type, but I can't find any solid answer that you have probably not already tried. I don't have a SharePoint site set up for testing (I will get one set up this week), but out of interest, do you get the same issue if you upload the same CSV file to OneDrive?

Hi @jon

I have just tried the same thing with Postman and it uploads without the header and footer and works a treat.

I passed the CSV in the body as binary but didn’t specify a MIME type.

The following headers are returned.

I tried setting the Content-type to be the same in N8N but it still puts text/csv in the file inside the header.

Very odd.

Hey @messi198310,

I think that is where we try to automatically guess the content type although I would expect Sharepoint to still accept the request.

What happens if you set Postman to use text/csv as the content type?

Hi @jon

It appears to get ignored because the file is being sent as binary and Postman automatically chooses the correct one.

Sent a CSV with “Application/pdf” and PDF with “text/csv” and they both worked even though the types are incorrect.

Can you share the docs about the endpoint you are using?

Hey @messi198310,

If you send the same file to Pipedream does it give you the same issue?

Hi @RicardoE105

Please see below

Yeah, the data does not have to be sent as multipart/form-data; you can send the raw binary data in the body instead. Can you try what I did in the example below?
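For reference, the raw-binary PUT suggested here (and what Postman was doing earlier in the thread) can be sketched with Python's standard library. The URL segments and token are placeholders, and the request is deliberately not sent:

```python
import urllib.request

# Sketch of a raw-binary upload: the file bytes go directly in the PUT body,
# with no multipart/form-data wrapper. SITE_ID, PARENT_ID, and ACCESS_TOKEN
# are placeholders, not real values.
csv_data = b"id,value\n1,42\n"
url = (
    "https://graph.microsoft.com/v1.0/sites/SITE_ID"
    "/drive/items/PARENT_ID:/data.csv:/content"
)

req = urllib.request.Request(
    url,
    data=csv_data,  # raw bytes in the body, no multipart boundaries
    method="PUT",
    headers={
        "Authorization": "Bearer ACCESS_TOKEN",  # placeholder token
        "Content-Type": "text/csv",
    },
)
# urllib.request.urlopen(req)  # not executed here: needs a real token
```

Because the body is the file itself, nothing extra gets written into the stored document, regardless of whether the Content-Type header is exactly right.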

Thanks @RicardoE105, and sorry for the late response. When I change it to raw/custom I get the following:

I have tried turning off JSON/raw parameters, and this then just adds "datanewfilename" into the content of the file.

Still confused why PDFs work without any issue but these blasted CSVs are having so many problems.

Thanks again.

OK, I just noticed that the endpoint the OneDrive node uses to upload a file is the same endpoint you are trying to use, meaning you should be able to upload the file with the OneDrive node, specifically the file:update operation.

@RicardoE105 Sorry for the late response, but I want to close this off and prevent anybody else having the same issue.

We resolved the issue by changing the authentication from setting the Authorization header manually inside the request to using the Header Auth credential type. This meant the request was no longer multipart/form-data, and the file is uploaded as expected.

Thanks for your continued support.
