Larger Binary File Causing Error when sending POST Request

Any binary over 16kb results in an error of:

Your request is invalid or could not be processed by the service
Property "file" is required.

Everything is the same between the two, except for the size of the image.

Hi @Casey_Tokarchuk

Files under 16 KB fit in a small in-memory buffer, so they survive HTTP redirects. Bigger files are sent as a stream, and a stream can only be read once.

If your Akeneo URL triggers any redirect (a missing trailing slash, a slight subdomain change), n8n drains the stream on the first attempt, and the redirected second request goes out completely empty. That is why your trace shows dataSize: 0 and throws that 401 token error.

Update the URL in your HTTP Request node to the exact final destination so it doesn't redirect at all. Double-check your subdomains and trailing slashes.

If Akeneo actually forces the redirect on purpose, split the process: turn 'Follow Redirects' off in the node settings, grab the new URL it returns, and send the file there using a second HTTP Request node.
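One way to sketch that split in plain JavaScript (the URL handling only; the endpoint below is a placeholder, not Akeneo's actual API):

```javascript
// Sketch of manual redirect handling (not n8n internals).
// With "Follow Redirects" off, the first response tells you where to go;
// you then rebuild the multipart body and POST it to that URL yourself.

function resolveRedirect(originalUrl, status, locationHeader) {
  // Only 3xx responses carrying a Location header are redirects.
  if (status < 300 || status >= 400 || !locationHeader) return null;
  // Location may be relative, so resolve it against the original URL.
  return new URL(locationHeader, originalUrl).toString();
}

// Example: a 301 that just adds a trailing slash.
const next = resolveRedirect(
  'https://example.com/api/rest/v1/media-files',
  301,
  '/api/rest/v1/media-files/'
);
console.log(next); // https://example.com/api/rest/v1/media-files/
```

The key point is that the second request has to build a fresh body, since the first stream is already spent.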

Usually fixing the base URL sorts it out, from what I've seen.

How to Upload a File in n8n
This video walks through handling and troubleshooting binary file uploads across different nodes, if you want to double-check your exact parameter mapping.

Hi, thanks for the quick and helpful response!

When I turn off redirects, I still get the same error: “Property "file" is required.”

Furthermore, when I send the request using Postman to the same endpoint with “follow redirects” turned off, it still works.

@Casey_Tokarchuk
Since you're on Docker, n8n defaults to saving binary files to disk and streaming them out to save memory. But streaming drops the Content-Length header and sends the file in chunks instead, and Akeneo's API rejects chunked requests and drops the payload entirely.

Postman works because it calculates the file size first.

The easiest fix is adding N8N_DEFAULT_BINARY_DATA_MODE=default to your Docker env vars and restarting the container. That forces n8n to hold the file in memory, calculate the exact size, and send the header just like Postman does.
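To illustrate what that setting buys you: with the whole file in a Buffer, the byte count is known before the request goes out, so an exact Content-Length header can replace chunked Transfer-Encoding. A minimal sketch (plain Node.js; header values are examples only):

```javascript
// Illustration only: holding the file in a Buffer means its size is
// known before the request is sent, so an exact Content-Length header
// can replace chunked Transfer-Encoding. Header values are examples.
const payload = Buffer.from('fake image bytes');

const headers = {
  'Content-Type': 'application/octet-stream',
  'Content-Length': String(payload.length), // known up front, no chunking
};

console.log(headers['Content-Length']); // "16"
```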

That usually fixes the Akeneo upload instantly, from what I've seen. Just watch your RAM if you start pushing massive video files through your workflows later on.

Stable YouTube Uploads with HTTP Requests
This video shows a custom code workaround for uploading really huge files, if you ever hit those memory limits.

You said by default it streams, so should we be switching N8N_DEFAULT_BINARY_DATA_MODE from default to filesystem? According to this link, filesystem allows larger files.

Hi @Casey_Tokarchuk
Hope you’re doing well.
This usually happens because of a redirect dropping the multipart body along the way.
If there's a redirect happening, grab the final URL from the Location header and send the request directly to that URL instead of relying on Follow Redirects. That way the file field won't get lost in the process. Also double-check that your Form-Data field name is exactly file; it has to match perfectly, and even a small typo there will break it.

The main fix @Casey_Tokarchuk is adding responseFormat: "file" to your HTTP Request2 node’s options, matching what HTTP Request3 already has. This will make both small and large files work consistently.

Same error when I switch responseFormat to “file”

It's already set to default. Was wondering if we need to switch to filesystem?

In n8n v2 they completely removed the old memory mode; the system forces everything to the filesystem and streams files instead. So my previous env var fix won't actually work for you.

What worked was converting the data to Base64 before sending

This is the Code node I had to enter after getting the image response. Not sure if this is the best way, but it works. The root of the issue is that Akeneo does not support streaming in API calls; does anyone have a better workaround than this?
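Since the actual snippet didn't come through in the post, here's a hedged sketch of what a Base64 conversion along these lines might look like (plain Node.js, not the poster's code; the file field name is an assumption):

```javascript
// Hedged sketch, not the poster's actual Code node: take the binary as
// a Buffer and emit it as a Base64 string the HTTP Request node can
// put in a JSON or form body. The "file" field name is an assumption.
function toBase64Field(imageBuffer) {
  return {
    json: {
      file: imageBuffer.toString('base64'),
    },
  };
}

// Tiny stand-in for real image bytes (the JPEG magic number).
const item = toBase64Field(Buffer.from([0xff, 0xd8, 0xff]));
console.log(item.json.file); // "/9j/"
```

Because the whole Buffer is materialized before encoding, the resulting string has a known length and sidesteps the streaming issue entirely.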

Instead of pushing large Base64 payloads directly to Akeneo, I would first upload the file to an external storage service such as S3 or another object storage solution. After that, I would only send the public or signed URL to Akeneo, assuming the endpoint supports referencing external files.
Hope this helps @Casey_Tokarchuk