Hi everyone,
I'm working on a workflow to process Dataset XML files (~122 MB each).
The download and decompression steps work fine, but the workflow fails when trying to convert the binary XML into plain text.
Workflow segment
- Download / Decompress
  → The file apc18840407-20241231-86.xml is successfully decompressed.
xml_0
File Name: apc18840407-20241231-86.xml
File Extension: xml
Mime Type: application/xml
File Size: 122 MB
- Extract from File
  → Converts the binary XML to plain text (Output: xmlText).
- Split XML (Code JS)
  → Splits the XML into … blocks so each item can be processed individually.
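For context, the Split XML Code node does something along these lines (a simplified sketch, not the exact code; `record` is a placeholder for the actual element name, and note that the entire 122 MB string is held in memory at once):

```javascript
// Simplified sketch of the split step: scan one big XML string and
// collect per-record blocks by matching opening/closing tags.
// NOTE: `record` is a placeholder tag name, not the real element.
function splitXml(xmlText, tag) {
  const open = `<${tag}`;      // also matches tags with attributes, e.g. <record id="1">
  const close = `</${tag}>`;
  const blocks = [];
  let pos = 0;
  while (true) {
    const start = xmlText.indexOf(open, pos);
    if (start === -1) break;
    const end = xmlText.indexOf(close, start);
    if (end === -1) break;     // truncated final block: stop
    blocks.push(xmlText.slice(start, end + close.length));
    pos = end + close.length;
  }
  return blocks;
}

// In an n8n Code node the return shape would be roughly:
// return splitXml($json.xmlText, 'record').map(b => ({ json: { block: b } }));
```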
Problem
The Extract from File step fails with this message:
"The execution was interrupted, so the data was not saved."
It looks like a resource limitation (time, storage, or memory), since the XML file is quite large.
Question
Is there a more efficient way to convert a large XML file into text without hitting resource limits in n8n Cloud?
Would it be better to read the XML in a streaming fashion, or is there a recommended pattern or node setup for handling large XML files (100 MB+)?
Any advice or practical example would be greatly appreciated!