How to efficiently extract large XML files (120MB+) in n8n Cloud?

Hi everyone :waving_hand: I'm working on a workflow to process Dataset XML files (~122 MB each).

The download and decompression steps work fine, but the workflow fails when trying to convert the binary XML into plain text.

:wrench: Workflow segment

  1. Download / Decompress

    → The file apc18840407-20241231-86.xml is successfully decompressed.

xml_0
File Name: apc18840407-20241231-86.xml
File Extension: xml
Mime Type: application/xml
File Size: 122 MB

  2. Extract from File

    → Converts the binary XML to plain text (Output: xmlText).

  3. Split XML (Code JS)

    → Splits the XML into … blocks so each item can be processed individually.
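For reference, the Split XML step is sketched below as a plain function. The tag name `record` is only a placeholder for the actual block element, and `xmlText` is assumed to be the string produced by the Extract from File node:

```javascript
// Sketch of the "Split XML" Code-node logic as a standalone function.
// "record" is a placeholder tag name — substitute the real block element.
function splitXmlBlocks(xmlText, tag = 'record') {
  // Lazily match each <tag ...>…</tag> block; [\s\S] also spans newlines.
  const re = new RegExp(`<${tag}\\b[\\s\\S]*?</${tag}>`, 'g');
  return xmlText.match(re) ?? [];
}

// Inside an n8n Code node (Run Once for All Items) this would become:
//   const blocks = splitXmlBlocks($input.first().json.xmlText);
//   return blocks.map((block) => ({ json: { block } }));
```

Note this still requires the whole 122 MB string in memory at once, which is likely part of the problem described below.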

:warning: Problem

The Extract from File step fails with this message:

“The execution was interrupted, so the data was not saved.”

It looks like a resource limitation (time, storage, or memory), since the XML file is quite large.

:red_question_mark: Question

Is there a more efficient way to convert a large XML file into text without hitting resource limits in n8n Cloud?

Would it be better to read the XML in a streaming fashion, or is there a recommended pattern or node setup for handling large XML files (100 MB+)?

Any advice or practical example would be greatly appreciated :folded_hands: