I am currently setting up a workflow to process 10,000 items using the Salesforce Triggered Send feature. However, I am facing a critical issue where the workflow behaves completely differently depending on how it’s triggered.
My Setup:
Main Workflow: Schedule Trigger → Fetch Data → Call Sub-workflow.
Sub-Workflow: Receives the data → Uses a Loop node (Batch of 50) → Sends to Salesforce API → Updates Google Sheets.
Note: I am trying to process the 10,000 items in chunks of 2,500.
The Problem:
Manual Execution (Execute Workflow button): When I run it manually, it manages to process the 2,500 items. It is extremely heavy and sometimes throws memory-related errors, but the Salesforce sending actually happens.
Active Execution (Schedule Trigger): When I activate the workflow and let the Schedule Trigger run it automatically, absolutely nothing is sent. The execution log shows an “Error” almost instantly (e.g., “Crashed in 38ms”), with no red error nodes on the canvas. It seems the server completely crashes before it even starts processing the sub-workflow.
My Questions:
Why does the active Schedule Trigger fail completely and instantly, whereas the manual execution can at least handle the 2,500 items? Is there a difference in how memory/payloads are handled between manual and production (active) runs?
What is the best practice/architecture in n8n for handling a massive payload (10,000 items) for Salesforce bulk sending without crashing the main workflow (OOM)?
Are you using a self-hosted instance? Regular/queue mode?
Did you also publish the sub workflow? Without that, you will get an Unpublished error.
Use a sub-workflow instead of the Loop node; don't use the Loop node at all, as it's very memory-hungry.
Best architecture:
Main WF: Set initial config (first page to fetch, batch size) >> Call Sub WF >> Wait until it finishes and check how many items were processed. If fewer than the batch size defined in the config, Exit; otherwise call the Sub WF again with the next page (loop back to the node).
Sub WF (pagination): Get the page from the source >> process it (send to Salesforce, update GSheets, etc.) >> return to the main WF only how many items were processed and the next page number (or next link, depending on the API).
Here is how it will look with an example.
Config:
batch_size = 100
page_num = 1
Calling Sub WF (batch_size=100, page_num=1). It returned: processed_items=100,next_page=2. Proceed as we got the same number of results as requested.
Calling Sub WF (batch_size=100, page_num=2). It returned: processed_items=95,next_page=3.
Exit, as we got only 95 results (fewer than the requested 100).
That logic will consume very little memory, as the main “heavy” part is within the sub workflow, and the memory flushes after each Sub workflow execution ends. As we pass back only metrics, we may process hundreds of thousands of items.
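The control loop above can be sketched in plain JavaScript (the same logic you'd wire up with an IF node looping back to the Execute Workflow node). `callSubWorkflow` and `fakeSubWorkflow` are hypothetical stand-ins, not real n8n APIs:

```javascript
// Sketch of the main-WF control loop, assuming the sub-WF returns
// only metrics: { processed_items, next_page }.
function runPaginated(callSubWorkflow, batchSize) {
  let page = 1;
  let total = 0;
  while (true) {
    // Only counters come back; the heavy item data stays in the sub-workflow
    // and is freed when each sub-workflow execution ends.
    const { processed_items, next_page } = callSubWorkflow({
      batch_size: batchSize,
      page_num: page,
    });
    total += processed_items;
    // A short page means the source is exhausted: exit the loop.
    if (processed_items < batchSize) break;
    page = next_page;
  }
  return total;
}

// Fake data source with 295 items, paged 100 at a time,
// mirroring the worked example above (pages of 100, 100, 95).
const TOTAL_ITEMS = 295;
function fakeSubWorkflow({ batch_size, page_num }) {
  const start = (page_num - 1) * batch_size;
  const processed = Math.min(batch_size, Math.max(0, TOTAL_ITEMS - start));
  return { processed_items: processed, next_page: page_num + 1 };
}

console.log(runPaginated(fakeSubWorkflow, 100)); // -> 295
```

The key design point is that the main workflow never holds item payloads, only two small numbers per iteration, which is why memory usage stays flat no matter how many pages there are.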
But disable the option to save successful executions for the sub WF and keep only the failed ones; otherwise your database may fill up quickly.
The instant crash you’re seeing is likely a memory-related error. n8n handles manual and production executions differently; when triggered automatically, the system tries to process that massive 10,000-item payload all at once, which can hit the memory limit immediately.
To keep things stable, I’d recommend moving away from “bulk” fetching:
Use Pagination: Instead of grabbing 10,000 items in one go, use the pagination settings in your fetch node to pull data in smaller, manageable chunks.
Split Out Node: Use the Split Out node to break the data into individual items or smaller batches, which is much easier on the system’s memory.
Queue Mode: If you’re self-hosting, look into Queue Mode to help distribute these heavy payloads across multiple workers.
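As a minimal sketch of the batching idea from the list above: inside a Code node you can split one large array into Salesforce-sized chunks (50 items, matching the batch size in the original setup) instead of pushing the whole payload at once. `toBatches` is a hypothetical helper, not a built-in n8n function:

```javascript
// Split one large array into fixed-size batches before sending,
// so each downstream call only sees a small slice of the data.
function toBatches(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Example: 120 items in batches of 50.
const batches = toBatches(Array.from({ length: 120 }, (_, i) => i), 50);
console.log(batches.length); // 3 batches: 50, 50, 20
```

This only helps with per-call payload size, though; to keep total memory down you still want pagination at the fetch step so the full 10,000 items are never loaded at once.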
It’s all about batching! Have you checked if the API you’re fetching from supports pagination?