The idea is:
I propose adding a node designed to free memory in n8n workflows after data has been processed. This node would reduce memory usage, help avoid out-of-memory errors, and let users explicitly clear intermediate data from memory once it is no longer needed.
My scenario:
While use cases may vary, my primary need comes from developing a high-volume data synchronization framework for my industry. The framework synchronizes data daily (or on demand) from multiple sources into a PostgreSQL database for AI-driven queries and automation.
One of the biggest problems I'm facing is that certain HTTP API requests return large datasets that simply cannot be filtered server-side before fetching. As a result, I need to fetch the full dataset, process it in n8n, and then filter it before inserting it into the database. This creates huge memory usage, including:
- The initial API response, which can be up to 40MB of JSON.
- The processed data set, which then adds around 15MB of additional data.
- Additional memory held while building and executing the database insert queries.
Currently, I rely on complex workarounds such as looping, splitting executions into sub-workflows, and managing execution state manually. These approaches are cumbersome and hard to organize, especially without folder-based sorting for workflows. If a dedicated node existed to explicitly clear a node's data from memory at specific points in a workflow, these workarounds would become unnecessary, leading to more efficient workflow execution.
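To illustrate the kind of cleanup that currently has to be done by hand: a common workaround is a Code node that strips large fields from each item so that downstream nodes only reference a much smaller payload. A minimal sketch of that idea (the field names `rawPayload`, `id`, and `status` are hypothetical; the item shape follows n8n's `{ json: ... }` convention):

```javascript
// Keep only the listed keys on each item's json payload, dropping the
// rest so the large original data can be garbage-collected.
function pruneItems(items, keepKeys) {
  return items.map((item) => {
    const pruned = {};
    for (const key of keepKeys) {
      if (key in item.json) pruned[key] = item.json[key];
    }
    return { json: pruned };
  });
}

// Example: a large API response reduced to the two fields actually needed
// for the database insert. "rawPayload" stands in for the bulky raw data.
const items = [
  { json: { id: 1, status: "ok", rawPayload: "x".repeat(1000) } },
  { json: { id: 2, status: "error", rawPayload: "y".repeat(1000) } },
];
const slim = pruneItems(items, ["id", "status"]);
// slim[0].json is now { id: 1, status: "ok" }
```

A dedicated memory-clearing node would make this explicit pruning step unnecessary and, more importantly, could release the upstream node's stored output as well, which a Code node cannot do.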
I think it would be beneficial to add this because:
Memory-related issues, including out-of-memory errors, have been a persistent challenge for n8n users over the years, as evidenced by discussions in the forums. Implementing a memory management node would provide a direct and intuitive solution, improving workflow performance and stability across a wide range of use cases.