Problem executing Workflow - There might not be enough memory to finish the execution

Hello friends, how are you?!

I created a workflow to update the labels of my WhatsApp Business contacts based on a Google Sheets spreadsheet.

Customers who were served less than 30 days ago receive the ACTIVE customer label, while customers who were served more than 30 days ago receive the INACTIVE customer label.
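In code form, the rule is just a date comparison. A minimal sketch (the lastService field name is illustrative, not from my actual sheet):

```javascript
// Labeling rule: ACTIVE if the last appointment was under 30 days ago,
// INACTIVE otherwise. "lastService" is an illustrative field name.
const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

function labelFor(lastService) {
  const age = Date.now() - new Date(lastService).getTime();
  return age < THIRTY_DAYS_MS ? 'ACTIVE' : 'INACTIVE';
}
```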

My initial idea was to first query the Z-API to fetch my WhatsApp Business contacts. I would then read a Google Sheets spreadsheet containing my clients’ information (including the date of the last consultation) and finally update the labels.

So I built a first workflow around this idea. However, I have a very large number of WhatsApp contacts, and the resulting volume of requests ended up overwhelming n8n.

In an attempt to reduce the number of requests, I reversed the order: I start by reading the clients’ spreadsheet in Google Sheets. Then, using each client’s phone number, I query the Z-API to check whether that contact is among my WhatsApp Business contacts, and if so I update that contact’s labels!

However, unfortunately, the number of requests is still large and n8n keeps crashing.

My question is: how can I further streamline my workflow to avoid these crashes?

While thinking about how to solve this, I added a column to the spreadsheet that records the date of the last synchronization. With that in place, I considered creating a support workflow that searches the n8n Executions, evaluates the most recent failed ones and, based on that, triggers a new run that updates only the contacts that have not yet been synchronized.
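For the “search the Executions” step, n8n’s public REST API exposes a GET /api/v1/executions endpoint with a status filter. A standalone Node.js sketch of that call (inside n8n this would typically be an HTTP Request node instead; the base URL and API key below are placeholders):

```javascript
// Sketch of the API call the "support flow" would make. The
// /api/v1/executions endpoint, its "status" filter, and the
// X-N8N-API-KEY header come from n8n's public REST API; the URL
// and key are placeholders.
const N8N_URL = 'http://localhost:5678';
const API_KEY = 'YOUR_N8N_API_KEY';

async function latestFailedExecutions(limit = 20) {
  const res = await fetch(
    `${N8N_URL}/api/v1/executions?status=error&limit=${limit}`,
    { headers: { 'X-N8N-API-KEY': API_KEY } },
  );
  if (!res.ok) throw new Error(`n8n API returned ${res.status}`);
  // The API answers with { data: [...executions], nextCursor }.
  const { data } = await res.json();
  return data;
}

latestFailedExecutions().then((executions) =>
  console.log(`${executions.length} failed executions found`),
);
```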

Would this be the best solution? Would you have any other suggestions on how to resolve this problem?

Thank you in advance!

Information on your n8n setup

  • n8n version: 1.22.6
  • Database (default: SQLite): SQLite
  • n8n EXECUTIONS_PROCESS setting (default: own, main): Main
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: macOS

I tried to paste the workflow here, but it exceeds the character limit!

Also, I am unable to send the error image!

I receive the message: “Sorry, there was an error uploading Captura de Tela 2024-01-18 às 10.40.10.png. Please try again”

Unfortunately I won’t be able to demonstrate my error because I can’t send either my workflow or the image of the error I receive!

You need to split the dataset into chunks. One approach is to process at most a fixed number of new rows per run and schedule the workflow to repeat every 5 or 10 minutes until everything is done. Another is to use subworkflows to process the chunks, since their memory is released as each one completes. You can find more information here.
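For the first approach, a Code node placed right after the Google Sheets node could keep only the rows that still need syncing and pass at most a fixed number of them downstream on each scheduled run. A minimal sketch, assuming a lastSync column that is empty for rows not yet synchronized (the column name is illustrative):

```javascript
// n8n Code node: keep only unsynchronized rows and cap how many go
// downstream per run. "lastSync" is an illustrative column name.
const BATCH_SIZE = 200;

// Rows whose lastSync column is still empty have not been processed yet.
const pending = $input.all().filter((item) => !item.json.lastSync);

// Pass at most BATCH_SIZE rows on; a Schedule Trigger re-runs the
// workflow every few minutes, so the rest are picked up on later runs.
return pending.slice(0, BATCH_SIZE);
```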

And I’m not sure why the image wouldn’t upload. I wonder if it’s because of the filename having a special character…


I only managed to upload the image today!

Would sending a block of 200 rows, waiting 5 to 10 minutes, sending another block, and continuing like this until the end be enough?

If so, how could I split the requests into blocks of 200 rows in this workflow, for example?
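For reference, n8n’s built-in Split In Batches node (called Loop Over Items in newer versions) does exactly this when its batch size is set to 200, and a Wait node supplies the pause between blocks. The same grouping can also be done manually in a Code node so each chunk can be handed to a subworkflow via the Execute Workflow node. A sketch (field names are illustrative):

```javascript
// n8n Code node: group incoming rows into chunks of 200 so each chunk
// can be passed to a subworkflow, whose memory is freed when it ends.
const CHUNK_SIZE = 200;
const items = $input.all();
const chunks = [];

for (let i = 0; i < items.length; i += CHUNK_SIZE) {
  // One output item per chunk, carrying its rows in a "rows" field.
  chunks.push({
    json: { rows: items.slice(i, i + CHUNK_SIZE).map((item) => item.json) },
  });
}

return chunks;
```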
