I have a workflow that has been running for weeks, and it suddenly stopped working with a 413 error. The issue comes from the Elasticsearch node. I have checked the memory availability on the server and we have plenty, so do you have any clue why this may be happening? Please find the workflow attached.
The GA and Google Sheets nodes work just fine. Any help is much appreciated.
Thanks for your quick answer. Checking your suggestions:
I am pushing around 700 rows of data every day. I tried dividing them into chunks, but the error still appears.
Elasticsearch allows you to push and pull up to 10,000 rows at a time.
It seems like the suggested solutions may not be effective for this purpose, or perhaps I am interpreting them the wrong way. Is there any other idea I could test?
What matters is less how many rows get added and more the combined size of the whole request payload. HTTP 413 means "Payload Too Large", i.e. the server is rejecting the request body because it exceeds its configured size limit. So 10k small rows will probably work (because the total stays below the limit), but a single row with a lot of data could fail (because it exceeds it).
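Not sure it fits your exact setup, but one workaround is to batch by estimated payload size rather than by row count. Here is a minimal TypeScript sketch of the idea; the size thresholds and the row shape are assumptions for illustration, not values from your workflow:

```typescript
// Sketch: split rows into batches by estimated payload size instead of by row count,
// so that no single request exceeds the server's body-size limit.
// The 1 MB / 5 MB thresholds and the row shape below are illustrative assumptions.

type Row = Record<string, unknown>;

function batchBySize(rows: Row[], maxBytes = 5 * 1024 * 1024): Row[][] {
  const batches: Row[][] = [];
  let current: Row[] = [];
  let currentBytes = 0;

  for (const row of rows) {
    // Rough size estimate: the UTF-8 length of the JSON-serialized row.
    const rowBytes = new TextEncoder().encode(JSON.stringify(row)).length;

    // Close the current batch if adding this row would push it over the limit.
    if (current.length > 0 && currentBytes + rowBytes > maxBytes) {
      batches.push(current);
      current = [];
      currentBytes = 0;
    }

    current.push(row);
    currentBytes += rowBytes;
  }

  if (current.length > 0) batches.push(current);
  return batches;
}

// Example: ~700 daily rows of ~10 KB each (~7 MB total) split against a 1 MB cap.
const rows: Row[] = Array.from({ length: 700 }, (_, i) => ({
  id: i,
  payload: "x".repeat(10_000),
}));

for (const batch of batchBySize(rows, 1024 * 1024)) {
  console.log(`batch of ${batch.length} rows`); // each batch becomes its own request
}
```

Each resulting batch would then be sent as its own Elasticsearch request, so even if one day's data is heavier than usual, no single request grows past the limit.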