I am retrieving a large number of items from Action Network (using pagination), then splitting the pages into individual items and writing them into a spreadsheet (one row per item).
Unfortunately, I am running out of memory. If I understand correctly, this is because N8N tries to retrieve the entire dataset (all pages) at once.
I am sure there is a smarter way to do that, e.g. processing one page at a time, but I can’t figure out how to do it. Could someone give me a starting point?
Uh, this is annoying. I can’t use HTTP filters, because I truly want all the data.
I was hoping that since there is already an automation to handle pagination, this would be as simple as telling N8N to process pages one at a time instead of merging everything into one giant batch :-(.
I guess writing the last retrieved page into some datastore and resuming from there would work, but this feels very hacky. For example, how do I tell N8N to re-run the workflow until I’ve received all the data?
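For what it's worth, here is a minimal Python sketch of that "hacky" checkpoint idea, outside of N8N, just to make the shape of it concrete: persist the last completed page number to a small datastore (a JSON file here), process exactly one page per run, and let the caller re-run until nothing comes back. The `fetch_page` function and the `checkpoint.json` filename are hypothetical stand-ins, not anything Action Network or N8N provides.

```python
import json
import os

CHECKPOINT = "checkpoint.json"  # hypothetical datastore: a tiny JSON file


def load_checkpoint():
    """Return the last completed page number, or 0 when starting fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)["last_page"]
    return 0


def save_checkpoint(page):
    """Persist progress so a re-run resumes instead of starting over."""
    with open(CHECKPOINT, "w") as f:
        json.dump({"last_page": page}, f)


def fetch_page(page):
    """Stand-in for the real paginated API call (hypothetical data)."""
    pages = {1: ["alice", "bob"], 2: ["carol"], 3: []}
    return pages.get(page, [])


def run_once(write_row):
    """Process exactly one page per run; return False when nothing is left,
    so the outer scheduler knows to stop re-triggering the workflow."""
    page = load_checkpoint() + 1
    items = fetch_page(page)
    if not items:
        return False
    for item in items:
        write_row(item)  # e.g. append one spreadsheet row per item
    save_checkpoint(page)
    return True
```

The "re-run until done" part then lives outside the worker: something keeps calling `run_once` until it returns `False`, which only one page ever being held in memory at a time.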
I am not sure I completely understand your scenario, but I had a case where I needed to retrieve a large number of items from Pimcore. I kept the whole process in a loop until I received an empty return body from the HTTP Request node…
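The loop-until-empty pattern described above can be sketched like this (a plain Python illustration, not N8N configuration; `fetch_page` and its `per_page` parameter are invented stand-ins for the real HTTP call):

```python
def fetch_page(page, per_page=2):
    """Stand-in for the HTTP Request node. A real call would hit the API
    with some page/offset parameter; the names here are assumptions."""
    items = list(range(5))  # pretend the API holds 5 items in total
    start = (page - 1) * per_page
    return items[start:start + per_page]


def fetch_all():
    """Request one page at a time and stop at the first empty response,
    so only the current page is held while it is being processed."""
    results = []
    page = 1
    while True:
        batch = fetch_page(page)
        if not batch:          # empty return body -> no more pages
            break
        results.extend(batch)  # in a workflow, write this batch out here
        page += 1
    return results
```

The key point is that each iteration's batch can be written to the spreadsheet (or other destination) before the next request is made, instead of accumulating every page first.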