I’ve had workflows that seem to remember records from previous runs when passing through the Loop node. On a second or third test run, the loop picks up those old records (which came from a Google Sheets lookup) and iterates through them as well. I’m assuming it’s some kind of memory, but the iteration count and found-item count end up much larger than the actual row count.
I was thinking there must be a way to clear this memory. Sometimes it also affects subsequent Google Sheets lookups: a new lookup should replace the previous data, not carry it along and append to it.
What is the error message (if any)?
None
Please share your workflow
Google Sheets looks up rows (for example, 10 rows) -> Loop (which can somehow iterate past 10)
Share the output returned by the last node
Usually there is no final output, just a row manipulation of some sort.
Information on your n8n setup
n8n version:
1.81.4
Database (default: SQLite):
SQLite
n8n EXECUTIONS_PROCESS setting (default: own, main):
main
Pinned data only makes sense in a testing environment; it is ignored in production. You can use Code nodes instead.
But I suspect the error here is a logic issue, especially given your note that the number of loop executions exceeds the number of rows. Having some data would help to figure this out.
Could you pin the outputs of the AI and Google Sheets reading nodes and repost the workflow?
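For example, a Code node can emit fixed test rows, which (unlike pinned data) behave the same in testing and production. This is a minimal sketch; the field names are hypothetical placeholders:

```javascript
// Sketch of an n8n Code node ("Run Once for All Items" mode) that
// returns fixed test rows instead of relying on pinned data.
// The field names (name, email) are hypothetical placeholders.
function makeTestItems() {
  const rows = [
    { name: "Alice", email: "alice@example.com" },
    { name: "Bob", email: "bob@example.com" },
  ];
  // n8n expects each item wrapped as { json: { ... } }
  return rows.map((row) => ({ json: row }));
}

// In the actual Code node, the body would simply be:
//   return makeTestItems();
```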
OK, pinned data aside, I’ve seen this happen before, and I don’t think I can replicate it. This was a general question about how to flush memory carried in from upstream in a workflow.
I’ve tried a Code node and it didn’t help. The “No Operation, do nothing!” node was another attempt at clearing this.
Since it’s a general operations question: what would you recommend for moving forward without the old records?
I will take a look later today and if I find it’s happening, I can post some more info. Since this was some time ago, I don’t remember much about this.
Update: Yeah, I can’t even find that workflow now. But the question remains.
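As a general-purpose workaround (not a fix for the underlying cause, which we couldn’t reproduce), a Code node placed before the Loop can drop any items that repeat a key already seen, so carried-over records are not iterated twice. This is a sketch; the key name `row_number` is an assumption, so adjust it to whatever uniquely identifies a row in your sheet:

```javascript
// Sketch: deduplicate incoming items by a key before a Loop node,
// so records carried over from an earlier lookup are not re-iterated.
// The key "row_number" is an assumption; use any field that uniquely
// identifies a row in your data.
function dedupeByKey(items, key) {
  const seen = new Set();
  return items.filter((item) => {
    const value = item.json[key];
    if (seen.has(value)) return false; // drop repeated rows
    seen.add(value);
    return true;
  });
}

// In an n8n Code node ("Run Once for All Items"), the body would be:
//   return dedupeByKey(items, "row_number");
```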