Hello,
I’m using the guide from here: Creating triggers for n8n workflows using polling ⏲ – n8n Blog to create a simple flow that checks the latest entry in a Postgres table.
The query itself has an ORDER BY id DESC and LIMIT 1 (it fetches the latest id from the table).
I’m using an Interval node (set to 1-second polling) as the trigger and a Function node like the one from the blog post:
```javascript
const new_items = [];
// Get static data stored with the workflow
const data = this.getWorkflowStaticData("node");
data.ids = data.ids || [];
// Note: the blog's loop assumes items are sorted oldest-first, so walking
// backwards visits the newest entries first and can stop at the first
// already-seen id. With ORDER BY id DESC LIMIT 1 only one item arrives,
// so the ordering does not matter here.
for (let i = items.length - 1; i >= 0; i--) {
  // Check if this item has already been seen
  if (data.ids.includes(items[i].json.ID)) {
    break;
  } else {
    // New item: collect it for output
    new_items.push({
      json: {
        id: items[i].json.ID,
        name: items[i].json.Name,
        email: items[i].json.Email,
      },
    });
  }
}
// Remember the ids seen in this run
data.ids = items.map((item) => item.json.ID);
// return new items
return new_items;
```
I’m sending the output as a simple HTTP request.
From what I observe, if I just turn on the workflow and nothing is inserted into the SQL table, I only get one HTTP request (which means the function works as intended: no new entries in SQL, no new output from the function).
However, when I do insert entries into the table, I sometimes get more than one HTTP request (with the same id) at the end of the flow. This seems to suggest that the function doesn’t always recognise the old entries as duplicates.
Could this be related to the short interval and the fact that getWorkflowStaticData isn’t always updated in time?
Hi @gabbello, welcome to the community!
I am sorry to hear you’re having trouble.
A short polling interval could indeed lead to the behaviour you have reported. Static data is a part of your workflow data. Now if your workflow runs for longer than your polling interval, the update might not have been persisted to the internal n8n database by the time the next execution starts.
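The suspected race can be sketched in plain Node.js (this is a standalone illustration, not n8n code): two overlapping executions both read the same stale ID list before either write has been persisted, so both treat the same row as new.

```javascript
// De-duplication against a snapshot of the persisted static data.
function poll(staticData, rows) {
  const newItems = [];
  for (const row of rows) {
    if (!staticData.ids.includes(row.id)) {
      newItems.push(row); // not seen before -> emit
    }
  }
  return newItems;
}

// State as n8n last persisted it.
const persisted = { ids: [41] };
// Latest row from SELECT ... ORDER BY id DESC LIMIT 1.
const rows = [{ id: 42 }];

// Execution 2 starts before execution 1 has saved its update,
// so both read a copy of the same stale persisted state.
const run1 = poll({ ids: [...persisted.ids] }, rows);
const run2 = poll({ ids: [...persisted.ids] }, rows);

// Both executions emit id 42 -> two HTTP requests with the same id.
console.log(run1.length, run2.length); // 1 1
```

With a longer interval, execution 1 finishes and persists `ids: [42]` before execution 2 reads it, and the duplicate disappears.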
So if you require very short polling intervals you might want to consider a different data storage to handle the de-duplication. When using an SQL database you could consider simply adding a column such as “Processed by n8n” to your table and query/update it at the beginning of your workflow.
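As a rough sketch of that approach (the `customers` table and `processed_by_n8n` column names are placeholders, not your actual schema), the queries a Postgres node could run might look like this:

```javascript
// Hypothetical queries for flag-based de-duplication; table and column
// names are assumptions for illustration only.

// Option A, two steps: first select the unprocessed rows ...
const selectNew = `
  SELECT id, name, email
  FROM customers
  WHERE processed_by_n8n = FALSE
  ORDER BY id;
`;

// ... then mark them as handled. Option B does both in one statement:
// UPDATE ... RETURNING hands back the rows it just flagged, which keeps
// the whole de-duplication step in a single atomic query.
const markDone = `
  UPDATE customers
  SET processed_by_n8n = TRUE
  WHERE processed_by_n8n = FALSE
  RETURNING id, name, email;
`;
```

The single `UPDATE … RETURNING` variant has the advantage that two overlapping executions can’t both pick up the same row, because Postgres serialises the updates on those rows.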
Thanks a lot for the fast reply. Yep, I’m aware fast polling intervals are not ideal, and thanks for the suggested solution, but I don’t have access or a way to edit the DB structure for this flow.
Is there a way to ensure that a new flow doesn’t start until the previous one finishes (I understand this might mean longer than 1-second intervals)? I’m very new to n8n; maybe there is a way to set up a kind of loop so that at the end of the workflow I trigger the next execution?
So there are a few possible workarounds.
The aforementioned logic could still work with a separate table or even an external database (if you can’t modify the original one), though the latter requires separate queries.
Is there a way to ensure that a new flow doesn’t start until the previous one finishes (I understand this might mean longer than 1-second intervals)?
This idea specifically can be implemented, but it is a bit hacky. You’d essentially need to use an undocumented API used by the n8n UI to fetch your currently running workflows and check whether the current workflow is already running. @Jon recently shared an example for this here: Check if the workflow is running through the API - #2 by Jon
So your workflow would start, then check whether it’s already running more than once. Combined with an IF node, you could then decide whether or not to continue with the current execution.
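The decision logic for that IF node could look roughly like this (the shape of the executions list is an assumption based on the linked post; the endpoint is undocumented and may change between n8n versions):

```javascript
// Decide whether this execution should proceed, given the list of
// currently running executions fetched from the UI's (undocumented)
// executions endpoint. The { workflowId } shape is assumed for
// illustration, not a stable contract.
function shouldContinue(runningExecutions, workflowId) {
  const mine = runningExecutions.filter((e) => e.workflowId === workflowId);
  // This execution itself appears in the list, so more than one entry
  // means another run of the same workflow is still in progress.
  return mine.length <= 1;
}

// Example: workflow "7" is already running twice -> skip this run.
console.log(shouldContinue(
  [{ workflowId: "7" }, { workflowId: "7" }, { workflowId: "12" }],
  "7",
)); // false
```

An IF node checking the boolean result would then route duplicate runs straight to a NoOp node instead of the HTTP Request node.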