Polling SQL -> New data Vs old data

Dear Community,

I have to say that I've not been using n8n for very long, but it has been extremely wonderful.

I have a small question regarding the "looking for new data" function found in the n8n polling database guide here.

Everything works fine, but the first time I run the job it tries to push all 100 rows of data through my workflow, which I don't want. It has sent hundreds of Slack notifications, and it's still going after running for 14 minutes.

I've limited the number of items to 100 in the SQL query, because sometimes we do expect to see that many new rows.

But is there any way I can just store those 100 rows as old data and only look at the new ones? Perhaps end the job at the function?


But is there any way I can just store those 100 rows as old data and only look at the new ones? Perhaps end the job at the function?

Hi @Josh-Ghazi, I'm afraid I am not quite sure I understand this requirement. So you don't want to store any existing data, just the 100 items that came fresh from your Microsoft SQL node?

This looks like what your code is already doing with data.ids = items.map((item) => item.json.ID);. Here, items is the incoming data from your Microsoft SQL node, and data is the object holding the static data stored with your workflow.
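For reference, this dedupe pattern can be sketched outside n8n. This is a minimal simulation in which staticData stands in for the object returned by getWorkflowStaticData and items for the rows coming from the Microsoft SQL node; the helper name filterNewItems is made up for illustration.

```javascript
// Sketch of the "old vs new data" filter from the polling guide.
// `staticData` persists between executions; `items` is the current batch.
function filterNewItems(staticData, items) {
  const oldIds = staticData.ids || [];          // IDs seen on previous runs
  const newItems = items.filter((item) => !oldIds.includes(item.json.ID));
  staticData.ids = items.map((item) => item.json.ID); // remember current batch
  return newItems;                              // only unseen rows continue
}

// Pre-seeding the static data with the existing IDs is what prevents the
// very first run from treating all 100 rows as new.
const staticData = { ids: [1, 2] };             // pretend rows 1 and 2 were already seen
const items = [1, 2, 3].map((ID) => ({ json: { ID } }));
const fresh = filterNewItems(staticData, items); // only row 3 is new
console.log(fresh.map((i) => i.json.ID));
```

In an actual Function node you would get staticData via this.getWorkflowStaticData instead of passing it in.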

Just out of curiosity, is there a reason you'd want to keep this piece of information in the static data part of your workflow rather than directly in your database? It seems easier to simply add a "processed by n8n" column to your existing table, which you can use directly in your query (and update after sending a Slack message).
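To illustrate the column-based approach, here is a small simulation in plain JavaScript; the table contents and the processedByN8n flag name are assumptions, and the SQL equivalents are noted in comments.

```javascript
// Simulation of the "processed by n8n" column approach.
const table = [
  { ID: 1, processedByN8n: true },   // already handled on an earlier run
  { ID: 2, processedByN8n: false },
  { ID: 3, processedByN8n: false },
];

// Equivalent of: SELECT * FROM myTable WHERE processedByN8n = 0
const toNotify = table.filter((row) => !row.processedByN8n);

for (const row of toNotify) {
  // ... the Slack message would be sent here ...
  // Equivalent of: UPDATE myTable SET processedByN8n = 1 WHERE ID = <row.ID>
  row.processedByN8n = true;
}

console.log(toNotify.length);        // only the two unprocessed rows were handled
```

The query itself never returns already-processed rows, so no state needs to live inside the workflow at all.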


A very good and practical solution. I will do this so that we don't have to keep rebuilding the static data every time.


Just an update: I created another table in SQL to track whether or not the notification has already been sent, and the workflow checks this data and updates it accordingly.

The best part about this is that I can set rules so that if the workflow could not complete successfully for a particular row, it will not update the notified field; the next time the workflow runs, it will retry the failed rows until they succeed.
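That retry behaviour can be sketched as follows. This is a made-up simulation (the runOnce helper, the notified flag, and the flaky sender are all illustrative names): the flag is only set after the notification step succeeds, so a failed row is picked up again on the next execution.

```javascript
// The notified flag is updated only when the per-row work succeeds,
// so failed rows are retried automatically on the next run.
function runOnce(rows, sendNotification) {
  for (const row of rows.filter((r) => !r.notified)) {
    try {
      sendNotification(row); // e.g. the Slack step
      row.notified = true;   // only reached on success
    } catch (err) {
      // leave row.notified false; the next execution retries this row
    }
  }
}

const rows = [{ ID: 1, notified: false }, { ID: 2, notified: false }];
let attempts = 0;
// Fails for row 2 on the first attempt only, to simulate a transient error.
const flaky = (row) => { if (row.ID === 2 && attempts++ === 0) throw new Error('Slack down'); };

runOnce(rows, flaky);                     // row 2 fails and stays un-notified
console.log(rows.map((r) => r.notified));
runOnce(rows, flaky);                     // row 2 is retried and succeeds
console.log(rows.map((r) => r.notified));
```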

Thanks to @MutedJam for all the help!