Understanding Supabase row creation and unique IDs

Describe the problem/error/question

I’m processing RSS feeds and storing each item as a new row in a Supabase table.

On the left is one example of a normalized feed, with a guid column that has unique values for each item. I’ve confirmed that no row exists in the table with the guid of the first item. Yet Supabase throws an error for duplicate entries.

I expected the Supabase node to process one item of the RSS feed at a time. Am I getting this error because all 15 are batched together and there are already-stored items among them? If so, should I use a loop to process the items one at a time? Or is there some other reason I get this error?

Information on your n8n setup

  • n8n version: 1.84.1
  • Database (default: SQLite): Supabase
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Selfhosted on Cloudron
  • Operating system: Debian 22.04

Hello!

If all of your guids are actually unique, there shouldn’t be any problem creating all your rows. Maybe you can pinpoint the problem using a loop and see at which item it breaks?
Did you try re-creating a fresh table and retrying the flow?

Also, you said that “no row exists in the table with the guid for the first item” → if any of the items from your RSS feed has the same guid as a row in your table, it will indeed stop the insert for every item.

What you would want to do is either:

  • Get all the guids in your table and filter the RSS items against those already present (sketched in code right after this list).
  • Or loop over every item and configure the Supabase create node with “Continue on Error” so it doesn’t stop the workflow on a duplicate guid.
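
If it helps to see the first option as code, here is a minimal sketch with the supabase-js client. The `items` table, its `guid`/`title` columns, and the environment variables are all assumptions on my side; in n8n the equivalent would be a Supabase “Get Many” node feeding a Filter or Code node:

```typescript
// Sketch of option 1: fetch existing guids, filter, then insert the rest.
// Table and column names are placeholders -- adjust to your schema.
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);

async function insertNewItems(feedItems: { guid: string; title: string }[]) {
  // 1. Fetch the guids already stored in the table.
  const { data: existing, error } = await supabase.from('items').select('guid');
  if (error) throw error;

  const known = new Set((existing ?? []).map((row) => row.guid));

  // 2. Keep only the feed items whose guid is not in the table yet.
  const fresh = feedItems.filter((item) => !known.has(item.guid));

  // 3. Insert what remains; with the duplicates filtered out, the insert succeeds.
  if (fresh.length > 0) {
    const { error: insertError } = await supabase.from('items').insert(fresh);
    if (insertError) throw insertError;
  }
}
```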

Hope this helps! :slight_smile:


Yes, each batch of items (from each of the processed RSS feeds) contains a mix of new items and items that are already in the database.

When I started to build this workflow, I iterated over a JSON file with 15 RSS feeds, sending them one by one into a Read RSS node using a loop, only to learn that the Read RSS node iterates over the JSON by itself.

But if I understand your reply correctly, the Supabase node takes everything in the input and creates just one SQL statement? And since some of the items are already in the database, that causes the single statement to fail? Is that correct?

I also tried using the Update command instead of Create, thinking it would either update or create a row depending on whether the guid is already in the database or not. But that didn’t work either. :slight_smile:

So, I will try one of your solutions. Any suggestion on which one is better? The first leads to more computation in n8n, while the second causes more calls to Supabase. (But I guess the loads are so small that I shouldn’t even bother about which one is “the best”. :slight_smile: )

Thanks!

Yes, I think that’s what it does!
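
To make that concrete: a multi-row insert is a single statement, so one duplicate guid makes Postgres reject every row in it, including the genuinely new ones. Roughly, reusing the `supabase` client and the assumed schema from my earlier sketch:

```typescript
// One call = one INSERT statement containing all rows. If any row violates
// the unique constraint on "guid", the whole statement is rejected and
// none of the rows are inserted.
async function insertBatch() {
  const { error } = await supabase.from('items').insert([
    { guid: 'new-guid-1', title: 'A brand new item' },      // fine on its own
    { guid: 'stored-guid', title: 'Already in the table' }, // duplicate -> everything fails
  ]);
  if (error) console.log(error.message); // unique-constraint violation
}
```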

For the “better” solution, the most rigorous one would be the first, as you are doing the filtering yourself.
The second one is simpler to implement, but it would continue on any error Supabase encounters (not just duplicate problems). You could check the error on the error output and take further action if it is something other than a duplicate issue.
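
For that check, Postgres reports unique-constraint violations with error code 23505, so the test could look like this sketch. The exact shape of the error object on n8n’s error output is an assumption here; adapt the property names to what your error branch actually receives:

```typescript
// Postgres signals a unique-constraint violation with SQLSTATE 23505.
// The property names on n8n's error output are an assumption.
type DbError = { code?: string; message?: string };

function isDuplicateError(error: DbError): boolean {
  return (
    error.code === '23505' ||
    (error.message ?? '').includes('duplicate key value')
  );
}

// Swallow duplicates, escalate anything else.
function handleError(error: DbError): void {
  if (!isDuplicateError(error)) {
    throw new Error(`Unexpected Supabase error: ${error.message}`);
  }
}
```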
But yeah, you shouldn’t worry performance-wise :slight_smile:
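
By the way, regarding the Update command you tried: the update-or-create behaviour you were after is an upsert, and it needs to know which column identifies the conflict. A rough sketch with the supabase-js client, again assuming a hypothetical `items` table with a UNIQUE constraint on `guid`:

```typescript
// Upsert sketch: insert each item, or update the existing row when a row
// with the same guid is already there. This only works if "guid" has a
// UNIQUE constraint, so the database can detect the conflict (assumed schema).
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);

async function upsertItems(feedItems: { guid: string; title: string }[]) {
  const { error } = await supabase
    .from('items')
    .upsert(feedItems, { onConflict: 'guid' });
  if (error) throw error;
}
```

That is the behaviour your Update attempt was reaching for.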
