SQL Error with n8n and Supabase

I am using the latest version of n8n (self-hosted on Hostinger) and am trying to write a few rows to a table in a PostgreSQL database (Supabase). The table has several columns, and three of them are covered by a unique constraint:

ALTER TABLE inn_output.message_dropped
ADD CONSTRAINT key_message_dropped_type_drop_id_user_id UNIQUE (type, drop_id, user_id);

I receive this error in the workflow:

“duplicate key value violates unique constraint ‘key_message_dropped_type_drop_id_user_id’
Key (type, drop_id, user_id)=(news, 387, 1) already exists.”

The strange thing is that I also get this error when the table is completely empty.

I empty the table with the following SQL:

TRUNCATE TABLE inn_output.message_dropped RESTART IDENTITY;

I have now spent three hours debugging and cannot find a solution.

Steps I have tried so far:

1. I used the PostgreSQL node instead of the Supabase node, same error.

2. Then I used the PostgreSQL node and executed a SQL statement directly instead of using the predefined operations, with the same result.

3. I reset the sequence several times, but that didn’t help either.

4. I executed the exact same SQL command in n8n and in Supabase, and it works in Supabase but not in n8n.

Now I’m at my wit’s end. Does anyone else have any useful ideas?

Can you provide your n8n workflow as JSON? Excluding all sensitive data, of course, but preferably with the rows you want to import.

Check how many items are going into your Supabase or Postgres node.
Is each item a row? Or are all rows in one item but there are multiple items?
If it's more than one item, check whether the items themselves contain duplicates. Your node gets executed once for each incoming item, so if you send more than one item, you will most likely trigger the SQL statement multiple times, causing this error. That would also explain the empty-table case: the conflict is between two of the incoming rows, not with anything already stored in the table.
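A quick way to check is a Code node set to “Run Once for All Items” — a minimal sketch, assuming the incoming items expose type, drop_id and user_id as top-level JSON fields (adjust the names if your items are shaped differently):

// Count incoming items and list any duplicate (type, drop_id, user_id)
// combinations; the field names are taken from the unique constraint above.
const counts = {};
for (const item of $input.all()) {
  const key = `${item.json.type}|${item.json.drop_id}|${item.json.user_id}`;
  counts[key] = (counts[key] || 0) + 1;
}
const duplicates = Object.entries(counts).filter(([, n]) => n > 1);
return [{ json: { totalItems: $input.all().length, duplicates } }];

If duplicates comes back non-empty, that is the row combination the constraint is rejecting.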


You solved my problem. Although I told the AI to filter out duplicate IDs, there was still one duplicate ID among the 10 items. The nasty thing is that it isn't visible at a glance, and the error message put me on the wrong track. That was the problem. Thank you very much! Apparently, AI cannot perform this task reliably.


To remove duplicates, I would always use a Code node or the “Remove Duplicates” node. This saves you LLM cost and ensures a clean dataset before importing. A Code-node version could look like the sketch below.
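A minimal sketch, under the same assumptions as the check above (Code node in “Run Once for All Items” mode, items with top-level type, drop_id and user_id fields):

// Keep the first item for each (type, drop_id, user_id) combination and
// drop the rest, so the downstream insert never sees a duplicate key.
const seen = new Set();
const unique = [];
for (const item of $input.all()) {
  const key = `${item.json.type}|${item.json.drop_id}|${item.json.user_id}`;
  if (!seen.has(key)) {
    seen.add(key);
    unique.push(item);
  }
}
return unique;

Wire this between whatever produces the rows and the Supabase/PostgreSQL node.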


This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.