Using Postgres parameterized queries

I’m new to n8n (and loving it!).

This is a seemingly simple workflow:

Cron => HttpRequest (csv) => PostgresDB

I got it working with the basic Postgres node's insert mode, but I want to use a parameterized query to allow more complex Postgres operations, primarily (roughly along the lines of the sketch after this list):

  • geometry “Point” generation
  • Ignoring duplicates
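
To make it concrete, this is roughly the shape of statement I'd like the node to run (just a sketch trimmed to a few columns; it assumes a PostGIS geography column called geog and a unique constraint on id so ON CONFLICT can be used):

INSERT INTO locationstest (id, event, lat, lng, geog)
VALUES ($1, $2, $3, $4,
        ST_SetSRID(ST_MakePoint($4::double precision, $3::double precision), 4326))
ON CONFLICT (id) DO NOTHING;

(ST_MakePoint takes longitude first, then latitude, which is why $4 comes before $3 in the point expression.)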

When I use the "$1" parameters I get:

ERROR: null value in column "id" violates not-null constraint

suggesting it's not getting a value (despite using an explicit "Set" node).

I’m missing something because it appears to have been added in this PR: Query parameters for Postgres (Parametrized Queries) by krynble · Pull Request #1577 · n8n-io/n8n · GitHub

Could it be related to the "useQueryParams" parameter not being set correctly?

Keen to hear any thoughts/help you’ve got!

P.S. how do you do textboxes in this rich text editor?

Let me know if you need more info or if I've been unclear.

Edit: also adding an "Upsert" Postgres command to the list.

This workflow is…an attempt:

Workflow

{
"name": "Update database",
"nodes": [
{
"parameters": {},
"name": "Start",
"type": "n8n-nodes-base.start",
"typeVersion": 1,
"position": [
310,
220
]
},
{
"parameters": {
"operation": "executeQuery",
"query": "=INSERT into locationstest (id, event, location, city, startdate, enddate, lat, lng, added, updated) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)",
"additionalFields": {
"mode": "independently",
"queryParams": "={{$json[\"id\"]}},{{$json[\"event\"]}},{{$json[\"location\"]}},{{$json[\"city\"]}},{{$json[\"startDate\"]}},{{$json[\"endDate\"]}},{{$json[\"lat\"]}},{{$json[\"lng\"]}},{{$json[\"added\"]}},{{$json[\"updated\"]}}"
}
},
"name": "Postgres1",
"type": "n8n-nodes-base.postgres",
"typeVersion": 1,
"position": [
1220,
60
],
"credentials": {
"postgres": "postgres-default-heroku"
}
},
{
"parameters": {
"url": "https://raw.githubusercontent.com/minhealthnz/nz-covid-data/main/locations-of-interest/august-2021/locations-of-interest.csv",
"allowUnauthorizedCerts": true,
"responseFormat": "string",
"jsonParameters": true,
"options": {}
},
"name": "HTTP Request",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 1,
"position": [
510,
60
]
},
{
"parameters": {
"triggerTimes": {
"item": [
{
"mode": "everyHour"
}
]
}
},
"name": "Cron",
"type": "n8n-nodes-base.cron",
"typeVersion": 1,
"position": [
310,
-110
]
},
{
"parameters": {
"functionCode": "console.log('start')\nfunction CSVToArray( strData, strDelimiter ){\n // Check to see if the delimiter is defined. If not,\n // then default to comma.\n strDelimiter = (strDelimiter || \",\");\n\n // Create a regular expression to parse the CSV values.\n var objPattern = new RegExp(\n (\n // Delimiters.\n \"(\\\\\" + strDelimiter + \"|\\\\r?\\\\n|\\\\r|^)\" +\n\n // Quoted fields.\n \"(?:\\\"([^\\\"]*(?:\\\"\\\"[^\\\"]*)*)\\\"|\" +\n\n // Standard fields.\n \"([^\\\"\\\\\" + strDelimiter + \"\\\\r\\\\n]*))\"\n ),\n \"gi\"\n );\n\n\n // Create an array to hold our data. Give the array\n // a default empty first row.\n var arrData = [[]];\n\n // Create an array to hold our individual pattern\n // matching groups.\n var arrMatches = null;\n\n\n // Keep looping over the regular expression matches\n // until we can no longer find a match.\n while (arrMatches = objPattern.exec( strData )){\n\n // Get the delimiter that was found.\n var strMatchedDelimiter = arrMatches[ 1 ];\n\n // Check to see if the given delimiter has a length\n // (is not the start of string) and if it matches\n // field delimiter. If it does not, then we know\n // that this delimiter is a row delimiter.\n if (\n strMatchedDelimiter.length &&\n strMatchedDelimiter !== strDelimiter\n ){\n\n // Since we have reached a new row of data,\n // add an empty row to our data array.\n arrData.push( [] );\n\n }\n\n var strMatchedValue;\n\n // Now that we have our delimiter out of the way,\n // let's check to see which kind of value we\n // captured (quoted or unquoted).\n if (arrMatches[ 2 ]){\n\n // We found a quoted value. When we capture\n // this value, unescape any double quotes.\n strMatchedValue = arrMatches[ 2 ].replace(\n new RegExp( \"\\\"\\\"\", \"g\" ),\n \"\\\"\"\n );\n\n } else {\n\n // We found a non-quoted value.\n strMatchedValue = arrMatches[ 3 ];\n\n }\n\n\n // Now that we have our value string, let's add\n // it to the data array.\n arrData[ arrData.length - 1 ].push( strMatchedValue );\n }\n\n // Return the parsed data.\n return( arrData );\n};\n\nconsole.log('sut start');\nvar sut = items[0].json;\nif(sut) {\n console.log('length> '+sut.length+' <length');\n console.log('type> '+(typeof sut)+' <type');\n console.log('keys> '+Object.keys(sut)+' <keys');\n// console.log('keys> '+Object.keys(sut.type)+' <keys');\n} else {\n throw 'no sut or sut is empty!'\n}\nconsole.log('sut end');\n\n// Code here will run only once, no matter how many input items there are.\n// More info and help: https://docs.n8n.io/nodes/n8n-nodes-base.function\n\n// Loop over the parsed CSV rows and output one item per row\nvar res = [];\nvar matrix = CSVToArray(items[0].json.data);\n\nconsole.log('Matrix length: '+matrix.length);\nmatrix.slice(1).forEach((row) => {\n\nconsole.log('Were looping');\n res.push({json: {\n id: row[0],\n Event: row[1] || '',\n Location: row[2] || '',\n City: row[3] || '',\n Start: row[4] || '',\n End: row[5] || '',\n Advice: row[6] || '',\n LAT: row[7] || '',\n LNG: row[8] || '',\n Added: row[9] || '',\n Updated: row[10] || ''\n }});\n});\n\nconsole.log(JSON.stringify(res));\n\nconsole.log('end');\nreturn res;\n"
},
"name": "Function",
"type": "n8n-nodes-base.function",
"typeVersion": 1,
"position": [
700,
60
]
},
{
"parameters": {
"keepOnlySet": true,
"values": {
"string": [
{
"name": "id",
"value": "={{$json[\"id\"]}}"
},
{
"name": "event",
"value": "={{$json[\"Event\"]}}"
},
{
"name": "location",
"value": "={{$json[\"Location\"]}}"
},
{
"name": "city",
"value": "={{$json[\"City\"]}}"
},
{
"name": "startDate",
"value": "={{$json[\"Start\"]}}"
},
{
"name": "endDate",
"value": "={{$json[\"End\"]}}"
},
{
"name": "advice",
"value": "={{$json[\"Advice\"]}}"
},
{
"name": "lat",
"value": "={{$json[\"LAT\"]}}"
},
{
"name": "lng",
"value": "={{$json[\"LNG\"]}}"
},
{
"name": "added",
"value": "={{$json[\"Added\"]}}"
},
{
"name": "updated",
"value": "={{$json[\"Updated\"]}}"
}
],
"boolean": []
},
"options": {}
},
"name": "Set",
"type": "n8n-nodes-base.set",
"typeVersion": 1,
"position": [
1040,
60
]
},
{
"parameters": {
"batchSize": 1,
"options": {}
},
"name": "SplitInBatches",
"type": "n8n-nodes-base.splitInBatches",
"typeVersion": 1,
"position": [
860,
60
]
}
],
"connections": {
"Start": {
"main": [
[
{
"node": "HTTP Request",
"type": "main",
"index": 0
}
]
]
},
"Cron": {
"main": [
[
{
"node": "HTTP Request",
"type": "main",
"index": 0
}
]
]
},
"HTTP Request": {
"main": [
[
{
"node": "Function",
"type": "main",
"index": 0
}
]
]
},
"Function": {
"main": [
[
{
"node": "SplitInBatches",
"type": "main",
"index": 0
}
]
]
},
"Postgres1": {
"main": [
[]
]
},
"Set": {
"main": [
[
{
"node": "Postgres1",
"type": "main",
"index": 0
}
]
]
},
"SplitInBatches": {
"main": [
[
{
"node": "Set",
"type": "main",
"index": 0
}
]
]
}
},
"active": true,
"settings": {},
"id": "1"
}

I ended up importing the raw data into varchar columns, then running a second Postgres Execute Query node to generate the points and remove the duplicates, just to get it working in the short term.

Postgres Query:

delete from locations l
using
    (SELECT id, recordadded
     FROM
        (SELECT
            id
            , recordadded
            , ROW_NUMBER() OVER(
                PARTITION BY id
                ORDER BY recordadded
              ) AS row_num
         FROM locations) t
     WHERE t.row_num > 1) dups
where dups.id = l.id and dups.recordadded = l.recordadded;

update locations set latfloat = cast(lat as double precision), lngfloat = cast(lng as double precision) where lat != 'NA' and lng != 'NA' AND lat != '' AND lng != '';

update locations set geog = ST_SetSRID(ST_MakePoint(lngfloat, latfloat), 4326);

This is just shoved into other columns on the table and has a bunch of data integrity issues.

I want to do a fancy INSERT using parameterized query string parameters.
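
The end goal is something along these lines (again only a sketch; it assumes a unique constraint on id and a PostGIS geog column, and the lat/lng casts only make sense once the 'NA' and empty values are handled):

INSERT INTO locations (id, event, location, city, startdate, enddate, lat, lng, added, updated, geog)
VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10,
        ST_SetSRID(ST_MakePoint($8::double precision, $7::double precision), 4326))
ON CONFLICT (id) DO UPDATE
SET updated = EXCLUDED.updated,
    geog = EXCLUDED.geog;

That would combine the point generation and the duplicate handling into one parameterized statement instead of the clean-up queries above.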

Hey @KiwiKid

Happy to know you’re enjoying n8n!

I am not a PostgreSQL expert but I am the one who made those changes you mentioned.

Tomorrow I will be having a look at this and see if there is any issue. I will keep you posted.


Hi @KiwiKid

I created a small working example that you can see below. I hope this helps clarify how you can use the parametrized queries. One important thing to note is that if you do not add RETURNING * to the end of your query the execution will stop, because without data n8n won’t proceed. An alternative is to set Always output data on the node settings.
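
For illustration, a minimal query of that shape could look like the following (the table and columns here are made up for the sketch, not taken from the example itself):

INSERT INTO test_table (name, value)
VALUES ($1, $2)
RETURNING *;

with the Query Parameters field set to an expression such as {{$json["name"]}},{{$json["value"]}} so that $1 and $2 are filled from the incoming item.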


Thanks @krynble!
Great idea! Makes these queries a lot easier.

I think it's because I'm trying to process the output of the Function node directly.
Do you think the parametrized queries will only process the output of "Set" nodes?
I can't see any example without one.

I could just do a SplitInBatches + Set before the insert?

Just trying to work out the best way of doing this in general :slight_smile:

Anytime =)

It should work fine with the Function node as well, as long as it's outputting data correctly.

Here is an example using a function node, outputting data correctly:
