Help needed: Error with langchain SQL Agent

Describe the problem/error/question

I am trying to use the LangChain SQL agent with my Postgres database, but I keep getting the error "Cannot read properties of undefined (reading 'text')" from the SQL agent, and I'm not sure how to fix it.

What is the error message (if any)?

Problem in node 'Agent'
Cannot read properties of undefined (reading 'text')

TypeError: Cannot read properties of undefined (reading 'text')
    at NoOpOutputParser.parseResult (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected]_@[email protected]_@[email protected]_@huggingface+infer_bjcifbwga3ntgxrzz4r5mlqi6i/node_modules/langchain/dist/schema/output_parser.cjs:47:42)
    at NoOpOutputParser.parseResultWithPrompt (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected]_@[email protected]_@[email protected]_@huggingface+infer_bjcifbwga3ntgxrzz4r5mlqi6i/node_modules/langchain/dist/schema/output_parser.cjs:20:21)
    at LLMChain._getFinalOutput (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected]_@[email protected]_@[email protected]_@huggingface+infer_bjcifbwga3ntgxrzz4r5mlqi6i/node_modules/langchain/dist/chains/llm_chain.cjs:96:55)
    at LLMChain._call (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected]_@[email protected]_@[email protected]_@huggingface+infer_bjcifbwga3ntgxrzz4r5mlqi6i/node_modules/langchain/dist/chains/llm_chain.cjs:126:42)
    at LLMChain.call (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected]_@[email protected]_@[email protected]_@huggingface+infer_bjcifbwga3ntgxrzz4r5mlqi6i/node_modules/langchain/dist/chains/base.cjs:104:28)
    at LLMChain.predict (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected]_@[email protected]_@[email protected]_@huggingface+infer_bjcifbwga3ntgxrzz4r5mlqi6i/node_modules/langchain/dist/chains/llm_chain.cjs:142:24)
    at ZeroShotAgent._plan (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected]_@[email protected]_@[email protected]_@huggingface+infer_bjcifbwga3ntgxrzz4r5mlqi6i/node_modules/langchain/dist/agents/agent.cjs:234:24)
    at AgentExecutor._call (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected]_@[email protected]_@[email protected]_@huggingface+infer_bjcifbwga3ntgxrzz4r5mlqi6i/node_modules/langchain/dist/agents/executor.cjs:144:26)
    at AgentExecutor.call (/usr/local/lib/node_modules/n8n/node_modules/.pnpm/[email protected]_@[email protected]_@[email protected]_@huggingface+infer_bjcifbwga3ntgxrzz4r5mlqi6i/node_modules/langchain/dist/chains/base.cjs:104:28)
    at Object.sqlAgentAgentExecute (/usr/local/lib/node_modules/n8n/packages/@n8n/nodes-langchain/dist/nodes/agents/Agent/agents/SqlAgent/execute.js:61:26)

Please share your workflow

(Note: when setting up the Postgres credentials, I set "host" to host.docker.internal, because I use Docker with Postgres to host my n8n instance.)

Share the output returned by the last node

Same as the error message above.

Information on your n8n setup

  • n8n version: n8n AI beta version 1.11.0
  • Database (default: SQLite): PostgreSQL
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Windows 11

Has anyone here used the SQL agent in n8n before? Thanks in advance.

Hey @TAN_YONG_SHENG,

The node seems to be OK for me. Can you try running it without the nodes before it and see if that changes anything?

Hi @Jon

Thanks for the reply. Yes, I get the same error when using your workflow (just by importing your workflow JSON file).

Could it be a configuration error in my Postgres database setup?

Hey @TAN_YONG_SHENG,

It shouldn't be, as long as that connection works. Can you try re-pulling the image to see if that helps?

I used docker-compose pull to upgrade the Docker image, so the n8n AI beta version is now 1.14.0, but the same error still occurs.

The error message is the same as before:



Btw, here is my docker-compose.yml file:

(I basically took it from https://github.com/n8n-io/n8n/tree/master/docker/compose/withPostgres; the only amendment is that I changed the Docker image from "docker.n8n.io/n8nio/n8n" to "docker.n8n.io/n8nio/n8n:ai-beta".)

name: ${COMPOSE_PROJECT_NAME}
volumes:
  db_storage:
  n8n_storage:

services:
  postgres:
    image: postgres:11
    restart: always
    environment:
      - POSTGRES_USER
      - POSTGRES_PASSWORD
      - POSTGRES_DB
      - POSTGRES_NON_ROOT_USER
      - POSTGRES_NON_ROOT_PASSWORD
    volumes:
      - db_storage:/var/lib/postgresql/data
      - ./init-data.sh:/docker-entrypoint-initdb.d/init-data.sh
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -h localhost -U ${POSTGRES_USER} -d ${POSTGRES_DB}']
      interval: 5s
      timeout: 5s
      retries: 10

  n8n:
    # use n8n ai beta version
    image: docker.n8n.io/n8nio/n8n:ai-beta # docker.n8n.io/n8nio/n8n
    restart: always
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=${POSTGRES_DB}
      - DB_POSTGRESDB_USER=${POSTGRES_NON_ROOT_USER}
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_NON_ROOT_PASSWORD}
    ports:
      - 5678:5678
    links:
      - postgres
    volumes:
      - n8n_storage:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy

init-data.sh

#!/bin/bash
set -e;


if [ -n "${POSTGRES_NON_ROOT_USER:-}" ] && [ -n "${POSTGRES_NON_ROOT_PASSWORD:-}" ]; then
	psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
		CREATE USER ${POSTGRES_NON_ROOT_USER} WITH PASSWORD '${POSTGRES_NON_ROOT_PASSWORD}';
		GRANT ALL PRIVILEGES ON DATABASE ${POSTGRES_DB} TO ${POSTGRES_NON_ROOT_USER};
	EOSQL
else
	echo "SETUP INFO: No Environment variables given!"
fi

.env file

COMPOSE_PROJECT_NAME=n8n

POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=n8n

POSTGRES_NON_ROOT_USER=nonrootuser
POSTGRES_NON_ROOT_PASSWORD=nonrootuser

@oleg any ideas what could be going on here?

@TAN_YONG_SHENG Can you please share logs view of the agent?

What if you try to do a simple query in a regular(non agent) Postgres node, are you able to execute it? Do you see any logs in your Postgress docker container?

  1. Can you please share the logs view of the agent? (only the Agent node & Google PaLM Chat Model)

Raw json of input:

{
  "messages": [
    {
      "lc": 1,
      "type": "constructor",
      "id": [
        "langchain",
        "schema",
        "HumanMessage"
      ],
      "kwargs": {
        "content": "You are an agent designed to interact with an SQL database.\nGiven an input question, create a syntactically correct sqlite query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most 10 results using the LIMIT clause.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for a the few relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return \"I don't know\" as the answer.\n\nquery-sql: Input to this tool is a detailed and correct SQL query, output is a result from the database.\n  If the query is not correct, an error message will be returned.\n  If an error is returned, rewrite the query, check the query, and try again.\ninfo-sql: Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables.\n    Be sure that the tables actually exist by calling list-tables-sql first!\n\n    Example Input: \"table1, table2, table3.\nlist-tables-sql: Input is an empty string, output is a comma-separated list of tables in the database.\nquery-checker: Use this tool to double check if your query is correct before executing it.\n    Always use this tool before executing a query with query-sql!\n\nUse the following format in your response:\n\nQuestion: the input question you must answer\nThought: you should always think about what to 
do\nAction: the action to take, should be one of [query-sql,info-sql,list-tables-sql,query-checker]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: which campaign has the highest conversion?\nThought: I should look at the tables in the database to see what I can query.\n",
        "additional_kwargs": {}
      }
    }
  ],
  "options": {
    "stop": [
      "\nObservation: "
    ],
    "promptIndex": 0
  }
}

Raw json of output

{
  "response": {
    "generations": [],
    "llmOutput": {
      "filters": [
        {
          "reason": "OTHER"
        }
      ]
    }
  }
}
  1. Can you please share the logs view of the agent? (full workflow)

The log is similar to the one above…
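For what it's worth, the raw output above looks like the root cause: `generations` is empty and `llmOutput.filters` reports `reason: "OTHER"`, which suggests the Google PaLM model filtered the response or returned no completion. The agent's output parser then tries to read `.text` from a generation that doesn't exist, which produces exactly the `Cannot read properties of undefined (reading 'text')` error. A rough Python sketch of the mechanism (just an illustration of the failure mode, not n8n's actual code):

```python
# Shape of the model response shown above: no generations, only a filter reason.
response = {"generations": [], "llmOutput": {"filters": [{"reason": "OTHER"}]}}

def parse_naive(response):
    # Loosely mirrors the parser in the stack trace: take the first
    # generation and read its "text" field. An empty list crashes here.
    return response["generations"][0]["text"]

def parse_guarded(response):
    gens = response["generations"]
    if not gens:
        # Surface the model-side filter reason instead of an opaque TypeError.
        raise RuntimeError(f"model returned no generations: {response['llmOutput']}")
    return gens[0]["text"]
```

If this reading is right, the fix is on the model side (the prompt or safety settings), not in the database connection.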

These are the logs from my Postgres Docker container:

2023-10-31 18:50:44 
2023-10-31 18:50:44 PostgreSQL Database directory appears to contain a database; Skipping initialization
2023-10-31 18:50:44 
2023-10-31 22:38:52 
2023-10-31 22:38:52 PostgreSQL Database directory appears to contain a database; Skipping initialization
2023-10-31 22:38:52 
2023-10-31 18:50:46 2023-10-31 10:50:46.198 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2023-10-31 18:50:46 2023-10-31 10:50:46.200 UTC [1] LOG:  listening on IPv6 address "::", port 5432
2023-10-31 18:50:46 2023-10-31 10:50:46.211 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2023-10-31 18:50:46 2023-10-31 10:50:46.421 UTC [26] LOG:  database system was shut down at 2023-10-31 10:50:11 UTC
2023-10-31 18:50:46 2023-10-31 10:50:46.548 UTC [1] LOG:  database system is ready to accept connections
2023-10-31 18:52:28 2023-10-31 10:52:28.535 UTC [132] ERROR:  permission denied to create extension "uuid-ossp"
2023-10-31 18:52:28 2023-10-31 10:52:28.535 UTC [132] HINT:  Must be superuser to create this extension.
2023-10-31 18:52:28 2023-10-31 10:52:28.535 UTC [132] STATEMENT:  CREATE EXTENSION IF NOT EXISTS "uuid-ossp"
2023-10-31 20:16:38 2023-10-31 12:16:38.144 UTC [415] ERROR:  duplicate key value violates unique constraint "pk_workflow_statistics"
2023-10-31 20:16:38 2023-10-31 12:16:38.144 UTC [415] DETAIL:  Key ("workflowId", name)=(BTjdV55kBTPYcz6i, data_loaded) already exists.
2023-10-31 20:16:38 2023-10-31 12:16:38.144 UTC [415] STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
2023-10-31 20:18:47 2023-10-31 12:18:47.237 UTC [601] ERROR:  duplicate key value violates unique constraint "pk_workflow_statistics"
2023-10-31 20:18:47 2023-10-31 12:18:47.237 UTC [601] DETAIL:  Key ("workflowId", name)=(BTjdV55kBTPYcz6i, data_loaded) already exists.
2023-10-31 20:18:47 2023-10-31 12:18:47.237 UTC [601] STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
2023-10-31 22:29:24 2023-10-31 14:29:24.613 UTC [1] LOG:  received fast shutdown request
2023-10-31 22:29:24 2023-10-31 14:29:24.628 UTC [1] LOG:  aborting any active transactions
2023-10-31 22:29:24 2023-10-31 14:29:24.659 UTC [1] LOG:  background worker "logical replication launcher" (PID 32) exited with exit code 1
2023-10-31 22:29:24 2023-10-31 14:29:24.668 UTC [27] LOG:  shutting down
2023-10-31 22:29:24 2023-10-31 14:29:24.823 UTC [1] LOG:  database system is shut down
2023-10-31 22:38:52 2023-10-31 14:38:52.875 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2023-10-31 22:38:52 2023-10-31 14:38:52.877 UTC [1] LOG:  listening on IPv6 address "::", port 5432
2023-10-31 22:38:52 2023-10-31 14:38:52.889 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2023-10-31 22:38:52 2023-10-31 14:38:52.970 UTC [26] LOG:  database system was shut down at 2023-10-31 14:29:24 UTC
2023-10-31 22:38:53 2023-10-31 14:38:53.021 UTC [1] LOG:  database system is ready to accept connections
2023-10-31 22:39:11 2023-10-31 14:39:11.844 UTC [58] ERROR:  permission denied to create extension "uuid-ossp"
2023-10-31 22:39:11 2023-10-31 14:39:11.844 UTC [58] HINT:  Must be superuser to create this extension.
2023-10-31 22:39:11 2023-10-31 14:39:11.844 UTC [58] STATEMENT:  CREATE EXTENSION IF NOT EXISTS "uuid-ossp"
2023-10-31 22:40:31 2023-10-31 14:40:31.989 UTC [170] ERROR:  duplicate key value violates unique constraint "pk_workflow_statistics"
2023-10-31 22:40:31 2023-10-31 14:40:31.989 UTC [170] DETAIL:  Key ("workflowId", name)=(BTjdV55kBTPYcz6i, data_loaded) already exists.
2023-10-31 22:40:31 2023-10-31 14:40:31.989 UTC [170] STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
2023-11-01 00:20:36 2023-10-31 16:20:36.212 UTC [7525] ERROR:  duplicate key value violates unique constraint "pk_workflow_statistics"
2023-11-01 00:20:36 2023-10-31 16:20:36.212 UTC [7525] DETAIL:  Key ("workflowId", name)=(4qkQ2Apyr1oBg5nM, data_loaded) already exists.
2023-11-01 00:20:36 2023-10-31 16:20:36.212 UTC [7525] STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
2023-11-01 00:20:38 2023-10-31 16:20:38.003 UTC [7534] ERROR:  duplicate key value violates unique constraint "pk_workflow_statistics"
2023-11-01 00:20:38 2023-10-31 16:20:38.003 UTC [7534] DETAIL:  Key ("workflowId", name)=(4qkQ2Apyr1oBg5nM, data_loaded) already exists.
2023-11-01 00:20:38 2023-10-31 16:20:38.003 UTC [7534] STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
2023-11-01 00:20:39 2023-10-31 16:20:39.205 UTC [7525] ERROR:  duplicate key value violates unique constraint "pk_workflow_statistics"
2023-11-01 00:20:39 2023-10-31 16:20:39.205 UTC [7525] DETAIL:  Key ("workflowId", name)=(4qkQ2Apyr1oBg5nM, data_loaded) already exists.
2023-11-01 00:20:39 2023-10-31 16:20:39.205 UTC [7525] STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
2023-11-01 00:20:40 2023-10-31 16:20:40.312 UTC [7534] ERROR:  duplicate key value violates unique constraint "pk_workflow_statistics"
2023-11-01 00:20:40 2023-10-31 16:20:40.312 UTC [7534] DETAIL:  Key ("workflowId", name)=(4qkQ2Apyr1oBg5nM, data_loaded) already exists.
2023-11-01 00:20:40 2023-10-31 16:20:40.312 UTC [7534] STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
2023-11-01 00:20:42 2023-10-31 16:20:42.125 UTC [7525] ERROR:  duplicate key value violates unique constraint "pk_workflow_statistics"
2023-11-01 00:20:42 2023-10-31 16:20:42.125 UTC [7525] DETAIL:  Key ("workflowId", name)=(4qkQ2Apyr1oBg5nM, data_loaded) already exists.
2023-11-01 00:20:42 2023-10-31 16:20:42.125 UTC [7525] STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
2023-11-01 00:20:46 2023-10-31 16:20:46.037 UTC [7534] ERROR:  duplicate key value violates unique constraint "pk_workflow_statistics"
2023-11-01 00:20:46 2023-10-31 16:20:46.037 UTC [7534] DETAIL:  Key ("workflowId", name)=(4qkQ2Apyr1oBg5nM, data_loaded) already exists.
2023-11-01 00:20:46 2023-10-31 16:20:46.037 UTC [7534] STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
2023-11-01 00:20:46 2023-10-31 16:20:46.983 UTC [7525] ERROR:  duplicate key value violates unique constraint "pk_workflow_statistics"
2023-11-01 00:20:46 2023-10-31 16:20:46.983 UTC [7525] DETAIL:  Key ("workflowId", name)=(4qkQ2Apyr1oBg5nM, data_loaded) already exists.
2023-11-01 00:20:46 2023-10-31 16:20:46.983 UTC [7525] STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
2023-11-01 00:20:47 2023-10-31 16:20:47.946 UTC [7534] ERROR:  duplicate key value violates unique constraint "pk_workflow_statistics"
2023-11-01 00:20:47 2023-10-31 16:20:47.946 UTC [7534] DETAIL:  Key ("workflowId", name)=(4qkQ2Apyr1oBg5nM, data_loaded) already exists.
2023-11-01 00:20:47 2023-10-31 16:20:47.946 UTC [7534] STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
2023-11-01 00:20:48 2023-10-31 16:20:48.807 UTC [7525] ERROR:  duplicate key value violates unique constraint "pk_workflow_statistics"
2023-11-01 00:20:48 2023-10-31 16:20:48.807 UTC [7525] DETAIL:  Key ("workflowId", name)=(4qkQ2Apyr1oBg5nM, data_loaded) already exists.
2023-11-01 00:20:48 2023-10-31 16:20:48.807 UTC [7525] STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
2023-11-01 00:22:32 2023-10-31 16:22:32.552 UTC [1] LOG:  received fast shutdown request
2023-11-01 00:22:32 2023-10-31 16:22:32.580 UTC [1] LOG:  aborting any active transactions
2023-11-01 00:22:32 2023-10-31 16:22:32.743 UTC [1] LOG:  background worker "logical replication launcher" (PID 32) exited with exit code 1
2023-11-01 00:22:32 2023-10-31 16:22:32.841 UTC [27] LOG:  shutting down
2023-11-01 00:22:33 2023-10-31 16:22:33.483 UTC [1] LOG:  database system is shut down
2023-11-01 00:27:42 2023-10-31 16:27:42.109 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
2023-11-01 00:27:42 2023-10-31 16:27:42.112 UTC [1] LOG:  listening on IPv6 address "::", port 5432
2023-11-01 00:27:42 2023-10-31 16:27:42.133 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2023-11-01 00:27:42 2023-10-31 16:27:42.398 UTC [26] LOG:  database system was shut down at 2023-10-31 16:22:33 UTC
2023-11-01 00:27:42 2023-10-31 16:27:42.508 UTC [1] LOG:  database system is ready to accept connections
2023-11-01 00:28:15 2023-10-31 16:28:15.080 UTC [74] ERROR:  permission denied to create extension "uuid-ossp"
2023-11-01 00:28:15 2023-10-31 16:28:15.080 UTC [74] HINT:  Must be superuser to create this extension.
2023-11-01 00:28:15 2023-10-31 16:28:15.080 UTC [74] STATEMENT:  CREATE EXTENSION IF NOT EXISTS "uuid-ossp"
2023-11-01 00:28:59 2023-10-31 16:28:59.062 UTC [143] ERROR:  duplicate key value violates unique constraint "pk_workflow_statistics"
2023-11-01 00:28:59 2023-10-31 16:28:59.062 UTC [143] DETAIL:  Key ("workflowId", name)=(BTjdV55kBTPYcz6i, data_loaded) already exists.
2023-11-01 00:28:59 2023-10-31 16:28:59.062 UTC [143] STATEMENT:  INSERT INTO "public"."workflow_statistics"("count", "latestEvent", "name", "workflowId") VALUES ($1, $2, $3, $4)
2023-11-01 00:27:41 
2023-11-01 00:27:41 PostgreSQL Database directory appears to contain a database; Skipping initialization
2023-11-01 00:27:41

Okay, I tried playing around with the regular Postgres node.

Workflow 1 ("insert") did work: it successfully created a table called n8n_analytics and inserted data into it based on the assigned columns.

However, workflow 2 ("upsert") did not work as expected.

  • Full workflows with Postgres as below:
  • Workflow 2 error as below:

Let me try opening another n8n cloud trial and see if my workflow works in the cloud.

Hey @TAN_YONG_SHENG,

Cloud and self-hosted are mostly the same; there is no difference in the n8n version. The upsert error looks like it could be resolved by changing Map Automatically to Map Manually, but if your table has an auto-generated field that will also cause an error, so you would need to use the Execute Query option instead.

Hi @Jon, thanks a lot for the explanation

May I ask what the auto-generated field is? I would assume it is an auto-increment field (e.g., an id field that automatically increases by 1 every time a new record is inserted).

Here is the structure of my table columns:

But after changing from "Map automatically" to "Map manually", it still can't detect the columns inside the Postgres table.

Hey @TAN_YONG_SHENG,

It would be any field that Postgres sets the value of without you needing to input it. Can you export the schema for that table so I can take a look? I suspect it is failing because we might expect some kind of primary key, so that could be a bug in the Postgres node.

I would recommend using the Execute Query option, which should work and unblock that part of your process.

I see. Thanks for the follow-up.

Here is the schema of the table:

-- Table: public.n8n_analytics

-- DROP TABLE IF EXISTS public.n8n_analytics;

CREATE TABLE IF NOT EXISTS public.n8n_analytics
(
    "Platform" character varying(40) COLLATE pg_catalog."default",
    "Campaign" character varying(400) COLLATE pg_catalog."default",
    "Click" integer,
    "Conversion" integer,
    "Feedback" text COLLATE pg_catalog."default"
)

TABLESPACE pg_default;

ALTER TABLE IF EXISTS public.n8n_analytics
    OWNER to postgres;
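Note that the schema above has no primary key or unique constraint, which fits the upsert failure: an upsert needs a unique index to decide whether a row already exists. A small sketch of the idea, using Python's sqlite3 as a stand-in for Postgres (both reject `ON CONFLICT`-style upserts when no unique constraint matches the conflict target):

```python
import sqlite3

# sqlite3 standing in for Postgres; table mirrors the schema above (no PK).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE n8n_analytics (Platform TEXT, Campaign TEXT, Click INTEGER)")

upsert = (
    "INSERT INTO n8n_analytics (Platform, Campaign, Click) VALUES (?, ?, ?) "
    "ON CONFLICT (Campaign) DO UPDATE SET Click = excluded.Click"
)

# Without any unique constraint, the upsert is rejected outright.
try:
    con.execute(upsert, ("google", "summer_sale", 10))
    failed = False
except sqlite3.OperationalError:
    failed = True

# After adding a unique index, the same statement inserts, then updates.
con.execute("CREATE UNIQUE INDEX idx_campaign ON n8n_analytics (Campaign)")
con.execute(upsert, ("google", "summer_sale", 10))
con.execute(upsert, ("google", "summer_sale", 20))
row = con.execute(
    "SELECT Click FROM n8n_analytics WHERE Campaign = 'summer_sale'"
).fetchone()
```

In Postgres, the Execute Query route Jon suggests could use the same pattern: add a unique index on `"Campaign"` (or whichever column identifies a row), then `INSERT ... ON CONFLICT ("Campaign") DO UPDATE ...`.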

It seems like I can use this SQL Agent with Postgres now…

Currently I am trying to make it do similarity searches for blog articles in my database when people are interested in certain topics.

But it seems to hallucinate and output article lists that are not in my Postgres database… Maybe I need more time to investigate; I'm just trying my luck by posting it here first.

I'd be glad to hear if there is any better way to do this similarity search for article suggestions via n8n.
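One common pattern for keeping suggestions grounded (not from this thread, just a possible approach) is to do the similarity ranking outside the LLM: embed each article once, embed the user's topic, rank by cosine similarity, and hand only the top matches to the model, so it can't invent titles. A toy sketch with made-up embedding vectors (a real setup would get the vectors from an embedding model and could store them in Postgres, e.g. with the pgvector extension):

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product of the vectors over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy embeddings standing in for real ones from an embedding model.
articles = {
    "Intro to SQL agents": [0.9, 0.1, 0.0],
    "Docker networking basics": [0.1, 0.9, 0.1],
    "Prompt engineering tips": [0.2, 0.1, 0.9],
}
query = [0.8, 0.2, 0.1]  # embedding of the user's topic of interest

# Rank articles by similarity to the query; pass the top few to the LLM.
ranked = sorted(articles, key=lambda t: cosine(articles[t], query), reverse=True)
```

Because the candidate list comes from the database rather than the model, the agent can only recommend articles that actually exist.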