Help with vector db optimization

I’m trying to load data from a bunch of SQL tables into a vector database so I can power a chatbot with it. The issue I’m running into is that my current workflow fetches data from each table one by one and merges it all into one big JSON. I’d rather keep each table’s data separate and add some metadata to each one.
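To illustrate the "separate per table, plus metadata" shape I'm after, here's a rough sketch of what I imagine a Code node producing. The table names, fields, and metadata keys are made up for illustration; the point is one document per row, tagged with its source table, rather than one merged JSON blob.

```javascript
// Hypothetical example data standing in for what the SQL nodes return.
// In a real workflow this would come from the upstream table queries.
const tables = {
  customers: [{ id: 1, name: "Ada Lovelace" }],
  orders: [{ id: 10, customer_id: 1, total: 42.5 }],
};

// Build one document per row, keeping each table's data separate
// and attaching metadata that records where the row came from.
const documents = [];
for (const [tableName, rows] of Object.entries(tables)) {
  for (const row of rows) {
    documents.push({
      pageContent: JSON.stringify(row),
      metadata: { source_table: tableName, row_id: row.id },
    });
  }
}
```

With this shape, the chatbot side can later filter retrieval by `metadata.source_table` instead of digging through one giant merged object.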
Also, when I try to load a fairly large dataset (38k records) into the vector store, the load is very slow and usually fails with a “Maximum call stack size exceeded” error. Any ideas on how to make this faster and more reliable?
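For the 38k-record failure, my working theory is that everything is being handed to the insert (or to a spread/`push(...items)` call) in one go, which is a common cause of “Maximum call stack size exceeded” on large arrays. A minimal batching sketch of what I'd like to do instead; `insertBatch` is a hypothetical stand-in for whatever the vector store's insert call actually is:

```javascript
// Hypothetical sketch: insert records in bounded batches rather than
// one 38k-record call. `insertBatch` is a placeholder for the real
// vector-store insert function.
async function insertInBatches(records, insertBatch, batchSize = 200) {
  for (let i = 0; i < records.length; i += batchSize) {
    const batch = records.slice(i, i + batchSize);
    await insertBatch(batch); // each call stays small and bounded
  }
}
```

Batching also makes failures cheaper to retry, since only the failed batch needs to be re-sent instead of the whole dataset.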

Describe the problem/error/question

What is the error message (if any)?

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

Information on your n8n setup

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:
