I’m currently setting up n8n on my local machine (MacBook Pro), and so far everything is working great. First of all, big thanks to everyone who contributes to this awesome tool!
Now I’m trying to take things a step further: Is it possible to set up the Pinecone Vector Store locally on my MacBook Pro and connect it to n8n (also running locally)?
I’m still pretty new to all of this, so if anyone could share a step-by-step (idiot-proof) guide on how to make this work, I’d really appreciate it!
Thanks so much in advance
Looking forward to learning more and hopefully contributing back in the future!
The Qdrant alternative sounds really interesting, especially since it doesn’t have the same memory limitations as Pinecone Local (I didn’t know that).
Would you happen to know if there’s a detailed step-by-step guide on how to install and run Qdrant locally (on localhost) via Docker?
Sorry if it’s a basic question — I’m still pretty new to all this tech stuff
Really appreciate any pointers or links you could share!
If it’s just for basic testing, Qdrant (and I think Pinecone as well) has basic cloud instances you can use.
Register and in a few minutes you have an instance. The configuration process inside n8n is the same for local/remote, so you can easily switch whenever you decide.
One other point: the dashboard/web interface is exactly the same for cloud and local, so I guess that’s nice for a beginner.
So the general idea is that you connect the n8n that runs directly on your Mac (via npm) to the Qdrant that runs in Docker.
Qdrant has port 6333 exposed on localhost.
You should be able to open http://localhost:6333/dashboard to see if it’s running.
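To get that far, the Docker part can be as simple as this (a minimal sketch, assuming the official `qdrant/qdrant` image and the default port; the `qdrant_storage` volume path is just an example):

```shell
# Pull and run Qdrant, exposing the REST port (6333) on localhost.
# The -v line persists your collections across container restarts.
docker run -d --name qdrant \
  -p 6333:6333 \
  -v "$(pwd)/qdrant_storage:/qdrant/storage" \
  qdrant/qdrant

# Quick check that the REST API is up (lists collections, empty at first):
curl http://localhost:6333/collections
```

After that, the dashboard URL above should load in your browser.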
In the dashboard you can also create a collection. Note that a plain local Docker instance has no API key by default; if you want one, you can set it via the QDRANT__SERVICE__API_KEY environment variable.
Once that is done, you go into your n8n, add a Qdrant node, and create credentials (it will need the API key, if you set one, and the URL, which is the localhost address from above without the /dashboard).
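If you prefer, the collection can also be created over Qdrant’s REST API instead of the dashboard (a sketch; the collection name `my_docs` and the vector size 1536, which matches OpenAI’s `text-embedding-ada-002`, are just example values — use the size of whatever embedding model you pick):

```shell
# Create a collection with cosine distance; adjust "size" to your
# embedding model's dimensionality.
curl -X PUT "http://localhost:6333/collections/my_docs" \
  -H "Content-Type: application/json" \
  -d '{"vectors": {"size": 1536, "distance": "Cosine"}}'
```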
There is no direct Python code needed to accomplish any of this.
Most of this is from memory, though, so check whether it fits your setup.
When I execute the chatbot, I only get generic OpenAI answers that aren’t linked to the PDF I uploaded to the Qdrant vector store, even though I configured it to “retrieve”… Any suggestions why it might not work? Is it due to the local installation? A wrong vector configuration (Dot, Cosine, …)?
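On the distance-metric part of the question: the metric on the collection should match what your embedding model expects, and Dot vs. Cosine really can rank results differently when vectors aren’t normalized. A tiny plain-Python illustration (hypothetical 2-D vectors, not real embeddings):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    # Cosine similarity = dot product of the normalized vectors.
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

query = [1.0, 0.0]
doc_a = [10.0, 10.0]  # long vector, 45 degrees away from the query
doc_b = [0.9, 0.1]    # short vector, almost parallel to the query

# Dot product favors the longer vector, cosine favors the aligned one:
print(dot(query, doc_a), dot(query, doc_b))        # 10.0 vs 0.9
print(cosine(query, doc_a), cosine(query, doc_b))  # ~0.707 vs ~0.994
```

That said, if retrieval returns nothing at all (rather than the wrong documents), the metric is usually not the culprit; it is worth checking first that the documents were actually inserted into the collection.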