Hi n8n Community,
I’m currently setting up n8n on my local machine (MacBook Pro), and so far everything is working great. First of all, big thanks to everyone who contributes to this awesome tool!
Now I’m trying to take things a step further:
Is it possible to set up the Pinecone Vector Store locally on my MacBook Pro and connect it to n8n (also running locally)?
I’m still pretty new to all of this, so if anyone could share a step-by-step (idiot-proof) guide on how to make this work, I’d really appreciate it!
Thanks so much in advance 
Looking forward to learning more and hopefully contributing back in the future!
Cheers
Information on your n8n setup
- n8n version:
- Database (default: SQLite):
- n8n EXECUTIONS_PROCESS setting (default: own, main):
- Running n8n via (npm, desktop app (localhost)):
- Operating system: macOS (MacBook Pro, Intel)
Hi, there is a local version (via Docker) of Pinecone, as described here: Local development with Pinecone Local - Pinecone Docs
It seems to have a serious limitation, though: it is in-memory only, so nothing is persisted.
The other alternative would be the Qdrant vector store, which is also provided via a Docker image and which has no such limitation, as far as I know.
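As a rough sketch (the Docker command and endpoint follow Qdrant’s public docs at the time of writing, so double-check them): after starting the container with `docker run -p 6333:6333 qdrant/qdrant`, you can verify it is reachable from plain stdlib Python:

```python
import urllib.request
import urllib.error

def qdrant_is_up(base_url: str = "http://localhost:6333", timeout: float = 2.0) -> bool:
    """Return True if something answers with HTTP 200 on Qdrant's root endpoint."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # No listener on that port (or a network error) -> not running.
        return False

if __name__ == "__main__":
    print("Qdrant reachable:", qdrant_is_up())
```

If this prints `False`, check that the container is running and that port 6333 is actually published to the host.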
Reg
J.
Hi J.,
Thanks a lot for your helpful reply!
The Qdrant alternative sounds really interesting, especially since it doesn’t have the same in-memory limitation as Pinecone Local (I didn’t know that).
Would you happen to know if there’s a detailed step-by-step guide on how to install and run Qdrant locally (on localhost) via Docker?
Sorry if it’s a basic question — I’m still pretty new to all this tech stuff 
Really appreciate any pointers or links you could share!
Hi, no worries.
Here you can find the starter info.
It’s easy to set up.
If it’s just for basic testing, Qdrant (and I think Pinecone as well) has basic cloud instances you can use.
Register, and in a few minutes you have an instance. The configuration process inside n8n is the same for local and remote, so you can easily switch whenever you decide.
One other point: the dashboard/web interface is exactly the same for the online and local versions, so I guess that’s nice for a beginner.
Reg
J
Hi J.,
I was able to “Download and run” with the Terminal.
How do I “Initialize the client” with Python? Where do I put the code?
Hi,
Sorry, I don’t really understand.
So the general idea is that you make a connection between the n8n instance that runs directly on your Mac via npm and the Qdrant instance that runs in Docker.
Qdrant has port 6333 exposed on localhost.
You should be able to open http://localhost:6333/dashboard to see if it’s running.
In there you can also create a collection and create an API key.
Once that is done, you go into your n8n, add a Qdrant node, and create credentials (it will need the API key and the URL, which is the localhost address from above without the /dashboard part).
There is no direct Python code needed to accomplish any of this.
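That said, if you would rather script the collection creation than click through the dashboard, here is a minimal sketch against Qdrant’s HTTP API using only stdlib Python (the collection name and vector size are example values, and the endpoint shape is taken from Qdrant’s public REST docs, so verify it against your version):

```python
import json
import urllib.request
from typing import Optional

def collection_payload(size: int, distance: str = "Cosine") -> dict:
    """Build the request body for Qdrant's create-collection endpoint.

    `size` must match the dimension of your embedding model, and
    `distance` is one of Qdrant's metrics ("Cosine", "Dot", "Euclid").
    """
    return {"vectors": {"size": size, "distance": distance}}

def create_collection(name: str, size: int,
                      base_url: str = "http://localhost:6333",
                      api_key: Optional[str] = None) -> int:
    """PUT /collections/{name}; returns the HTTP status code."""
    req = urllib.request.Request(
        url=f"{base_url}/collections/{name}",
        data=json.dumps(collection_payload(size)).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    if api_key:
        # Qdrant reads the key from this header when auth is enabled.
        req.add_header("api-key", api_key)
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    # Example (assumes Qdrant is running locally): a 1536-dim collection.
    try:
        print(create_collection("my_docs", size=1536))
    except OSError as exc:
        print("Qdrant not reachable:", exc)
```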
Most of this is from memory, so you’ll need to see if it fits your puzzle.
Reg
J.
Hi J.,
Thank you for your patience, I figured it out now!
- n8n locally up and running
- qdrant locally up and running
I needed to set the vector size in the Vectors Configuration to 3072 because of the OpenAI model.
Using my workflow, I uploaded a simple instruction manual as a PDF.
When I execute the chatbot, I only get generic OpenAI answers not linked to the PDF uploaded to the Qdrant vector store, even though I configured it to “retrieve”… Any suggestions why it might not work? Is it due to the local installation? A wrong vector configuration (Dot, Cosine, …)?
Step 2, detailed screenshots:
Hi,
text-embedding-3-small produces 1536-dimensional vectors.
text-embedding-3-large needs 3072, as you configured. I think you are mixing and matching sizes.
Also, I’m not sure if you can inject a PDF directly or whether you need to extract the text first.
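For reference, the default output dimensions of OpenAI’s embedding models (per OpenAI’s documentation; the lookup below is just a convenience sketch) are:

```python
# Default output dimensions of OpenAI embedding models.
OPENAI_EMBEDDING_DIMS = {
    "text-embedding-3-small": 1536,
    "text-embedding-3-large": 3072,
    "text-embedding-ada-002": 1536,
}

def required_vector_size(model: str) -> int:
    """Vector size a Qdrant collection needs for a given embedding model."""
    return OPENAI_EMBEDDING_DIMS[model]

# A 3072-dim collection only matches text-embedding-3-large; using
# text-embedding-3-small against it would be a dimension mismatch.
assert required_vector_size("text-embedding-3-large") == 3072
```

So whichever model the embeddings node uses, the collection’s vector size has to match it exactly.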
Regards
J