Retrieval with Pinecone not getting information in prompt

Hey!

I have a sub-workflow with a Q&A chain that retrieves data from a Pinecone Vector Store. When the workflow is triggered, the retrieval itself works (i.e. the result is fetched from the Pinecone store), but it is not passed on to the subsequent model prompt. As a result, the output is useless because the model never sees the retrieved data. A small sketch of what I expect to happen is below.
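
To make the expected behaviour concrete: before calling the model, the chain should effectively stuff the Pinecone matches into the prompt, roughly like this (a minimal sketch, not what the node actually does internally; `metadata.text` is just an assumed field name):

```typescript
// Hypothetical shape of a Pinecone match; the real node's internal types differ.
interface Match {
  metadata?: { text?: string };
}

// Build the prompt the model should receive: retrieved chunks first, then the question.
function buildPrompt(question: string, matches: Match[]): string {
  const context = matches.map((m) => m.metadata?.text ?? "").join("\n---\n");
  return `Answer using only the context below.\n\nContext:\n${context}\n\nQuestion: ${question}`;
}
```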


Any idea what I’m missing, or a workaround for this?

Thanks

Information on your n8n setup

Cloud setup

It looks like your topic is missing some important information. Could you provide the following if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

I found a workaround by doing everything through HTTP calls, but it’s painful and hard to maintain in the long run. Could anybody from the n8n team check this out (cc @Jon), as it looks like a bug to me?
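
For reference, the workaround has roughly this shape (a minimal sketch rather than my exact nodes; the index host, model names and the `metadata.text` field are assumptions that will differ per setup):

```typescript
// Assumed environment variables: OPENAI_API_KEY, PINECONE_API_KEY,
// and PINECONE_INDEX_HOST (e.g. "my-index-xxxx.svc.us-east-1-aws.pinecone.io").
const OPENAI_API_KEY = process.env.OPENAI_API_KEY!;
const PINECONE_API_KEY = process.env.PINECONE_API_KEY!;
const PINECONE_INDEX_HOST = process.env.PINECONE_INDEX_HOST!;

async function answerWithPineconeContext(question: string): Promise<string> {
  // 1. Embed the question with the same model that was used to build the index.
  const embRes = await fetch("https://api.openai.com/v1/embeddings", {
    method: "POST",
    headers: { Authorization: `Bearer ${OPENAI_API_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({ model: "text-embedding-ada-002", input: question }),
  });
  const embedding: number[] = (await embRes.json()).data[0].embedding;

  // 2. Query Pinecone over HTTP for the closest chunks.
  const queryRes = await fetch(`https://${PINECONE_INDEX_HOST}/query`, {
    method: "POST",
    headers: { "Api-Key": PINECONE_API_KEY, "Content-Type": "application/json" },
    body: JSON.stringify({ vector: embedding, topK: 4, includeMetadata: true }),
  });
  const matches: Array<{ metadata?: { text?: string } }> = (await queryRes.json()).matches ?? [];
  const context = matches.map((m) => m.metadata?.text ?? "").join("\n---\n");

  // 3. Put the retrieved chunks into the chat prompt explicitly —
  //    the step that seems to be dropped in the sub-workflow.
  const chatRes = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { Authorization: `Bearer ${OPENAI_API_KEY}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [
        { role: "system", content: `Answer using only this context:\n${context}` },
        { role: "user", content: question },
      ],
    }),
  });
  return (await chatRes.json()).choices[0].message.content;
}
```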

Thanks!!

Hey @Arnaud_Fournier,

Welcome to the community :tada:

I have just taken a look at this and I have not been able to reproduce the issue. It could be worth making sure you are using the OpenAI Chat Model node rather than the regular OpenAI Model node, but other than that I used the example workflow below to test that it was working.