I have a sub-workflow with a Q&A chain that retrieves data from a Pinecone vector store. When the workflow is triggered, the retrieval itself works well (i.e. the result is fetched from the Pinecone store), but it is not passed on to the subsequent model prompt. The output is therefore useless, as the model never sees the retrieved data.
I found a workaround by doing everything through HTTP calls, but it's painful and hard to maintain in the long run. Can anybody from the n8n team check this out (cc @Jon), as it looks like a bug to me?
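For context, my HTTP workaround boils down to two calls wired together with HTTP Request nodes: a Pinecone `/query` to fetch the top matches, then an OpenAI chat completion with the retrieved chunks stuffed into the prompt. A rough sketch of the two request bodies (the index host, metadata field name, and model are placeholders from my setup, not values anyone else should copy):

```python
# Sketch of the two HTTP bodies the workaround builds; in n8n these are
# assembled in Code nodes and sent via HTTP Request nodes.

def build_query_payload(vector, top_k=3):
    # Body for POST {index-host}/query on Pinecone: the question embedding
    # plus how many matches to return, with metadata so we get the raw text.
    return {"vector": vector, "topK": top_k, "includeMetadata": True}

def build_chat_payload(question, context_chunks, model="gpt-3.5-turbo"):
    # Body for POST https://api.openai.com/v1/chat/completions: the retrieved
    # chunks are pasted into the system prompt so the model actually sees them.
    context = "\n".join(context_chunks)
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    }
```

The glue between the two is just mapping each match's `metadata.text` (whatever field holds the chunk text in your index) into `context_chunks`. It works, but every schema detail the Q&A chain normally handles for you has to be maintained by hand.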
I have just taken a look at this and I have not been able to reproduce the issue. It could be worth making sure you are using the OpenAI Chat Model node and not the regular OpenAI Model node; other than that, I used the example workflow below to confirm it was working.