[LangChain] Problem with Pinecone Insert

Hi everyone,

I just wanted to experiment with this template:
https://auto.lead2chat.de/templates/1960

But for some reason the Pinecone node is not finding my index. I did the following setup:

Index-Name: allianz-tkv
METRIC: dotproduct
DIMENSIONS: 512
POD TYPE: starter


I rechecked the environment settings, and it’s the same environment I set up with the API key.

Does anyone know what I’m doing wrong?

Hi Bastian, glad to hear you’re interested in trying the new AI nodes.
From what I can tell, you filled out the “Pinecone Index” field incorrectly. Your index name is “allianz-tkv”.
The namespace field can stay empty: the node looks for a specific namespace inside the allianz-tkv index, so if you didn’t insert anything there, it’s most likely the default empty namespace. You can view your namespaces in Pinecone by clicking on the index and then navigating to the “Namespaces” tab.
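To make the two fields concrete, here is a minimal sketch of the mapping described above. The helper and its field names are hypothetical illustrations (not part of n8n or Pinecone); the point is that “Pinecone Index” must be the exact index name, and a missing namespace falls back to Pinecone’s default empty namespace.

```python
def pinecone_node_fields(index_name, namespace=None):
    """Illustrative helper: the two values the Pinecone node needs.

    'Pinecone Index' must be the exact index name (here "allianz-tkv");
    an empty namespace means Pinecone's default "" namespace.
    """
    return {
        "pineconeIndex": index_name,      # exact index name, not a label
        "namespace": namespace or "",     # empty string = default namespace
    }

print(pinecone_node_fields("allianz-tkv"))
```

In Pinecone itself you can confirm which namespaces exist under the index’s “Namespaces” tab, as mentioned above.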


Oh damn, it was a long day and I just wanted to test it out.

It’s working now, but I had problems with the dimensions at first, as I didn’t know how many dimensions my PDF has. When I tried it, it said it has 1536 dimensions, so I created a new index with that size.

I noticed a few problems:

  • The context of the previous question/answer is not taken into consideration.
  • I had a problem finding a reference that was described in the plural in my document when I asked in the singular (e.g. question: “is there a waiting period with an accident”, document: “there are no waiting periods in case of accidents”, answer: “there is a waiting period of 3 months”. But I tried again and then it worked.)
  • For the default reply, or if the question is too vague, it just gives a super generic output.

Is there a way to implement memory like in the other templates, so that I can actually get the context of the last question and answer? I think this would really help to improve it.

Are there any rule sets we can give the “Chat OpenAI” node?

Thanks for your good work, this makes it really easy to use!

Cheers,
Bastian

Glad to hear it worked!

The number of dimensions is determined by the embedding model. So if you used OpenAI (text-embedding-ada-002), it would indeed be 1536.
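As a quick sanity check before creating an index, you can keep a small lookup of the models you use. The 1536 value for text-embedding-ada-002 is documented by OpenAI; any other entries you add are assumptions to verify against your model’s documentation.

```python
# Embedding model -> vector dimension. ada-002's 1536 is from OpenAI's
# docs; add entries for other models only after checking their docs.
EMBEDDING_DIMENSIONS = {
    "text-embedding-ada-002": 1536,
}

def index_dimension(model):
    """Return the dimension a Pinecone index must be created with."""
    try:
        return EMBEDDING_DIMENSIONS[model]
    except KeyError:
        raise ValueError(f"Unknown embedding model: {model!r}")

print(index_dimension("text-embedding-ada-002"))  # 1536
```

Creating the index with a mismatched dimension is exactly the failure mode described above: inserts are rejected until the index size matches the embedding size.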

To answer your points:

  1. The chain itself doesn’t support memory. To combine memory with QA, you’d need to create an agent and provide it with a workflow tool. The workflow tool should point to a workflow that has the QA chain set up. We went over a similar use case in the recent webinar on LangChain; you can find the highlights here
  2. It’s unlikely it would fail because of plurals, as it uses embeddings to retrieve the relevant documents. Perhaps it was a case of the model hallucinating. It’s best practice to set the model temperature to 0 when doing QA retrieval to reduce hallucinations.
  3. Can’t help with this one, as it highly depends on the retrieved context and your prompt; you’ll need to experiment to find what works best for your use case.
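To illustrate point 2, here is a minimal sketch of the model settings relevant to retrieval QA. The field names mirror the OpenAI chat API; exactly how they map onto the n8n Chat OpenAI node’s options is an assumption.

```python
# Settings sketch for retrieval QA (point 2 above). temperature=0 makes
# the model pick the most likely tokens, so answers stay closer to the
# retrieved context instead of hallucinated details.
qa_model_settings = {
    "model": "gpt-3.5-turbo",  # assumption: any chat model, same principle
    "temperature": 0,          # deterministic; reduces hallucinations in QA
}

print(qa_model_settings["temperature"])
```

Higher temperatures are fine for creative generation, but for answering from retrieved documents you generally want the model adding as little of its own as possible.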