Support injecting embedding metadata in Question and Answer Chain

Hi,
I need to be able to include the metadata of the matched embeddings in the context that is provided to the LLM. The current System Prompt Template of the Question and Answer Chain allows the answer prompt to be customized, but the only dynamic element is the document content.

Let me explain a common scenario which can only be handled cleanly by including metadata in the answer prompt. I have product descriptions as the embedding text, and each product has a sku_code. The user should be able to search for a product by product name or description. Adding the sku_code to the embedded text would hurt embedding similarity matching, since the user is not entering the sku_code in the search term. However, once the top n matches are found, if n8n supported injecting metadata fields for each match, the LLM would have this information in context and could produce structured output containing the sku_code of each match.
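For illustration, here is a minimal Python sketch of the behavior I'm asking for: the embedded text contains only the description, the sku_code lives in metadata, and the context string handed to the LLM is built from both. All names here are hypothetical and not an existing n8n API:

```python
# Hypothetical sketch: build the LLM context from retrieval matches,
# injecting metadata fields (e.g. sku_code) alongside the matched text.
# Only the "text" field was embedded; metadata is attached after retrieval.

def build_context(matches):
    """Format each match's text plus its metadata fields for the prompt."""
    lines = []
    for m in matches:
        meta = ", ".join(f"{k}: {v}" for k, v in m["metadata"].items())
        lines.append(f"{m['text']} ({meta})")
    return "\n".join(lines)

# Example top-n matches as a retriever might return them
matches = [
    {"text": "Wireless ergonomic mouse", "metadata": {"sku_code": "SKU-1042"}},
    {"text": "Mechanical keyboard, blue switches", "metadata": {"sku_code": "SKU-2210"}},
]

print(build_context(matches))
```

With a context built this way, the LLM can return structured output that includes the sku_code even though the code itself was never embedded.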

Here’s a relevant ChatGPT conversation that covers this in detail: ChatGPT - Embedding SKU Ignorance

I’m looking to migrate from custom LangChain-based code to n8n for our organization, and this is the one issue blocking me from moving our use case to n8n.

Thanks!
