How to bypass the OpenAI Chat Model's processing of prepared responses from a Code Tool in the AI Agent node?

Hello,

I have implemented a system where a query is first searched in a vector store. If no match is found there, a predefined phrase from the Code Tool is used instead. However, I've run into an issue: the prepared phrase from the Code Tool is being further processed by the OpenAI Chat Model, which alters its content.

How can I ensure that the prepared response from the Code Tool is sent directly to the user, without being processed or modified by the OpenAI Chat Model?

Thank you for your assistance!

Information on your n8n setup

  • n8n version: 1.75.2
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system: Ubuntu

Hi @walker

So you're basically trying to use the Code Tool as a fallback if nothing was found in the vector store, correct?

This may work better if you put the fallback logic directly into the AI Agent's system prompt (though that may not be very reliable depending on the model).

You could also try a different architecture altogether: use a simple LLM Chain to receive the user input, follow it with a direct vector store call, then use an If node (or similar) to check whether the search result is sufficient. If it is, a second LLM Chain answers the user; if not, the fallback Code node returns your prepared phrase as-is, so it never passes through the Chat Model.
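To make the branching step concrete, here is a minimal sketch of the decision logic you would put between the vector store call and the two output branches. All names here are illustrative assumptions: the vector store node is assumed to return an array of matches with a similarity `score` field, and `MIN_SCORE` and `FALLBACK_TEXT` are values you would choose yourself.

```javascript
// Hypothetical sketch of the hit/miss decision (names are assumptions,
// not n8n APIs). In a workflow this would be an If node condition plus
// a Code node for the fallback branch.
const FALLBACK_TEXT =
  "Sorry, I couldn't find an answer to that. Please contact support.";
const MIN_SCORE = 0.75; // threshold below which the search counts as a miss

function pickResponse(matches) {
  const best = matches && matches.length ? matches[0] : null;
  if (!best || best.score < MIN_SCORE) {
    // Miss: return the canned phrase directly, bypassing any LLM step.
    return { output: FALLBACK_TEXT, source: "fallback" };
  }
  // Hit: hand the match text to the answering LLM Chain.
  return { output: best.text, source: "vectorstore" };
}
```

The key point is that the fallback branch terminates the flow with the prepared text itself, while only the hit branch continues on to a model call.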

So something like this:

(sorry for the bad quality, was in a rush)

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.