Force AI Agent to use the tool

Hey community,

I am developing an AI agent that has a large Supabase PGVector knowledge base. The problem is that the AI agent often doesn’t use the tool; most of the time, in fact. I have specified in the prompt that it should use the tool called “CompanyKnowledgeBase” (the name of the Supabase Vector Store node), but with no result.

Could it be because I am using an output parser? I am having other issues with the output parser as well.

System prompt:
You are AI Coach and starting a test XYZ.

Explain what this test is about, searching the CompanyKnowledgeBase.

Then present EXACTLY the following first next statement and ask the user to give a rating from 1–5: ${$input.first().json.Item}

Follow the provided JSON schema for your response:
{
  "type": "object",
  "properties": {
    "output": {
      "type": "object",
      "properties": {
        "canProceedWithNextQuestion": { "type": "boolean", "description": "true in this case" },
        "previousScore": { "type": ["number", "null"], "description": "null in this case" },
        "response": { "type": "string", "description": "Your response" }
      },
      "required": ["canProceedWithNextQuestion", "previousScore", "response"],
      "additionalProperties": false
    }
  },
  "required": ["output"],
  "additionalProperties": false
}

This is what the output parser schema looks like:
{
  "canProceedWithNextQuestion": "Whether we can proceed with the next question, if the last one has been answered with a score (true or false)",
  "previousScore": "The user's numeric answer (1–5) to the question, or null if invalid",
  "response": "Your statement and next question or user query"
}
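
For reference, a valid first response from the agent (following the schema in the system prompt) should look roughly like this, with the response text just being a placeholder:

{
  "output": {
    "canProceedWithNextQuestion": true,
    "previousScore": null,
    "response": "This test is about ... Please rate the following statement from 1 to 5: ..."
  }
}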

Can anyone help?

Would you mind updating your question with your workflow so we can get a better understanding of the problem?
It could just be a name mismatch or something simple, but if you want to improve how your tools are used, you can use the MCP client/server for better flexibility and tool descriptions.

If you want to force your agent to retrieve content from the vector store, then use the Question and Answer node like below. If you have a chatbot / agent doing tool calls, then wrap this example in its own workflow and call that workflow as a tool from the main agent.

Basically I am trying to implement some kind of quiz. There are a couple of cases here:

  1. The quiz starts → AI Agent should provide the initial message and the first question
  2. Normal procedure → AI Agent should answer queries or save the answer and propose the next question (see the example below)
  3. After every X questions → AI Agent should give a score
  4. After all of the questions → AI Agent should give a final score and talk about the results
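
For example, in case 2, after the user answers a statement with, say, a 4, I would expect the agent to return something roughly like this (following the same schema):

{
  "output": {
    "canProceedWithNextQuestion": true,
    "previousScore": 4,
    "response": "Noted. Here is the next statement: ... Please rate it from 1 to 5."
  }
}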

Why am I using two agents?
This is also due to another issue, since I need the “score interpretation” output. When I had both in one agent, the output parser caused me to get interpretations when I shouldn’t. Also, the output parser doesn’t work right; sometimes I would get JSON from the other agent…