Trouble with AI Agents

I am having a strange issue trying to use the AI Agent node (conversational agent). Any time I add more than two tools, the AI gets stuck in a loop, burns through all of its max iterations, and the workflow stops with a max-iterations error.
This only happens when I add more than two tools, and it does not matter whether those tools are custom workflow triggers, HTTP requests, custom code, etc. The moment a third tool is added I get no output, just the max-iterations error, and the execution log shows the same input/output repeating over and over, eating through my tokens.

Information on your n8n setup

  • n8n version: 1.49
  • Database (default: SQLite): default
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): self hosted npm
  • Operating system: Windows 11

An update on this: looking at the agent logs, I can see where the issue may be. I keep seeing these prompts (hardcoded prompts that I did not write), and they keep telling the AI either that its response was incorrect or that there was an error:

Okay, so what is the response to my last comment? If using information obtained from the tools you must mention it explicitly without mentioning the tool names - I have forgotten all TOOL RESPONSES! Remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else.
AI:
```json
{
  "action": "respond",
  "action_input": "I apologize for the confusion earlier. Based on the research I've gathered, trust plays a crucial role in various types of relationships. Multiple studies emphasize both cognitive and affective dimensions of trust in building relationships. The neuroscience of trust shows that building a culture of trust can significantly enhance productivity, collaboration, and employee loyalty. Research also highlights the importance of trustworthiness in qualitative research and emphasizes the significance of trust between research participants and researchers. Would you like me to elaborate on any specific aspect of trust in relationships?"
}
```

Human: TOOL RESPONSE:
---------------------
respond is not a valid tool, try another one.


Human: TOOL RESPONSE:
---------------------
There was an error: "There was an error: "The workflow did not return an item with the property 'commit'""

USER'S INPUT
--------------------

Okay, so what is the response to my last comment? If using information obtained from the tools you must mention it explicitly without mentioning the tool names - I have forgotten all TOOL RESPONSES! Remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else.
AI:
```json
{
  "action": "commit_data",
  "action_input": {
    "type": "research",
    "commit": "Trust plays a crucial role in various types of relationships, according to research. Key findings include:\n1. Trust impacts attachment styles, relationship patterns, and the likelihood of relational success.\n2. Establishing trust takes time and effort in most relationships.\n3. Trust is naturally established between parents and children, spouses, partnerships, and friendships.\n4. Trust in relationships with parents may influence future health and relational developments.\n5. Lack of trust and adverse childhood experiences can contribute to unhealthy outcomes in the future.\n6. Children who lack trust with parents may be less likely to engage in relational developments with other children.\n7. Trust is built by recognizing and responding to one's negative affect, which fosters attunement and comfort in discussing conflict and stress.\n8. The Adverse Childhood Experiences Study (ACES) showed that children who experienced conflicts such as abuse, neglect, or familial conflict were more likely to develop depressive symptoms, substance abuse issues, and increased antisocial behavior.\n9. A Harvard study found that adults who described their parental relationships as lacking warmth and closeness were 45% more likely to have developed serious medical illnesses in mid-life.\nThese findings highlight the significant impact of trust on relationship dynamics, mental health, and long-term well-being."
  }
}
```

Human: TOOL RESPONSE:
---------------------
There was an error: "There was an error: "The workflow did not return an item with the property 'commit'""
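
Digging into this a bit more: from what I can tell, the agent's internal prompt only accepts a JSON blob whose "action" is either the name of one of the attached tools or the agent's final-answer action, so made-up actions like "respond" or "commit_data" just trigger another retry. If I understand the underlying LangChain conversational format correctly (the exact sentinel string may vary by version), a valid final reply should look something like this:

```json
{
  "action": "Final Answer",
  "action_input": "Trust plays a crucial role in various types of relationships. ..."
}
```

And my best guess on the 'commit' error is that the custom workflow tool expects the sub-workflow's last node to return an item that actually contains the property being looked up, i.e. something shaped like this (property name taken from the error message above, value purely illustrative):

```json
{
  "commit": "Summary text for the agent to read back"
}
```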

Hey @Anthony_Lee,

Tool selection can be a tricky one sometimes. The model you are using needs to be smart enough to know which tool to select. I would have thought Claude would be smart enough to pick the right tool, but what I am not sure about is whether the data will flow from one tool to another, which may be what you are trying to do.

@oleg do you have any ideas on this one?

Unfortunately, even after cutting back to two tools I cannot get this to work. Claude keeps inventing tools that are not listed in the prompt, triggering the “this is not a valid tool” error. And GPT-4o keeps getting rate limited because its output keeps getting flagged as not being in the proper format. I will need to build this without the agent by making the API calls to pull in context myself, though I am still trying to work out how to loop it so that it can remain a back-and-forth chat.

This is a bit frustrating as I am trying to build out something for a client. I just want a chatbot that has access to a couple of API calls: one to a vector database (Qdrant with a handful of collections) and one to Perplexity. But even when I define exactly which tools are available and how to structure the input/output for each tool, it keeps messing up.

Interesting. What happens if you use a ‘Tools Agent’ instead of the conversational one? Also, as Jon mentioned, can you try a different language model? I am using OpenAI’s gpt-4o-mini and it’s pretty good for what I need; I use about 12 tools right now.
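
For context, my understanding is that the Tools Agent passes your tool definitions to the model through its native function-calling API instead of describing them inside a text prompt, so the model returns a structured tool call rather than JSON it has to format itself. An OpenAI-style tool definition looks roughly like this (the name, description, and parameters here are purely illustrative):

```json
{
  "type": "function",
  "function": {
    "name": "search_book_context",
    "description": "Search for past context saved while helping the user write their book.",
    "parameters": {
      "type": "object",
      "properties": {
        "query": {
          "type": "string",
          "description": "What to look up in the saved context"
        }
      },
      "required": ["query"]
    }
  }
}
```

That structure leaves much less room for the “improper output” loop you are seeing.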

GPT-4o does seem to understand how to use the tools better, but it keeps producing “improper output”, which eats up extra tokens and gets me rate limited.
I’ve had this issue with the Tools Agent as well. It seems to be related to the internal prompting that asks the LLM to produce a specific output format: every time the model emits markdown or anything extra around the JSON, the internal prompt complains that the output isn’t proper JSON, so the model has to go back and forth a few times. That back-and-forth is what gets me rate limited, as this is a relatively new OpenAI account.

Hey @Anthony_Lee

Some observations on your tool descriptions:

Example 1:

Call this tool to get context from a vector database that will assist in writing the book The Evolved Man.

For me, this is a classic case of focusing too much on the “how” (see my previous response to this). What response should the agent expect from this tool? How can it determine that the response was a success? If it can’t, it’ll just keep retrying/asking and you’ll get stuck.
In this scenario, I try to put myself in the virtual shoes of the agent:

  • Why should I use this tool? It helps the “user” write their book.
  • When should I use this tool? When the user mentions or asks about something that was discussed previously/in the past.

Try this or some variation of:

Call this tool to search for past context that was saved whilst helping the user write their book.

Example 2:

Call this tool to make an API call to Perplexity AI that will do a real-time web lookup for research papers on a topic.

Be careful of keywords in your tool description, as these also act as trigger words for tool use:

  • “Perplexity AI” - did the user specifically request to use Perplexity?
  • “real-time web lookup” - did the user specifically ask for real-time results, or does it matter if they are slightly delayed?
  • “research papers” - did the user specifically ask for research papers, or do they just want a summary of the research?

Sometimes being overly specific or service-dependent means the tool won’t be used at all, or will be used incorrectly.

Try this instead:

Call this tool to research a topic suggested by the user.

  • “research a topic” - doesn’t care about which service is used; it focuses on purpose: just send the topic to be researched.
  • “suggested by the user” - only trigger this when the user mentions or requests it, or when it is a good time to do so. Good for limiting extraneous calls. (See the sketch below for how the two rewritten descriptions fit together.)

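Putting the two rewrites together, the agent would effectively see something like this (the tool names are placeholders I made up; only the descriptions come from the examples above):

```json
[
  {
    "name": "book_context",
    "description": "Call this tool to search for past context that was saved whilst helping the user write their book."
  },
  {
    "name": "research_topic",
    "description": "Call this tool to research a topic suggested by the user."
  }
]
```

Short, purpose-focused descriptions like these give the model fewer spurious trigger words to latch onto.
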
Overall, I think you have quite an ambitious project on your hands, but don’t give up. I’ll be cheering you on!

I cooked up this example workflow to hopefully help you debug your tool problem.




I truly appreciate your effort here. While I was unable to use your in-memory vector store (we have a very important first draft in the current vector DB), it looks like your simplified prompts did the trick!

Thank you.

