AI Agent is not working properly

Hello everyone, I’m currently using an Agent that calls tools on a Jira Server. However, I’m still facing an issue that I haven’t figured out how to solve: when the AI calls the tool several times to fetch information or creates one or two items, it starts responding as if it has actually performed the action, even though it hasn’t.

For example, if I ask it to create a task in Jira called “Task 1,” and then later ask it to create a new task called “Advanced Task 1,” it skips the tool call and directly responds as if it has created it, simply based on what’s in memory instead of actually performing the creation.

You might be able to solve this by improving your system prompt. Are you able to share the workflow JSON?

Sorry, I may not be able to share the workflow, as it is corporate property.

**System Message:**

You are an AI Agent used in n8n, responsible for communicating with and performing actions on the Jira REST API (9.1x.x).
Do not infer or simulate actions. If unsure about information, do not assume.

Tools to Use:

  • “Jira Tool Get Method”: For GET requests

  • “Jira Tool Method POST”: For POST requests

  • “Jira Tool Method PUT”: For PUT requests

Jira Status Definitions:

  • To Do: Task has been assigned but not yet started.

  • Pending: Task has been assigned but is temporarily paused; not yet completed.

  • In Progress: Task is currently being worked on; not yet completed.

  • Confirmming: Task is not completed and requires confirmation before proceeding to testing or adding new requirements; not yet completed.

  • Q&A: Task requires clarification; not yet completed.

  • Feedback: Task awaiting feedback; not yet completed.

  • Resolved: Task is done and self-tested, waiting for confirmation from relevant parties; not yet completed.

  • Rejected: Task has been declined and will not be performed.

  • Deployed: Task has been deployed to servers or test environments successfully; not yet completed.

  • Done: Task is finished and fully completed.


MANDATORY RULES:

1. Large Data Handling:

  • When receiving >10 issues: Display only the first 5–10 issues and show the message:
    "Showing the first {number} issues out of {total} total issues."

  • Sorting priority: Show newest issues first (by creation/update time).

  • Smart filtering: If the user asks about a specific status, only display issues with that status.

2. Request Format:

All requests must include the header:
Authorization: Bearer <token>

3. Information Returned After GET Issue:

  • ONLY return the following fields (do not translate the status to avoid errors)

  • Use “Get Issue Detail” to retrieve data (excluding status translation)

Result Format Example:

Issue {key}  
Status: {status.name}  
Assignee: {assignee.name}  
Priority: {priority.name}  
Link: <a href="{{ $json.jira_url }}/browse/{key}">{key}</a>  
Summary: {summary}

4. PUT Request Rules (Update):

  • MUST confirm with the user about the Issue Key and JSON Body before performing PUT.

  • DO NOT execute PUT automatically without user confirmation.

  • If updating status, retrieve transitions first, then update through “Update Status Issue.”

5. POST Request Rules (Create New Issue):

  • DO NOT add custom fields automatically.

  • After creating, automatically fetch the new issue and display it using the standard format.

6. Search Rules:

  • Fields to retrieve:
    key, summary, duedate, status, assignee, priority, timeoriginalestimate, timespent

  • Accurate filtering: Status “Done” means only “Done,” not “Deployed.”

  • No caching: Always fetch the most recent data.

  • Example JQL:
    project = DXTEAM AND status = "In Progress" ORDER BY updated DESC

  • Final API link example:
    {{ $json.jira_url }}/rest/api/2/search?jql=project=NGAV&maxResults=1000&fields=key,summary,status,assignee,priority,timespent,timeoriginalestimate,duedate

7. Data Analysis Rules:

  • Default behavior: Display only raw data from the API.

  • Perform analysis: Only when the user explicitly requests “analyze” or “statistics.”

8. Markdown Limitations:

Only the following tags are allowed:
<a>, <b>, <i>, <ul>, <li>

Supported Endpoints:

| Action | Method | Endpoint | Tool |
| --- | --- | --- | --- |
| Get issue detail | GET | {{ $json.jira_url }}/rest/api/2/issue/{issue_key} | “Jira Tool Get Method” |
| Search issues | GET | {{ $json.jira_url }}/rest/api/2/search | “Jira Tool Get Method” |
| Create issue | POST | {{ $json.jira_url }}/rest/api/2/issue | “Jira Tool Method POST” |
| Transition status | POST | {{ $json.jira_url }}/rest/api/2/issue/{issue_key}/transitions | “Jira Tool Method POST” |
| Add comment | POST | {{ $json.jira_url }}/rest/api/2/issue/{issue_key}/comment | “Jira Tool Method POST” |
| Update issue | PUT | {{ $json.jira_url }}/rest/api/2/issue/{issue_key} | “Jira Tool Method PUT” |
| Get next transitions (status) | GET | {{ $json.jira_url }}/rest/api/2/issue/{issue_key}/transitions | “Jira Tool Get Transitions” |
| Update issue status | POST | {{ $json.jira_url }}/rest/api/2/issue/{issue_key}/comment | “Jira Tool Update Status Transitions” |

Common Error Handling:

  • Too much data: Automatically limit display and notify the user.

  • Incorrect status translation: Do not translate; keep the original status name.

  • Assignee is null: Display “Unassigned.”

What I can confirm is that Agent node version 2.2 with the Gemini 2.0 model was working better: it called the tools 8/10 times (despite the low memory limit)…
Now I changed the Agent node to version 3, and Gemini 2.0 only called the tools 2/10 times, USING THE SAME SYSTEM PROMPT.

I kept Agent node version 3 and switched the model to Gemini 2.5, and now it calls the tool 7/10 times…

In summary, with the latest Agent node version, I needed a better Gemini model.

I use OpenAI as the main model, with Gemini only as a backup if OpenAI returns an error.


One thing you could try is reducing the “Sampling temperature” down to 0, that generally makes them perform in a more deterministic way.


I fixed it by using a newer and better model, and although the cost is quite high, I no longer encounter this issue. My advice: if you want to optimize costs, instead of having the agent call tools directly, ask the AI to return a JSON response and pass it through condition-checking nodes that execute the matching tool. Continue this process until the tool type corresponds to the final step you want; that way, the AI doesn’t have to decide whether or not to call a tool, it simply acts as the commander.
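That commander pattern can be sketched in a few lines. This is a hedged Python sketch: the JSON shape, tool names, and handler bodies are hypothetical illustrations, not n8n's actual node format. In n8n itself, the handler map would be Switch/If branches leading to HTTP Request nodes.

```python
import json

# Hypothetical tool handlers; in a real workflow each would perform
# the corresponding Jira REST call instead of returning canned data.
def create_issue(args):
    # Would POST to {jira_url}/rest/api/2/issue here.
    return {"key": "DEMO-1", "summary": args["summary"]}

def get_issue(args):
    # Would GET {jira_url}/rest/api/2/issue/{key} here.
    return {"key": args["key"], "status": "To Do"}

HANDLERS = {"create_issue": create_issue, "get_issue": get_issue}

def dispatch(model_output: str):
    """Parse the model's JSON reply and route it to the matching handler.

    The model never executes anything itself; it only names the tool
    and its arguments, acting as the 'commander'.
    """
    command = json.loads(model_output)
    tool = command.get("tool")
    if tool == "finish":
        return command.get("message")
    handler = HANDLERS.get(tool)
    if handler is None:
        raise ValueError(f"Unknown tool: {tool}")
    return handler(command.get("args", {}))

# Example: the model asked for an issue to be created.
result = dispatch('{"tool": "create_issue", "args": {"summary": "Task 1"}}')
```

Because the code, not the model, decides what actually runs, the model cannot "skip" an execution step; at worst it emits a malformed command, which the dispatcher rejects loudly.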

Hi @Global_Virapid

The issue you’re facing, where the agent claims to have performed an action without actually calling the tool, is a common challenge. It happens because the AI learns from the conversational history stored in its memory. When it sees a past success, like creating “Task 1,” it incorrectly assumes that a similar new request, such as creating “Advanced Task 1,” can be completed just by generating a confirmation message, thereby skipping the necessary tool call.

A very effective way to fix this is to be more explicit in your instructions to the agent. You can strengthen your system prompt by adding a clear rule, such as: “You must always use the Jira tool to create, update, or fetch information. Never confirm that an action has been completed unless you have received a success message from the tool first.” This makes the expected behavior much clearer. Similarly, ensure the descriptions for your Jira tools are precise about when they must be used.

Another crucial solution lies in how you manage the agent’s memory. To prevent the agent from learning the wrong pattern, you must ensure that the memory saves the complete interaction sequence. This includes the user’s initial request, the agent’s decision to call the Jira tool, the parameters sent to the tool, the actual success or failure message from the Jira API, and only then the agent’s final response to the user. By saving this full context, the agent learns that the tool call is a non-negotiable step in the process.
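As a rough illustration of that sequence, here is what a well-formed memory trace could look like after one successful creation. The message shapes, field names, and tool names below are illustrative assumptions, not a specific n8n memory schema.

```python
# Sketch: the full four-step trace that should be stored for every
# create/update request. A bare user -> assistant pair (with no tool
# call and no tool result in between) is exactly the pattern that
# teaches the agent it can skip the tool.
memory = []

def remember(role, content, **extra):
    entry = {"role": role, "content": content, **extra}
    memory.append(entry)
    return entry

remember("user", 'Create a task in Jira called "Task 1"')
remember("assistant", None,
         tool_call={"name": "Jira Tool Method POST",
                    "args": {"summary": "Task 1"}})
remember("tool", '{"key": "DXTEAM-101"}', name="Jira Tool Method POST")
remember("assistant", "Created DXTEAM-101: Task 1")

roles = [m["role"] for m in memory]
```

If the stored history only ever contains the first and last entries, the model has no evidence that the middle two steps are mandatory.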

For a more robust and foolproof solution, you can change your system’s architecture to separate the AI’s decision-making from the action’s execution. In this design, the AI’s only responsibility is to determine which tool to call and with what information. Your application’s code then takes this instruction, executes the tool call to Jira, and only after receiving a successful response from the Jira server does it inform the user that the task was created. This completely removes the possibility of the agent claiming an action succeeded when it never actually happened.
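That execution-side check can be sketched as follows. This is a minimal Python sketch under stated assumptions: the URL and project key are placeholders, and the HTTP client is injected as a function so the flow can be shown without a live Jira server (Jira's create-issue endpoint returns HTTP 201 with the new issue key on success).

```python
# Sketch of the "execute in code, confirm only on real success" pattern.
# http_post stands in for a real HTTP client (e.g. requests.post),
# injected so the logic is testable without network access.
JIRA_URL = "https://jira.example.com"  # placeholder

def create_and_confirm(summary, http_post):
    payload = {"fields": {"project": {"key": "DXTEAM"},
                          "summary": summary,
                          "issuetype": {"name": "Task"}}}
    status, body = http_post(f"{JIRA_URL}/rest/api/2/issue", payload)
    if status == 201:  # Jira returns 201 Created with the new issue key
        return f"Created issue {body['key']}: {summary}"
    # No success response from Jira -> no success message to the user.
    return f"Failed to create issue (HTTP {status})"

# Fake transport for illustration: pretend Jira accepted the request.
def fake_post(url, payload):
    return 201, {"key": "DXTEAM-102"}

message = create_and_confirm("Advanced Task 1", fake_post)
```

The key design choice is that the user-facing confirmation string is only built inside the success branch, so there is no code path where the user is told the task exists before Jira says so.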

If my reply is helpful, kindly click like and mark it as an accepted solution.
Thanks!


Yes, if we separate the AI and have it only return JSON, executing any tool is quite efficient. But that makes it a bit difficult for non-professionals at my company to build, so I am temporarily using a more expensive model.

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.