AI Agent node only uses first row from Google Sheets tool

Hello, I’m new here. So far I’ve learned a lot from this community and have used some of the tips in building my first couple of n8n automations.

Now I am building an agent that helps me curate posts from news outlets that might be worth sharing with our clients. The agent just gives me a first indication of whether a post is worth looking into. It receives a list of topics with their descriptions from the linked Google Sheets tool, but it only processes the first row. I have no idea how to get it to check the summary against all rows.

As you can see from the input and the Google Sheets list below, it should definitely match Cisco.

Input

[
  {
    "title": "Virenscanner ClamAV: Große Aufräumaktion der Entwickler angekündigt",
    "link": "https://www.heise.de/news/Virenscanner-ClamAV-Entwickler-starten-Entruempelung-11087471.html",
    "pubDate": "2025-11-21T11:08:00.000Z",
    "content": "<section><a href=\"https://www.heise.de/news/Virenscanner-ClamAV-Entwickler-starten-Entruempelung-11087471.html\"><img src=\"https://heise.cloudimg.io/v7/_www-heise-de_/imgs/18/4/9/8/2/2/3/8/2025-11-21-ClamAV-Aufmacher-9167feb583fc4083.png?org_if_sml=1&amp;q=75&amp;width=450\" alt=\"ClamAV-Logo auf Hintergrund\"/></a><p>Entrümpelung beim Virenscanner ClamAV: Cisco lässt die Entwickler alte Signaturen rauswerfen, auch alte Docker-Images müssen gehen.</p></section>",
    "contentSnippet": "Entrümpelung beim Virenscanner ClamAV: Cisco lässt die Entwickler alte Signaturen rauswerfen, auch alte Docker-Images müssen gehen.",
    "summary": "Entrümpelung beim Virenscanner ClamAV: Cisco lässt die Entwickler alte Signaturen rauswerfen, auch alte Docker-Images müssen gehen.",
    "id": "urn:bid:4982238",
    "isoDate": "2025-11-21T11:08:00.000Z"
  }
]

Google Sheets

[
  {
    "row_number": 2,
    "topic": "Sicherheit",
    "description prompt": "Allgemeine News, Gesetzesänderungen und Kommentare zu Sicherheit im Telekommunikations und ISP Sektor"
  },
  {
    "row_number": 3,
    "topic": "Virenschutz",
    "description prompt": "Allgemeine News zu Virenschutz software, Anbietern von Virenschutzsoftwar, Gesetzesänderungen und Kommentare zu Virenschutzsoftware. Keine Leaks oder Breaches."
  },
  {
    "row_number": 4,
    "topic": "Cisco",
    "description prompt": "Alle Nachrichten zu Cisco"
  }
]

Output

[
  {
    "output": {
      "id": "urn:bid:4982238",
      "maybe_interesting": false,
      "themen": [],
      "confidence": 85,
      "reasoning": "The summary discusses a cleanup of the ClamAV virus scanner. While this is a security-related topic (\"Sicherheit\"), the description prompt for this topic is \"Allgemeine News, Gesetzesänderungen und Kommentare zu Sicherheit im Telekommunikations und ISP Sektor\". The summary does not specifically relate to the \"Telekommunikations und ISP Sektor\" and is more of a general product maintenance update rather than focused content for the specified sectors. Given the instruction to be rather restrictive, this summary does not sufficiently match the described audience interest."
    }
  }
]

  • n8n version: 1.118.1
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker with default config on Hostinger

Update to the latest n8n and AI Agent node, or use a workaround with batch/multi-item support. This should allow your agent to check the summary against all rows from your Google Sheet, not just the first one.
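If you can’t update right away, one batch-style workaround is to merge all sheet rows into a single string in a Code node before the agent sees them. This is a sketch in plain Node.js that mimics what such a Code node (in "Run Once for All Items" mode) would do; `buildTopicList` and the sample `rows` are illustrative, not part of n8n’s API:

```javascript
// Merge every Google Sheets row into one string, so a downstream
// node that only reads the first item still sees the full topic list.
// `rows` stands in for the items the Google Sheets node outputs.
const rows = [
  {
    topic: "Sicherheit",
    "description prompt":
      "Allgemeine News, Gesetzesänderungen und Kommentare zu Sicherheit im Telekommunikations und ISP Sektor",
  },
  { topic: "Cisco", "description prompt": "Alle Nachrichten zu Cisco" },
];

function buildTopicList(rows) {
  return rows
    .map((r) => `- ${r.topic}: ${r["description prompt"]}`)
    .join("\n");
}

console.log(buildTopicList(rows));
```

In an actual Code node you would read the items via `$input.all()` instead of the hardcoded `rows` array and return the joined string as a single item.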

It has been reported several times in multiple topics.

🙂

Have fun!

Thanks mate! I searched the docs, but always for Google Sheets, as I suspected that was my problem.

After updating n8n it now uses all the rows that are provided by the tool.

I didn’t have to change anything. What do you mean by batch/multi-item support?

It’s kind of a workaround using a Loop node, but since your problem is solved now, it isn’t necessary!

Glad the issue was solved just by updating your instance 🙂.

P.S. Sometimes I use workarounds just to avoid upgrading my n8n version and running into unusual issues, so I stay on the “stable” 1.113.

Cheers!

Yeah, it’s like updating npm packages. You might fix something here, but it might also break other stuff…

A follow-up question: it works, and I’m trying to optimize it. It used to pass each topic to the LLM separately; I’ve combined them in a subworkflow that returns a single string which can be appended to the prompt. This works, but it still makes two calls to the LLM, once without and once with the interesting topics list. I assume my prompt is flawed.

Any idea why?

This is the first run without the interesting topics

[
  {
    "messages": [
      "System: You are: the Researcher for our news outlet and your job is to take a quick glance if the summary matches our audiences interests. You can find a list of the topics in the \"interesting topics tool\". \n\nYour task:\ncheck if the summary matches any of the \"interesting topics\". Be rather restrictive, we only want the most relevant content. save all matched topics in the \"themen\" array and set the \"maybe interesting\" flag to true, else set it to false. Add your confidence rating in % and a very short reasoning\n\nIMPORTANT: For your response to user, you MUST use the `format_final_json_response` tool with your complete answer formatted according to the required schema. Do not attempt to format the JSON manually - always use this tool. Your response will be rejected if it is not properly formatted through this tool. Only use this tool once you are ready to provide your final answer.\nHuman: This is the Summary: Die Entwickler von IBM haben das Betriebssystem AIX gegen mögliche Angriffe abgesichert.\nThis is the post id: urn:bid:4982159"
    ],
    "estimatedTokens": 222,
    "options": {
      "google_api_key": {
        "lc": 1,
        "type": "secret",
        "id": [
          "GOOGLE_API_KEY"
        ]
      },
      "base_url": "https://generativelanguage.googleapis.com",
      "model": "gemini-2.5-flash"
    }
  }
]

and then the second time with the interesting topics

[
  {
    "messages": [
      "System: You are: the Researcher for our news outlet and your job is to take a quick glance if the summary matches our audiences interests. You can find a list of the topics in the \"interesting topics tool\". \n\nYour task:\ncheck if the summary matches any of the \"interesting topics\". Be rather restrictive, we only want the most relevant content. save all matched topics in the \"themen\" array and set the \"maybe interesting\" flag to true, else set it to false. Add your confidence rating in % and a very short reasoning\n\nIMPORTANT: For your response to user, you MUST use the `format_final_json_response` tool with your complete answer formatted according to the required schema. Do not attempt to format the JSON manually - always use this tool. Your response will be rejected if it is not properly formatted through this tool. Only use this tool once you are ready to provide your final answer.\nHuman: This is the Summary: Die Entwickler von IBM haben das Betriebssystem AIX gegen mögliche Angriffe abgesichert.\nThis is the post id: urn:bid:4982159\nAI: Calling Call 'Marketing / Get interesting topics' with input: {\"id\":\"49aa2c65-1055-45ca-bb46-81449d12b0ed\"}\nTool: [{\"interesting topics\":\"- Sicherheit: Allgemeine News, Gesetzesänderungen und Kommentare zu Sicherheit im Telekommunikations und ISP Sektor\\n- Virenschutz: Allgemeine News zu Virenschutz software, Anbietern von Virenschutzsoftwar, Gesetzesänderungen und Kommentare zu Virenschutzsoftware. Keine Leaks oder Breaches.\\n- Cisco: Alle Nachrichten zu Cisco\\n- IBM: Alle Nachrichten zu IBM\"}]"
    ],
    "estimatedTokens": 349,
    "options": {
      "google_api_key": {
        "lc": 1,
        "type": "secret",
        "id": [
          "GOOGLE_API_KEY"
        ]
      },
      "base_url": "https://generativelanguage.googleapis.com",
      "model": "gemini-2.5-flash"
    }
  }
]

The AI Agent first runs to decide if it needs to use a tool (in your case, the “interesting topics” subworkflow).

If it decides to use the tool, it calls the tool, receives the result, and then runs the LLM again with the new information (now including the topics list).

This results in two LLM calls: one before the tool is used (to decide whether the tool is needed), and one after the tool returns data (to generate the final output with the tool’s result). This is a standard pattern for agents, as far as I’m aware…
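The loop can be sketched like this in plain Node.js; `callLLM` and the `tools` map are stand-ins for the model and the n8n tool, not real APIs:

```javascript
// Minimal sketch of a tool-calling agent loop: each turn makes one
// LLM call, and the loop only ends when the model stops requesting tools.
async function runAgent(messages, tools, callLLM) {
  while (true) {
    const reply = await callLLM(messages); // one LLM call per turn
    if (!reply.toolCall) return reply;     // final answer -> done
    // Execute the requested tool and feed its result back to the model.
    const result = await tools[reply.toolCall.name](reply.toolCall.args);
    messages.push({ role: "tool", name: reply.toolCall.name, content: result });
  }
}
```

With a single tool call, that is exactly two LLM invocations: turn 1 decides to call the tool, turn 2 produces the final answer.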

Anyway, if you want to avoid the double call, you would need to restructure your workflow so that the topics are fetched and appended to the prompt before the AI Agent node is called, rather than fetched as a tool within the agent. This way, the agent receives all the necessary information in a single call.
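One way to sketch that single-call setup (plain Node.js; `buildSystemPrompt` and the prompt wording are illustrative, only the topic data comes from the thread): fetch the rows first, inline them into the system prompt, then hand that to the agent so no tool round-trip is needed:

```javascript
// Build the complete system prompt up front: with the topic list
// already inlined, the agent has no reason to call a tool, so the
// run costs a single LLM call.
function buildSystemPrompt(topics) {
  const list = topics
    .map((t) => `- ${t.topic}: ${t.description}`)
    .join("\n");
  return (
    "You are the Researcher for our news outlet. " +
    "Check if the summary matches any of these interesting topics:\n" +
    list
  );
}

console.log(
  buildSystemPrompt([{ topic: "Cisco", description: "Alle Nachrichten zu Cisco" }])
);
```

In n8n terms: put the Google Sheets node and a Code node doing this before the AI Agent node, and reference the result in the agent’s system message expression.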

But that means restructuring your workflow and going a bit “backwards” from how n8n built this…

Your choice, my friend!

Cheers!

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.