Chat trigger has no output after sending message

Describe the problem/error/question

I post a chat message, but the “output” section of the “When chat message received” node stays empty, which causes the downstream AI Agent node to fail because it has no input available.

It did seem to work with just the Calculator and YouTube tools on the AI Agent. After I added the HTTP Request Tool it stopped working, but I’m not sure whether that’s related.

The Chat Node shows one item, but it’s empty:

What is the error message (if any)?

When I try to run the AI Agent, the following issues show up:

Issues:

  • Parameter “Prompt (User Message)” is required.
  • Parameter “Text” is required.
  • Parameter “Prompt (User Message)” is required.
  • Parameter “Text” is required.

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

{
  "nodes": [
    {
      "parameters": {
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.chatTrigger",
      "typeVersion": 1.1,
      "position": [
        -400,
        -100
      ],
      "id": "717a9cdc-ad52-4d75-bce7-df5967d64979",
      "name": "When chat message received",
      "webhookId": "5348f55d-e7e0-41d3-b394-145cdbd4b1d6"
    },
    {
      "parameters": {
        "promptType": "=You are an intelligent personal assistant. Your role is to use Youtube APIs to satisfy the following request:\n\n{{ $json.chatInput }}",
        "text": "",
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.agent",
      "typeVersion": 1.7,
      "position": [
        0,
        -100
      ],
      "id": "cb1a9602-6aa4-4c92-9c76-1a7223f3f84f",
      "name": "AI Agent"
    },
    {
      "parameters": {
        "model": "qwen2.5:7b-8k",
        "options": {}
      },
      "type": "@n8n/n8n-nodes-langchain.lmChatOllama",
      "typeVersion": 1,
      "position": [
        -40,
        80
      ],
      "id": "ace03fb5-0afe-4333-9ef4-d05aa66af63d",
      "name": "Ollama Chat Model",
      "credentials": {
        "ollamaApi": {
          "id": "GoOYXY8f0AzN01pf",
          "name": "Ollama account"
        }
      }
    },
    {
      "parameters": {},
      "type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
      "typeVersion": 1.3,
      "position": [
        80,
        80
      ],
      "id": "0eb193df-d127-4d7f-b44e-c1fddcb51187",
      "name": "Window Buffer Memory"
    },
    {
      "parameters": {},
      "type": "@n8n/n8n-nodes-langchain.toolCalculator",
      "typeVersion": 1,
      "position": [
        260,
        120
      ],
      "id": "03e837b2-1730-4287-b8fe-73cc548a79a7",
      "name": "Calculator"
    },
    {
      "parameters": {
        "resource": "videoCategory",
        "regionCode": "US",
        "returnAll": "={{ /*n8n-auto-generated-fromAI-override*/ $fromAI('Return_All', ``, 'boolean') }}"
      },
      "type": "n8n-nodes-base.youTubeTool",
      "typeVersion": 1,
      "position": [
        560,
        140
      ],
      "id": "c74c0c69-46f3-480c-a305-5fef3cf33769",
      "name": "YouTube Video Category",
      "credentials": {
        "youTubeOAuth2Api": {
          "id": "Ip3U0whRHOzI55MM",
          "name": "YouTube account"
        }
      }
    },
    {
      "parameters": {
        "resource": "video",
        "returnAll": "={{ /*n8n-auto-generated-fromAI-override*/ $fromAI('Return_All', ``, 'boolean') }}",
        "filters": {
          "q": "={{ /*n8n-auto-generated-fromAI-override*/ $fromAI('Query', ``, 'string') }}"
        },
        "options": {}
      },
      "type": "n8n-nodes-base.youTubeTool",
      "typeVersion": 1,
      "position": [
        420,
        140
      ],
      "id": "842650df-8b20-43ba-81a8-53f7618ff7b5",
      "name": "YouTube Video",
      "credentials": {
        "youTubeOAuth2Api": {
          "id": "Ip3U0whRHOzI55MM",
          "name": "YouTube account"
        }
      }
    },
    {
      "parameters": {
        "toolDescription": "Use this tool to retrieve all comment threads associated with a particular video. The request's {videoId} parameter identifies the video.\n\nWhat you specify for {part} determines what portions of data is returned.",
        "url": "https://youtube.googleapis.com/youtube/v3/commentThreads",
        "authentication": "predefinedCredentialType",
        "nodeCredentialType": "youTubeOAuth2Api",
        "sendQuery": true,
        "parametersQuery": {
          "values": [
            {
              "name": "part",
              "valueProvider": "fieldValue",
              "value": "{part}"
            },
            {
              "name": "maxResults",
              "valueProvider": "fieldValue",
              "value": "{maxResults}"
            },
            {
              "name": "textFormat",
              "valueProvider": "fieldValue",
              "value": "{textFormat}"
            },
            {
              "name": "videoId",
              "valueProvider": "fieldValue",
              "value": "{videoId}"
            },
            {
              "name": "searchTerms",
              "valueProvider": "fieldValue",
              "value": "{searchTerms}"
            }
          ]
        },
        "placeholderDefinitions": {
          "values": [
            {
              "name": "part",
              "description": "The part parameter specifies a comma-separated list of one or more commentThread resource properties that the API response will include. The possible values are: id, replies, snippet",
              "type": "string"
            },
            {
              "name": "maxResults",
              "description": "The maxResults parameter specifies the maximum number of items that should be returned in the result set. Acceptable values are 1 to 100, inclusive. The default value is 20.",
              "type": "number"
            },
            {
              "name": "textFormat",
              "description": "Set this parameter's value to html or plainText to instruct the API to return the comments left by users in html formatted or in plain text. The default value is html. Acceptable values are: html or plainText",
              "type": "string"
            },
            {
              "name": "videoId",
              "description": "The videoId parameter instructs the API to return comment threads associated with the specified video ID",
              "type": "string"
            },
            {
              "name": "searchTerms",
              "description": "searchTerms is an optional parameter that instructs the API to limit the API response to only contain comments that contain the specified search terms.",
              "type": "string"
            }
          ]
        },
        "optimizeResponse": true
      },
      "type": "@n8n/n8n-nodes-langchain.toolHttpRequest",
      "typeVersion": 1.1,
      "position": [
        800,
        140
      ],
      "id": "dc691cc9-92c3-4300-9d6b-49757d5518c9",
      "name": "Youtube Top Level Comments",
      "credentials": {
        "youTubeOAuth2Api": {
          "id": "Ip3U0whRHOzI55MM",
          "name": "YouTube account"
        }
      }
    }
  ],
  "connections": {
    "When chat message received": {
      "main": [
        [
          {
            "node": "AI Agent",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Ollama Chat Model": {
      "ai_languageModel": [
        [
          {
            "node": "AI Agent",
            "type": "ai_languageModel",
            "index": 0
          }
        ]
      ]
    },
    "Window Buffer Memory": {
      "ai_memory": [
        [
          {
            "node": "AI Agent",
            "type": "ai_memory",
            "index": 0
          }
        ]
      ]
    },
    "Calculator": {
      "ai_tool": [
        [
          {
            "node": "AI Agent",
            "type": "ai_tool",
            "index": 0
          }
        ]
      ]
    },
    "YouTube Video Category": {
      "ai_tool": [
        [
          {
            "node": "AI Agent",
            "type": "ai_tool",
            "index": 0
          }
        ]
      ]
    },
    "YouTube Video": {
      "ai_tool": [
        [
          {
            "node": "AI Agent",
            "type": "ai_tool",
            "index": 0
          }
        ]
      ]
    },
    "Youtube Top Level Comments": {
      "ai_tool": [
        [
          {
            "node": "AI Agent",
            "type": "ai_tool",
            "index": 0
          }
        ]
      ]
    }
  },
  "pinData": {},
  "meta": {
    "templateCredsSetupCompleted": true,
    "instanceId": "0169febd24d7480961938a4072c5359fc3e91cbb6c8e783eb07e7879a4317d5f"
  }
}

Share the output returned by the last node

The output of the chat trigger node is this:

[
{}
]
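For comparison, when the trigger does work (e.g. with the HTTP Request Tool disconnected), the item looks roughly like this (values are placeholders and the exact field names may vary slightly between versions):

[
  {
    "action": "sendMessage",
    "sessionId": "example-session-id",
    "chatInput": "what are the top comments on this video?"
  }
]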

Information on your n8n setup

  • n8n version: 1.78.1
  • Database (default: SQLite): Whatever the default is (I guess SQLite)
  • n8n EXECUTIONS_PROCESS setting (default: own, main): Not sure what that means, so I guess it’s the default.
  • Running n8n via (Docker, npm, n8n cloud, desktop app): npm
  • Operating system: Win11

Debug Information:


core

  • n8nVersion: 1.78.1
  • platform: npm
  • nodeJsVersion: 20.18.3
  • database: sqlite
  • executionMode: regular
  • concurrency: -1
  • license: community
  • consumerId: unknown

storage

  • success: all
  • error: all
  • progress: false
  • manual: true
  • binaryMode: memory

pruning

  • enabled: true
  • maxAge: 336 hours
  • maxCount: 10000 executions

client

  • userAgent: mozilla/5.0 (windows nt 10.0; win64; x64) applewebkit/537.36 (khtml, like gecko) chrome/133.0.0.0 safari/537.36
  • isTouchDevice: false

Generated at: 2025-02-21T08:20:39.809Z


Hello, did you try other models like OpenAI’s or Anthropic’s? They are really good with tools!
If it works with those models, try refining the descriptions of your tools and adding a system prompt with XML formatting, for example. I’d also recommend these prompt engineering tips by OpenAI.
Hope this helps!
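For instance, a system prompt structured with XML-style tags could look something like this (just an illustration I made up, not something from the workflow above):

<role>You are a personal assistant that answers questions about YouTube videos.</role>
<tools>Use the YouTube Video tool to search for videos and the Youtube Top Level Comments tool to fetch comments for a given videoId.</tools>
<rules>Always call a tool before answering. If a tool call fails, say so instead of guessing.</rules>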

I’m trying to build an Ollama-based solution, so I can’t use those third-party APIs.

I did notice that when I disconnect the HTTP Request Tool, the chat trigger correctly sets the output. When I reconnect the tool, it becomes empty again. This seems like a bug to me: a tool on the next node shouldn’t affect whether the previous node has an output, right?

Version: 1.79.3
Hey, I took your workflow and I think I figured out the issue: you put an expression in the “Source for Prompt” field, but the AI Agent node only has two options for that field (take from the connected chat trigger / define below). You need to set “Source for Prompt” to “Define below” and put your expression in the “Text” field.
Your node:


Correct node:

Here is the workflow with the updated node (I just used OpenAI as the LLM, but you can change it):
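The relevant part of the AI Agent node ends up roughly like this (a sketch only; “define” is my guess at the stored value behind the “Define below” option, so double-check it against your own export):

{
  "parameters": {
    "promptType": "define",
    "text": "=You are an intelligent personal assistant. Your role is to use Youtube APIs to satisfy the following request:\n\n{{ $json.chatInput }}",
    "options": {}
  },
  "type": "@n8n/n8n-nodes-langchain.agent",
  "typeVersion": 1.7,
  "name": "AI Agent"
}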

If it still doesn’t work, try updating n8n (I’m on 1.79.3). Hope this works!

