I’m working on a workflow that uses Claude through the AI node (Basic LLM Chain) to process Slack feedback data and generate a structured product report. The workflow runs successfully, but I’ve noticed the AI-generated report is getting cut off partway through. Specifically, it stops mid-sentence or mid-section, and large portions of the expected output are missing.
I’m not sure whether this is an output length limitation, a token restriction, or an issue with how the AI node handles longer prompts and outputs. I’d really appreciate any help troubleshooting this!
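For context, here’s a minimal sketch of what I suspect is happening (not my actual workflow): calling Claude’s Messages API directly with a small max_tokens cap produces exactly this kind of mid-sentence cutoff, and the response’s stop_reason comes back as "max_tokens". I’m assuming the chat model behind the Basic LLM Chain applies a similar cap, but I haven’t confirmed that; the model name and prompt below are just placeholders.

```ts
// Minimal sketch: reproduce the truncation by capping max_tokens on a direct
// Anthropic Messages API call. Assumes Node 18+ (global fetch) and an
// ANTHROPIC_API_KEY environment variable; the model name is only an example.
async function main() {
  const response = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": process.env.ANTHROPIC_API_KEY ?? "",
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify({
      model: "claude-3-5-sonnet-latest",
      max_tokens: 100, // deliberately small: output stops mid-sentence around here
      messages: [
        {
          role: "user",
          content:
            "Summarize this Slack feedback into a structured product report: ...",
        },
      ],
    }),
  });

  const data = await response.json();
  // When the cap is hit, stop_reason is "max_tokens" and the text ends abruptly,
  // which looks just like the truncated report I'm getting from the workflow.
  console.log(data.stop_reason);
  console.log(data.content?.[0]?.text);
}

main().catch(console.error);
```

If the node (or the connected chat model) has a low default for this setting, that would explain the missing sections, but I don’t know where that cap is applied in my setup.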
• n8n version: 1.77.4
• Source code: GitHub - n8n-io/n8n
• License: Sustainable Use License + n8n Enterprise License
• Instance ID: 26779b8834c01b5163e28e7fc52849b1bce33971f687afd672a0725f96e93878
• Running n8n via: Web view
• Database: Default (SQLite)
• n8n EXECUTIONS_PROCESS setting: Default (own)
• Operating system: macOS 15.4.1