Sentiment analysis mostly works, sometimes fails - with local Ollama

Describe the problem/error/question

I’ve built a sentiment analysis workflow that checks a short string / headline (max. ca. 8–10 words) and returns positive / neutral / negative.
I’ve connected it to llama3.2 running on my local Ollama instance.
When I run it on a set of 10 test strings in a loop, it works several times in a row and then suddenly fails on one of them.
Running it again sometimes makes it work.

What is the error message (if any)?

Error during parsing of LLM output, please check your LLM model and configuration

Error details
Other info

Item Index

0

Node type

@n8n/n8n-nodes-langchain.sentimentAnalysis

Node version

1 (Latest)

n8n version

1.90.2 (Self Hosted)

Time

17/05/2025, 12:33:16

Stack trace

NodeOperationError: Error during parsing of LLM output, please check your LLM model and configuration
    at ExecuteContext.execute (/usr/local/lib/node_modules/n8n/node_modules/@n8n/n8n-nodes-langchain/dist/nodes/chains/SentimentAnalysis/SentimentAnalysis.node.js:212:17)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at WorkflowExecute.runNode (/usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/execution-engine/workflow-execute.js:681:27)
    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/execution-engine/workflow-execute.js:915:51
    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/execution-engine/workflow-execute.js:1246:20

Please share your workflow

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 1.90.2
  • Database (default: SQLite): Docker AI starter kit - not sure, Postgres or SQLite
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker (AI starter kit)
  • Operating system: Ubuntu 25

With Ollama, which model are you using? Have you seen more consistent results with GPT models? I find that unless you have something like a 32 GB GPU, local models don’t respond consistently enough.

llama3.2, the 3b version, on a laptop without a dedicated GPU.

qwen3:4b seems to be performing better.

The issue is that it is a thinking model and consumes a lot of CPU. On the CLI I can use /nothinking and things work faster.
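For reference, here is a minimal sketch of how that directive could be passed through Ollama’s HTTP API instead of the CLI. The `/api/generate` endpoint and the `model`/`prompt`/`stream` fields are standard Ollama API; whether the model honours the same inline no-think switch mentioned above is an assumption.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint

def build_payload(model: str, text: str, no_think: bool = True) -> dict:
    """Build an Ollama /api/generate request body for sentiment classification.

    If no_think is set, the thinking-disable directive is prepended to the
    prompt (assumption: the model honours the same inline switch as on the CLI).
    """
    prompt = (
        "Classify the sentiment of this headline as positive, neutral or "
        f"negative. Answer with one word only: {text}"
    )
    if no_think:
        prompt = "/nothinking\n" + prompt
    return {"model": model, "prompt": prompt, "stream": False}

def classify(model: str, text: str) -> str:
    """Send the request to the local Ollama server and return the raw reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

This bypasses the n8n node entirely, which can also help answer the question below about seeing the model’s raw output.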

Isn’t there any option to make it run well on my laptop?

16 × AMD Ryzen 7 PRO 6850U with Radeon Graphics and 32 GB RAM, and it’s still not enough for AI?
Actually, if I ask it to generate some text, it works quite well.
What is different about text-to-sentiment?
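One likely difference: free-text generation accepts whatever the model says, while the sentiment node has to parse the reply into an exact structure, so any extra chatter (“Sure! The sentiment is…”) can trigger the parsing error even though the model “understood” the task. A hedged sketch of a tolerant fallback parser (a hypothetical helper, not the n8n node’s actual code) illustrates the idea:

```python
import re
from typing import Optional

LABELS = ("positive", "neutral", "negative")

def extract_sentiment(raw: str) -> Optional[str]:
    """Pull a sentiment label out of free-form model output.

    Strict parsing requires the model to answer with exactly one label;
    this tolerant version scans the reply for the first recognised label
    instead, so surrounding chatter no longer causes a failure.
    """
    match = re.search(r"\b(positive|neutral|negative)\b", raw.lower())
    return match.group(1) if match else None
```

A small model that passes this check most of the time but occasionally emits no recognisable label at all would produce exactly the intermittent behaviour described above.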

Is there an option to see the output of the model in n8n?

Not a bad spec tbh, but I’m not sure you’re using GPU acceleration; there may be a way around that with Ollama. Honestly, though, I’ve been testing this and I see really inconsistent behaviour with local Ollama models using llama3.2. As soon as I switch to OpenAI it works without headaches. Text generation is much simpler, but it needs a GPU to really provide good output.

As soon as I use OpenAI, it works first time, no issues.

I managed to get it working by using a Loop and a Wait node with some specific settings.
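For anyone else hitting this, the retry idea behind that Loop + Wait setup can be sketched in plain Python (hypothetical helper names; the actual node settings are configured in the n8n UI):

```python
import time

def classify_with_retry(classify, text, retries=3, wait_seconds=2.0):
    """Retry a flaky classifier, waiting between attempts.

    Mirrors a Loop + Wait pattern: call the classifier, and if its output
    cannot be parsed (signalled here by ValueError), wait and try again,
    up to `retries` attempts. `classify` is any callable taking the text
    and returning a sentiment label.
    """
    last_error = None
    for attempt in range(retries):
        try:
            return classify(text)
        except ValueError as err:  # parse failure from the model output
            last_error = err
            if attempt < retries - 1:
                time.sleep(wait_seconds)
    raise last_error
```

Since the failures are intermittent rather than systematic, a second or third attempt usually succeeds, which matches the “running it again might make it work” behaviour from the original post.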