Trouble Referencing LLM output

Describe the problem/error/question

I am working on my first project and would appreciate some help. I am building an AI Docker log analyzer. I have one output from my LLM per container (currently working with a single container to streamline debugging). In this node I am trying to re-associate the container name with each output from the LLM. However, I get the error message indicated below. I can tell the problem is with the output from the Analyze with Ollama node, since I deleted everything else and still got the error. Any help would be much appreciated.

What is the error message (if any)?

A 'json' property isn't an object [item 0]

In the returned data, every key named 'json' must point to an object.

Please share your workflow

    {
      "parameters": {
        "method": "POST",
        "url": "http://X.X.X.X:11435/api/generate",
        "sendBody": true,
        "bodyParameters": {
          "parameters": [
            {
              "name": "model",
              "value": "ministral-3:3b"
            },
            {
              "name": "prompt",
              "value": "=You are a Docker log analyzer. You MUST analyze logs from this EXACT container: {{ $('Remove Duplicate Log Lines').item.json.containerName }}\n\nTotal containers to analyze: 1\n\nCRITICAL INSTRUCTIONS:\n1. You MUST create a section the containers listed above\n2. The container MUST appear in your analysis, even if it shows \"No issues detected\"\n3. Search the logs for the container name to find its specific log entries\n4. If a container has no log entries, state \"No logs found for this container in the time period\"\n5. No other container other than {{ $('Remove Duplicate Log Lines').item.json.containerName }} should be analysized\n\nUse this EXACT format for your anyalysis of the container:\n\n---\n## Container: [EXACT CONTAINER NAME FROM ABOVE]\n**Health Status:** [Healthy | Warning | Critical | No Issues Detected | No Logs Found]\n\n**Critical Issues:**\n[List specific errors found in THIS container's logs, or write \"None detected\"]\n\n**Warnings:**\n[List specific warnings found in THIS container's logs, or write \"None detected\"]\n\n**Notable Events:**\n[List important events from THIS container's logs, or write \"Normal operation\"]\n\n**Recommendations:**\n[List specific actionable items for THIS container, or write \"No action required\"]\n\n---\n\nContainer Logs:\n{{ $('Remove Duplicate Log Lines').item.json.data }}\n\nREMINDER: NO OTHER container other than {{ $('Remove Duplicate Log Lines').item.json.containerName }} should be analysized"
            }
          ]
        },
        "options": {
          "timeout": 360000
        }
      },
      "name": "Analyze with Ollama",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 4.2,
      "position": [
        1264,
        -16
      ],
      "id": "ebf88050-0323-451e-94f1-d592f1f7b161",
      "executeOnce": false
    },
    {
      "parameters": {
        "mode": "runOnceForEachItem",
        "jsCode": "\nconst containerName = $('Remove Duplicate Log Lines').item.json.containerName;\n\nconst analysisText = $('Analyze with Ollama').item.json.data;\n\nreturn [{\n  json: {\n    containerName: containerName,\n    analysis: analysisText,\n    timestamp: new Date().toISOString()\n  }\n}];"
      },
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [
        1712,
        -16
      ],
      "id": "2245e2b3-0bd7-47a9-b61b-58ed9be7d7c9",
      "name": "Extract Container Analysis"
    },

Share the output returned by the last node

[
  {
    "data": "{\"model\":\"ministral-3:3b\",\"created_at\":\"2026-02-02T16:53:48.139113355Z\",\"response\":\"---\\n\",\"done\":false}\n{\"model\":\"ministral-3:3b\",\"created_at\":\"2026-02-02T16:53:48.191571596Z\",\"response\":\"##\",\"done\":false}\n{\"model\":\"ministral-3:3b\",\"created_at\":\"2026-02-02T16:53:48.240748094Z\",\"response\":\" Container\",\"done\":false}\n{\"model\":\"ministral-3:3b\",\"created_at\":\"2026-02-02T16:53:48.289616711Z\",\"response\":\":\",\"done\":false}\n ... (stream continues, one JSON line per token) ... \n{\"model\":\"ministral-3:3b\",\"created_at\":\"2026-02-02T16:54:05.471890258Z\",\"response\":\"\",\"done\":true,\"done_reason\":\"stop\",\"context\":[17,4568,1584,29478,2784,1045,1051,1045,1051,1066,46839,14256,7317,6850,5450,87334,1278,14256,7317,6850,27175,1524,1032,1053,1054,1055,1057,32891,10714,14056,1321,39184,6255,76559,18853,8898,34856,6949,1626,1052,1046,1603,100107,17616,22741,5450,75998,17616,1032,1051,1395,14656,1294,1278,6033,2663,1505,11024,1693,65914,1408,8977,7172,1626,8129],\"total_duration\":18587356005,\"load_duration\":308866848,\"prompt_eval_count\":1517,\"prompt_eval_duration\":934565161,\"eval_count\":360,\"eval_duration\":16945734671}\n"
  }
]

Information on your n8n setup

  • n8n version: Version 2.4.8
  • Database (default: SQLite): default
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker
  • Operating system:

Hi @teadragon7

So what's happening in your workflow is two different behaviors colliding: how n8n expects data and how the Ollama API returns it.

The “A ‘json’ property isn’t an object” error pops up because somewhere in the flow, n8n is receiving an item where json isn’t actually an object. This usually happens before your Code node even runs — it’s about the format of the incoming data.

And here’s the real culprit:

Your Analyze with Ollama node is returning the response as a stream. You can see this clearly in the output you shared (it’s not a single JSON object). It’s a massive string with multiple JSON lines stacked together, each representing a “chunk” of the model’s response (done: false, done: true, etc.).

For Ollama, that’s normal. For n8n, that’s just plain text, not structured JSON.
So when you try to access something like:

$('Analyze with Ollama').item.json.data

You’re not getting a clean object with defined fields. You’re getting a raw string full of JSON chunks separated by \n. Any attempt to treat that as a “normal” object causes n8n to throw that error.

Easiest fix: If you don't need streaming, just turn it off. Add stream: false to the body of your HTTP Request node. Once you do that, Ollama will return a single JSON object with the full response in the response field. Your Code node will work exactly as expected (no workarounds needed).

If you want to keep streaming: Then you’ll need to treat the response as text and parse it manually (line by line) to extract just the response values and combine them.
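The reassembly itself is only a few lines. A minimal sketch in plain JavaScript (usable in an n8n Code node; the ndjson argument stands in for the data string from the HTTP Request node):

```javascript
// Combine an Ollama streaming response (newline-delimited JSON) into one string.
// Each line is a chunk like {"response":" token","done":false}; the full text
// is the concatenation of all "response" fields.
function combineOllamaStream(ndjson) {
  return ndjson
    .split("\n")
    .filter((line) => line.trim() !== "")      // skip trailing blank lines
    .map((line) => JSON.parse(line).response)  // pull out each chunk's text
    .join("");
}

// Example with two fake chunks:
const sample = '{"response":"Hello","done":false}\n{"response":" world","done":true}\n';
console.log(combineOllamaStream(sample)); // prints "Hello world"
```

The final done: true chunk carries an empty response field, so it concatenates harmlessly.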

Either way, just make sure whatever you return from your Code node is always an object inside json. That’ll clear up the structural error.

Hope that helps!

Thank you for the quick reply! I don't particularly need a stream; however, I have been unable to add stream: false to the body of my HTTP Request node without getting the following error:

Bad request - please check your parameters

json: cannot unmarshal string into Go struct field GenerateRequest.stream of type bool

I have tried looking up a solution and trying all variations of 'false', "false", etc., but I am unable to resolve the error. Do you have advice on this?

@teadragon7

You're very close; the issue isn't the value of false, it's how the HTTP Request body is being sent.

In n8n, when you use Body Parameters (name/value pairs), everything is sent as a string. That’s why Ollama keeps receiving "stream": "false" and throws that error.

What to do:

Open your HTTP Request node, enable Send Body, and set Body Content Type to JSON. Don’t use Body Parameters. Instead, in the JSON Body field, send the full object:


{
  "model": "ministral-3:3b",
  "prompt": "={{ $json.prompt }}",
  "stream": false
}

Save and re-run the workflow.

After this, Ollama receives a real boolean, streaming is disabled, the response becomes a single JSON object, and your Code node can safely read $('Analyze with Ollama').item.json.response.

Trying different variations of "false" won’t work as long as you’re using Body Parameters — switching to a JSON body is the key.

Thanks again for the support. I was able to get stream: false working following your instructions; however, my original error still persists. My node is now:

const containerName = $('Remove Duplicate Log Lines').item.json.containerName;
const analysisText = $('Analyze with Ollama').item.json.response;

return [{
  json: {
    containerName: containerName,
    analysis: analysisText,
    timestamp: new Date().toISOString()
  }
}];

I tried using only $('Remove Duplicate Log Lines').item.json.containerName and only $('Analyze with Ollama').item.json.response. In both cases I got the same error.

Hi @teadragon7, I believe you don't need a Code node; keep it simple.

You just need to parse the JSON using parseJson() in a Set node:

{{ $json.data.parseJson() }}

For example:

After that, you can extract whatever you need easily..
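Under the hood that expression is equivalent to JSON.parse() on the string. A standalone sketch of the same transformation (assuming stream: false, so the stored string holds a single JSON object):

```javascript
// With streaming off, Ollama returns one JSON object; if a node stores it as a
// string (here under `data`), parsing turns it back into a real object whose
// fields (response, model, ...) can be referenced downstream.
const data = '{"model":"ministral-3:3b","response":"## Container: Proxmox_n8n","done":true}';
const parsed = JSON.parse(data);
console.log(parsed.response); // prints "## Container: Proxmox_n8n"
```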

Thank you for your advice. I tried that, and when I referred to the resulting objects in the following Code node I got the same error. However, I realized that it was an issue with that particular Code node (shown in the original post), as the following Code node was able to use the output of the LLM from the HTTP Request.
With this newfound success I took a shot at processing all the items in the flow rather than paring down to one for the sake of debugging. This then resulted in another "A 'json' property isn't an object [item 0]" error in an earlier node that previously didn't have this issue. See screenshot below:


This is the node that outputs into my “Analyze with Ollama” node.
Since I am running into this error repeatedly I was hoping that in addition to getting help in solving the error on this node in particular I could also get a general explanation of the error and why increasing the number of items being inputted into the node could cause this issue.

Thank you in advance.

I'm still confident that you can ignore the complexity of the Code node and replace it with Set nodes; it's much easier.

I don’t use Code nodes at all, except in rare cases or when working with an external library.

However, if you insist: the error message is already telling you exactly what's wrong.
And since you’re running the node in Run Once for Each Item mode, the return value should look something like this:

In general, I recommend reading more about data structures to understand the full picture.