HTTP request stream

How can I make the HTTP Request node accept a streaming response?

I’m trying to use the HTTP Request node to query the GCP Llama 3.2 API service.
With a curl request, the response looks like this:

ta: {"choices":[{"delta":{"content":"Once upon a time, in a small village nestled between two great mountains, there","role":"assistant"},"index":0,"logprobs":null}],"created":1727443031,"id":"2024-09-27|06:17:11.418330-07|2.65.8.115|-1089744740","model":"meta/llama3-405b-instruct-maas","object":"chat.completion.chunk","system_fingerprint":""}

data: {"choices":[{"delta":{"content":" lived a young girl named Luna. Luna was a curious and adventurous child, with","role":"assistant"},"index":0,"logprobs":null}],"created":1727443031,"id":"2024-09-27|06:17:11.418330-07|2.65.8.115|-1089744740","model":"meta/llama3-405b-instruct-maas","object":"chat.completion.chunk","system_fingerprint":""}

....
(many similar chunks omitted)
....

data: {"choices":[{"delta":{"content":" and determination was passed down through generations, inspiring others to protect and preserve the beauty","role":"assistant"},"index":0,"logprobs":null}],"created":1727443031,"id":"2024-09-27|06:17:11.418330-07|2.65.8.115|-1089744740","model":"meta/llama3-405b-instruct-maas","object":"chat.completion.chunk","system_fingerprint":""}

data: {"choices":[{"delta":{"content":" and magic of the world around them.","role":"assistant"},"index":0,"logprobs":null}],"created":1727443031,"id":"2024-09-27|06:17:11.418330-07|2.65.8.115|-1089744740","model":"meta/llama3-405b-instruct-maas","object":"chat.completion.chunk","system_fingerprint":""}

data: {"choices":[{"delta":{"content":"","role":"assistant"},"finish_reason":"stop","index":0,"logprobs":null}],"created":1727443031,"id":"2024-09-27|06:17:11.418330-07|2.65.8.115|-1089744740","model":"meta/llama3-405b-instruct-maas","object":"chat.completion.chunk","system_fingerprint":"","usage":{"completion_tokens":704,"prompt_tokens":4,"total_tokens":708}}

data: [DONE]

How can I make the HTTP Request node wait for the end of the response, i.e. data: [DONE]?
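
For context, outside n8n the stream can be consumed incrementally, for example with fetch in Node 18+. A minimal hypothetical sketch (the endpoint and token are placeholders, assuming Vertex AI's OpenAI-compatible chat completions API):

```
// Hypothetical sketch (Node 18+): reading the SSE stream chunk by chunk.
// ENDPOINT and TOKEN are placeholders, not real values.
const ENDPOINT = 'https://REGION-aiplatform.googleapis.com/...'; // placeholder
const TOKEN = 'ya29....'; // placeholder OAuth access token

(async () => {
  const res = await fetch(ENDPOINT, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'meta/llama-3.2-90b-vision-instruct-maas',
      stream: true, // ask the API for chunked SSE output
      messages: [{ role: 'user', content: 'Tell me a story' }],
    }),
  });

  // res.body is async-iterable in Node, so each "data: {...}" event
  // can be handled as soon as it arrives instead of after the whole
  // response has finished.
  const decoder = new TextDecoder();
  for await (const chunk of res.body) {
    process.stdout.write(decoder.decode(chunk, { stream: true }));
  }
})();
```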

(workflow embed)

Share the output returned by the last node

[
  {
    "data": "data: {\"choices\":[{\"delta\":{\"content\":\"Once upon a time, in a small village nestled in the rolling hills of T\",\"role\":\"assistant\"},\"index\":0,\"logprobs\":null}],\"created\":1727443528,\"id\":\"2024-09-27|06:25:28.175470-07|7.229.140.102|1361085055\",\"model\":\"meta/llama-3.2-90b-vision-instruct-maas\",\"object\":\"chat.completion.chunk\",\"system_fingerprint\":\"\"}\n\ndata: {\"choices\":[{\"delta\":{\"content\":\"uscany, there was a tiny shop called \\\"Mirabel's Marvels.\\\"\",\"role\":\"assistant\"},\"index\":0,\"logprobs\":null}],\"created\":1727443528,\"id\":\"2024-09-27|06:25:28.175470-07|7.229.140.102|1361085055\",\"model\":\"meta/llama-3.2-90b-vision-instruct-maas\",\"object\":\"chat.completion.chunk\",\"system_fingerprint\":\"\"}\n\ndata: {\"choices\":[{\"delta\":{\"content\":\" The shop was run by a kind-hearted\",\"role\":\"assistant\"},\"finish_reason\":\"length\",\"index\":0,\"logprobs\":null}],\"created\":1727443528,\"id\":\"2024-09-27|06:25:28.175470-07|7.229.140.102|1361085055\",\"model\":\"meta/llama-3.2-90b-vision-instruct-maas\",\"object\":\"chat.completion.chunk\",\"system_fingerprint\":\"\",\"usage\":{\"completion_tokens\":41,\"prompt_tokens\":4,\"total_tokens\":45}}\n\ndata: [DONE]\n\n"
  }
]

Information on your n8n setup

  • n8n version: 1.60.1
  • Database: postgres
  • n8n EXECUTIONS_PROCESS setting: regular
  • Running n8n via: k8s
  • Operating system: n/a

It looks like your topic is missing some important information. Could you provide the following, if applicable?

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

Welcome to the community, @grommir!

Tip for sharing information

Pasting your n8n workflow


Make sure to copy your n8n workflow and paste it into a code block, i.e. between a pair of triple backticks. You can also do this by clicking </> (preformatted text) in the editor and pasting your workflow there.

```
<your workflow>
```

The same applies to any JSON output you would like to share with us.


I’m afraid the HTTP Request node does not support streaming HTTP responses. You would have to use dedicated AI nodes or other techniques to take advantage of streaming.
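
That said, since the node buffers the whole SSE transcript into one string (as in the output you shared above), you can still recover the final text after the request completes by parsing it in a Code node. A minimal sketch, assuming the raw transcript arrives in each item's data field:

```
// n8n Code node ("Run Once for All Items") - a hypothetical sketch.
// Assumes the HTTP Request node returned the raw SSE transcript as a
// single string in each item's `data` field, as in the output above.
const results = [];

for (const item of $input.all()) {
  const raw = item.json.data ?? '';
  let text = '';

  // Each SSE event is a line of the form "data: <json>"; the stream
  // ends with the sentinel "data: [DONE]".
  for (const line of raw.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed.startsWith('data:')) continue;

    const payload = trimmed.slice('data:'.length).trim();
    if (payload === '[DONE]') break; // end-of-stream sentinel

    const chunk = JSON.parse(payload);
    // Concatenate the delta content of the first choice, if present.
    text += chunk.choices?.[0]?.delta?.content ?? '';
  }

  results.push({ json: { text } });
}

return results;
```

This won't give you token-by-token streaming, but it does reassemble the complete message once data: [DONE] has arrived.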

I tried to use the AI Agent with the Google Vertex Chat Model, but it doesn’t seem to support meta/llama-3.2-90b-vision-instruct-maas.

Error message from AI Agent
{
  "errorMessage": "Error in sub-node ‘Google Vertex Chat Model’",
  "errorDescription": "Unsupported model",
  "errorDetails": {},
  "n8nDetails": {
    "nodeName": "Google Vertex Chat Model",
    "nodeType": "@n8n/n8n-nodes-langchain.lmChatGoogleVertex",
    "nodeVersion": 1,
    "itemIndex": 0,
    "time": "9/28/2024, 3:02:58 PM",
    "n8nVersion": "1.60.1 (Self Hosted)",
    "binaryDataMode": "default",
    "stackTrace": [
      "NodeOperationError: Error in sub-node Google Vertex Chat Model",
      "    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/NodeExecuteFunctions.js:1849:19",
      "    at processTicksAndRejections (node:internal/process/task_queues:95:5)",
      "    at async Promise.all (index 0)",
      "    at Object.getInputConnectionData (/usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/NodeExecuteFunctions.js:1856:19)",
      "    at Object.getInputConnectionData (/usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/NodeExecuteFunctions.js:2297:24)",
      "    at Object.toolsAgentExecute (/usr/local/lib/node_modules/n8n/node_modules/@n8n/n8n-nodes-langchain/dist/nodes/agents/Agent/agents/ToolsAgent/execute.js:57:19)",
      "    at Object.execute (/usr/local/lib/node_modules/n8n/node_modules/@n8n/n8n-nodes-langchain/dist/nodes/agents/Agent/Agent.node.js:351:20)",
      "    at Workflow.runNode (/usr/local/lib/node_modules/n8n/node_modules/n8n-workflow/dist/Workflow.js:722:19)",
      "    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/WorkflowExecute.js:670:51",
      "    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/WorkflowExecute.js:1100:20"
    ]
  }
}

What other options do I have?
