Trying to use Loop Over items to avoid Timeout with OpenAI Request

As mentioned in the title, I’ve got an HTTP Request to OpenAI that keeps timing out, and I’m looking for a way to speed up the process. I saw a previous thread ([HTTP Request - timeout of 300000ms exceeded] Can I extend timeout?) that talked about using Loop Over Items as a way to batch the request.

However, that thread is closed, so I can’t reply in it.

Looking for some help to either use the Loop Over Items correctly OR find another way to avoid the timeout.

I think that your loop flow is wrong. I also call OpenAI through n8n, but the way I have it configured is with a “Wait” node between batches so that I can time them as per my requirements. I am sure you can do the same in the HTTP Request node’s settings too. This is my sample:

We have to connect the loop node’s end line back to the loop node itself, not to the node before it. You can adjust the timing of the requests to match your plan’s TPM (tokens per minute) limits.
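For reference, the splitting that Loop Over Items (the `splitInBatches` node) performs can be sketched in plain JavaScript. This is a hypothetical helper for illustration, not n8n’s actual internal code — the node hands downstream nodes one batch of items per iteration:

```javascript
// Sketch of the batching idea behind Loop Over Items:
// split the incoming items into fixed-size batches, then
// process one batch per loop iteration.
function splitIntoBatches(items, batchSize) {
  const batches = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

// Example: 5 form submissions, batches of 2
const batches = splitIntoBatches([1, 2, 3, 4, 5], 2);
// => [[1, 2], [3, 4], [5]]
```

With the loop wired back to itself, each batch goes through the Wait node and the OpenAI call before the next batch starts.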


Thanks for the speedy reply, Jayavel. I’m a little confused, though, as to why the Wait goes before the OpenAI call (in my case, the HTTP Request). Does this help with the batching I’m trying to achieve for this request?

Update:

Just ran 3 tests:
1. Loop Over Items connected (correctly): 84 seconds
2. Loop Over Items disconnected (previous setup): 180 seconds
3. Loop Over Items connected (correctly): 191 seconds

Initially I thought we had a time saving of 53%, only to be undone by the third test. That seems like a long time for an API call, no? Is this normal, or should I be expecting something closer to under 30 seconds?

Full disclosure I’m fairly new to all of this.

Appreciate any and all help.

Hey @First_Spark_Digital,

From your response, I am guessing you were able to mitigate the Timeout error but want to speed up the response times now.

Let me share my experience with this:

I can only make a set number of calls per minute for the workflows that I have set up and for the usage tier I have in OpenAI.

There are two ways (that I know of) to achieve this. One is to use the Wait node with n8n’s built-in OpenAI node. It is easy to manage, and based on your account’s tier limits in OpenAI, you can time the requests accordingly when you have multiple inputs that need to be processed.
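As a rough guide for picking the Wait duration, you can work backwards from your tier’s TPM limit. The numbers below are examples, not anyone’s actual tier — substitute your own limits:

```javascript
// Back-of-the-envelope spacing between OpenAI calls, derived from a
// tokens-per-minute (TPM) budget. Hypothetical helper for illustration.
function minSecondsBetweenCalls(tpmLimit, tokensPerCall) {
  // tpmLimit / tokensPerCall requests fit in one minute,
  // so space calls at least 60s divided by that rate apart.
  return (60 * tokensPerCall) / tpmLimit;
}

// e.g. a 30,000 TPM tier and ~3,500 tokens per call (2K in + 1.5K out)
minSecondsBetweenCalls(30000, 3500); // => 7 (seconds)
```

Rounding up and adding a little headroom (say, 8–10 seconds in the Wait node for this example) keeps you clear of 429 rate-limit errors.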

For example, I get at least 35 to 50 entries that must be sent to OpenAI to generate completions. I can send them in bulk, but sometimes the token count is significant and I get an error. That is when I started using this node to split them into separate calls and make the flow wait a few seconds before hitting OpenAI, to avoid rate-limit errors.

This has given consistent response times of under 1 minute from OpenAI for each call; they often come back in less than 40 seconds for 2K input tokens and 1K to 1.5K output tokens.

The other way is to use the HTTP Request node to handle the batching and delay.

To batch the inputs and delay each call:
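In export form (since screenshots don’t always survive), the relevant HTTP Request node settings live under Options → Batching. The fragment below is an example of roughly what that looks like in the workflow JSON — the values are placeholders, and the exact shape depends on the node version:

```json
"options": {
  "batching": {
    "batch": {
      "batchSize": 1,
      "batchInterval": 3000
    }
  }
}
```

With a batch size of 1 and an interval of 3000 ms, the node sends one item at a time and pauses 3 seconds between requests, which replaces the separate Wait node.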

Adding a retry step just to be sure:
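In case the screenshot doesn’t load: the retry idea can also be sketched as a Code-node helper. This is a hypothetical sketch of the pattern, not the exact node from the screenshot — `callOpenAi` is a placeholder for whatever makes the request (n8n nodes also have a built-in “Retry On Fail” setting that may be enough on its own):

```javascript
// Retry a flaky call up to maxRetries extra times, keeping the last error.
// In a real workflow, a Wait node (or the node's "Wait Between Tries"
// setting) would pause between attempts -- e.g. ~2 ** attempt seconds.
function retryWithBackoff(callOpenAi, maxRetries = 3) {
  let lastError;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return callOpenAi();
    } catch (err) {
      lastError = err; // e.g. a 429 rate-limit response
    }
  }
  throw lastError;
}
```

A retry with spacing like this turns transient 429s into slightly slower successes instead of failed executions.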

Also, I could see that batching the requests one at a time solved the issue for the other user (in the post you shared).

Maybe if you could share your workflow here, someone from n8n’s team can help further.


Yeah, thanks for all the help so far, Jayavel. Much appreciated.

I think that while Loop Over Items helps with the timeout, I’m not sure yet how it will work with the batching, because in theory I’ll only have one request sent to OpenAI at a time. If I get multiple hits to my website (intake form), then the batching would come into play, I think.

{
  "meta": {
    "instanceId": "cec86eeddef9ce9510df5fa3d831594fc68385b4da5fcc5a2c862e33863b9ffe"
  },
  "nodes": [
    {
      "parameters": {
        "method": "POST",
        "url": "https://api.openai.com/v1/chat/completions",
        "authentication": "predefinedCredentialType",
        "nodeCredentialType": "openAiApi",
        "sendBody": true,
        "specifyBody": "json",
        "jsonBody": "=",
        "options": {}
      },
      "id": "922f1a8e-096c-41a3-8d4b-a90057b0f22d",
      "name": "ChatGPT Prompt",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 4.1,
      "position": [2300, 580],
      "credentials": {
        "openAiApi": {
          "id": "annZJgpkrYsgn9Y1",
          "name": "OpenAi account"
        }
      }
    },
    {
      "parameters": {
        "jsCode": "let data = $input.all();\n\nconst removeProblematicCharacters = (text) => {\n  return text.replace(/“|\"|”/g, '\\\\\"');\n};\n\nlet result = [];\ndata.forEach((item, index) => {\n  console.log(\"keys are \", item.json);\n\n  let currentRes = {};\n  for (let key in item.json) {\n    let val = item.json[key];\n\n    let sanitized;\n    if (Array.isArray(val)) {\n      let joined = val.join(\" \");\n      sanitized = removeProblematicCharacters(joined);\n    } else {\n      sanitized = removeProblematicCharacters(val);\n    }\n\n    currentRes[key] = sanitized;\n  }\n\n  result.push(currentRes);\n});\n\nreturn result;"
      },
      "id": "1e5347ff-52a3-4b53-b912-b4b462553441",
      "name": "Code Cleanup 1",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [1700, 300]
    },
    {
      "parameters": {
        "mode": "runOnceForEachItem",
        "jsCode": "// this is the gpt 'content' value from the response of the http-request node\nlet content = $input[\"item\"][\"json\"][\"choices\"][0][\"message\"][\"content\"];\n\nreturn {\n  \"scores\": JSON.parse(content)[\"scores\"]\n};"
      },
      "id": "d3318543-ca7b-4806-b2e7-d1294c55f8c8",
      "name": "Code Cleanup 2",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [2120, 280]
    },
    {
      "parameters": {
        "options": {}
      },
      "id": "9f055919-a712-496e-8106-30e7ee517c83",
      "name": "Loop Over Items",
      "type": "n8n-nodes-base.splitInBatches",
      "typeVersion": 3,
      "position": [1900, 300]
    },
    {
      "parameters": {
        "amount": 3,
        "unit": "seconds"
      },
      "id": "0987df1e-6451-4334-b78f-a55a386fd645",
      "name": "Wait",
      "type": "n8n-nodes-base.wait",
      "typeVersion": 1,
      "position": [2120, 460],
      "webhookId": "ddb22e91-fbbe-4ffc-96ba-b093fb91f801"
    }
  ],
  "connections": {
    "ChatGPT Prompt": {
      "main": [
        [
          {
            "node": "Loop Over Items",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Code Cleanup 1": {
      "main": [
        [
          {
            "node": "Loop Over Items",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Loop Over Items": {
      "main": [
        [
          {
            "node": "Code Cleanup 2",
            "type": "main",
            "index": 0
          }
        ],
        [
          {
            "node": "Wait",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "Wait": {
      "main": [
        [
          {
            "node": "ChatGPT Prompt",
            "type": "main",
            "index": 0
          }
        ]
      ]
    }
  }
}

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.