AI Agent and OpenAI chat model time out

Describe the problem/error/question

I have an automation where I am sending a file to an AI Agent node and an OpenAI chat model using GPT-4.1 mini.

The file contains a variable amount of data, and I'm running into issues where the AI model times out. I increased the timeout to 120000 ms, but after 6 minutes the automation still times out.

The file in question has approximately 500 rows of data; all the AI is doing is validating the data to make sure all required fields are present.

Questions:

  1. Should the automation time out, or should the AI process the data faster?
  2. I inserted a Loop Over Items node and sent the items in batches of 75 to the AI Agent. The automation was able to complete, but it still took approximately 5 minutes.
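For reference, the batching that Loop Over Items does can also be sketched in a Code node if you ever need custom batch logic. This is just an illustration, not a replacement for the built-in node; the batch size of 75 matches the number mentioned above.

```javascript
// Split an array of items into batches of a given size.
// In an n8n Code node you would read items via $input.all();
// here it is a plain function so it can run anywhere.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}
```

With 500 rows and a batch size of 75 this yields 7 batches, the last one holding the remaining 50 rows.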

What is the error message (if any)?

The automation times out.

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

Share the output returned by the last node

The automation times out.

Information on your n8n setup

  • n8n version: 1.121.3
  • Database (default: SQLite): cloud default
  • n8n EXECUTIONS_PROCESS setting (default: own, main): cloud default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): cloud default
  • Operating system:

Any help is appreciated.

It's likely timing out because the workload is too much for the AI and/or the n8n instance. Even though it still took a while after you lowered the batch size, that's probably the best you will get. Does it need to be a file? Maybe you could store the data in a database instead.

Thanks,

It's a file upload automation for a client. The automation checks the file type, extracts the data, and then parses it to JSON to be sent back to their webhook URL for import.

It's a pretty straightforward automation; the AI Agent node and GPT model node just take forever.

Well, the file is quite large, so you can't improve it much. I would set the batch size to 50 so the AI doesn't skip over any rows. LLMs sometimes take the easy way out, so a lower batch size keeps each request small enough that the model is less likely to skip items.

Hi @nixed,

What kind of file are you reading? It might be more efficient to just build basic validation in n8n itself (for example in a Code node) instead of trying to push this to an LLM.
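Since the only task is checking that required fields are present, a Code node can do it deterministically and in milliseconds. A minimal sketch (the field names are placeholders; swap in the client's actual schema):

```javascript
// Required-field validation for parsed rows, suitable for an n8n Code node.
// REQUIRED_FIELDS is a placeholder list; replace with the real schema.
const REQUIRED_FIELDS = ['name', 'email', 'date'];

function validateRows(rows, requiredFields) {
  return rows.map((row, index) => {
    // A field counts as missing if it is absent, null, or an empty string.
    const missing = requiredFields.filter(
      (field) => row[field] === undefined || row[field] === null || row[field] === ''
    );
    return { index, valid: missing.length === 0, missing };
  });
}

// In an n8n Code node you would wire it up roughly like this:
// return validateRows($input.all().map((item) => item.json), REQUIRED_FIELDS)
//   .map((result) => ({ json: result }));
```

That way the LLM (and the timeout problem) drops out of the workflow entirely, and invalid rows come back with an explicit list of what's missing.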