Workflow keeps breaking :(

Hey!
I am trying to process a large dataset (1285 text claims) in n8n using multiple OpenAI (GPT) nodes (1 Agent, 1 HTTP Request), but my workflow keeps breaking after a certain number of items. No matter what I try, it never runs through completely: I constantly hit OpenAI’s token-based rate limits, and when that happens the nodes fail with 429 errors. I have tried adding Wait nodes, IF conditions, batching logic, and reducing output tokens, but it is still very hard to find a stable configuration. My main problem is that I cannot reliably control execution speed and rate limits in n8n for GPT-heavy workflows, so long runs keep aborting before completion.
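For what it's worth, the usual way to survive 429s is exponential backoff with jitter between retries; in n8n that calculation could live in a Code node feeding a Wait node. A minimal sketch, where the base delay and cap are illustrative assumptions rather than n8n or OpenAI defaults:

```javascript
// Sketch: exponential backoff with jitter for retrying after 429 errors.
// The base delay (1s) and cap (60s) are assumptions; tune them to your
// actual rate limits.
function backoffDelayMs(attempt, baseMs = 1000, maxMs = 60000) {
  // Double the delay on every failed attempt: 1s, 2s, 4s, ...
  const exp = baseMs * 2 ** attempt;
  // Cap the delay, then add up to 25% random jitter so parallel
  // branches don't all retry at the same instant.
  const capped = Math.min(exp, maxMs);
  return capped + Math.floor(Math.random() * capped * 0.25);
}
```

The output of a Code node like this could drive the Wait node's duration, so each retry waits longer than the last instead of hammering the API at a fixed interval.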

Describe the problem/error/question

What is the error message (if any)?

Please share your workflow

Share the output returned by the last node

Information on your n8n setup

  • n8n version:
  • Database (default: SQLite):
  • n8n EXECUTIONS_PROCESS setting (default: own, main):
  • Running n8n via (Docker, npm, n8n cloud, desktop app):
  • Operating system:

I’m still learning, but if you could share the exact error messages, it would help us help you.

Okay, I will when it happens the next time! I’m running it again right now.
I also hate that you can’t restart from the point where the error occurred. Why do I always have to start from scratch? Is there any way to avoid starting all over again?

Some things to try:

  • It looks like you’re already doing batch processing in n8n, but I’m not sure if it’s handled at the best point.
  • You’re also able to do batch processing on certain versions of ChatGPT.
  • Have you viewed the prior execution under ‘Executions’ and troubleshot from there?
  • You can save execution progress by going to your workflow settings and changing ‘Save execution progress’ to ‘Save’, which should prevent having to redo the execution from the beginning.
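On the batching point: one way to pick a batch size is to work backward from your tokens-per-minute budget rather than guessing. A rough sketch, where the TPM limit and the per-claim token estimate are assumptions you would replace with your real numbers:

```javascript
// Sketch: split claims into batches sized to fit a tokens-per-minute
// budget. The TPM limit (30000) and tokens-per-claim estimate (500)
// are assumptions; substitute the limits of your actual OpenAI tier.
function batchClaims(claims, tpmLimit = 30000, tokensPerClaim = 500) {
  const perBatch = Math.max(1, Math.floor(tpmLimit / tokensPerClaim));
  const batches = [];
  for (let i = 0; i < claims.length; i += perBatch) {
    batches.push(claims.slice(i, i + perBatch));
  }
  return batches; // then process one batch per minute, e.g. via a Wait node
}
```

Processing one such batch per minute keeps you under the token limit by construction, instead of relying on trial-and-error Wait durations.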

I cut out a part of my workflow in the top post! I’m trying to design a hybrid pipeline for fake news detection using BERT and GPT. I ran BERT on Colab and converted my results to a CSV, which I added here. Then I tried the loop workflow (batch size 1) to send each claim through the loop to get a fake news score from GPT. Afterwards I will try to combine that score with the BERT results. That’s why I did the batch at the beginning.

I’m a total beginner and I really don’t understand what you mean by:

  • Have you viewed the prior execution under ‘Executions’ and troubleshooted from there?

but I will for sure try the last bullet! Thanks a lot!

Thanks for the additional info. There are three primary tabs in your workflows: Editor, Executions, and Evaluations.

The Executions tab lets you view your prior executions and see at which point they failed. You can copy/pin the data from those failed executions into your editor to troubleshoot them with a bit more context. If you have access to the AI assistant, it can help you adjust from there as well.

You may also consider using n8n Data Tables in various ways to queue/store data and then process it afterward. For example, you could add a boolean column that is only set to true once an item has been successfully processed.
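Building on that idea: with a boolean "processed" column, each run can filter down to the unfinished items and effectively resume where the last run failed. A sketch against a hypothetical row shape (this is not the Data Tables API, just the filtering logic you would put in a Code node):

```javascript
// Sketch: resume-friendly filtering over rows with a hypothetical shape
// { id, claim, processed }. With rows stored in an n8n Data Table, each
// run would only pick up the items a previous run didn't finish.
function pendingRows(rows) {
  // Rows never touched yet have processed === undefined; treat those
  // the same as processed === false.
  return rows.filter((row) => row.processed !== true);
}

function markProcessed(rows, id) {
  // Flag a single row as done after its GPT score comes back.
  return rows.map((row) => (row.id === id ? { ...row, processed: true } : row));
}
```

That way a crashed run only costs you the in-flight item, not the whole 1285-claim dataset.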

Thank you so much! I will for sure try that. I tried to save every run into a file, but it only saves the runs of my batches individually, so I can’t use that. I couldn’t figure out a way to save every run into one table.
There are so many features I’m not aware of. I have to present my thesis in one week and I don’t have my results yet, so I’m very frustrated.