End-to-End Workflow for OpenAI Batch API Calls in n8n?

Has anyone successfully implemented end-to-end logic for OpenAI Batch API calls in n8n? Specifically: creating the batch in a workflow, calling the Batch API when it’s ready, and saving the results in order. Looking for insights!

  • n8n version: 1.69.2
  • Database (default: SQLite): SQLite
  • n8n EXECUTIONS_PROCESS setting (default: own, main): own, main
  • Running n8n via (Docker, npm, n8n cloud, desktop app): self-hosted on Google Cloud
  • Operating system: Windows 10
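For context on the steps involved: per OpenAI’s docs, the Batch API lifecycle is upload a `.jsonl` file of requests, create a batch from that file, poll until its status is `completed`, then download the output file. The output lines are matched back to inputs via `custom_id`, which is also how you can restore the original order. A minimal sketch of building the input file (model name and prompts are placeholders):

```python
import json

def build_batch_line(custom_id, prompt, model="gpt-4o-mini"):
    # One line of the .jsonl input file the Batch API expects.
    # custom_id is how you re-associate results (and order) later.
    return json.dumps({
        "custom_id": custom_id,
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    })

prompts = ["Summarise item A", "Summarise item B"]
jsonl = "\n".join(build_batch_line(f"req-{i}", p) for i, p in enumerate(prompts))
```

In n8n this file would then be uploaded with an HTTP Request node to `POST /v1/files` (purpose `batch`) and the batch created via `POST /v1/batches`.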

Hi @Kiremit

I believe the Batch API isn’t implemented in the OpenAI node by default, but as you say, you could build a workflow that creates the batch and calls OpenAI directly.

I’ll let someone else from the community share if they have implemented this already :slight_smile:

It could be done manually, yes, but the problem is more with the batch itself… and how to automate that.
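The automation part usually comes down to polling: in n8n that would be an HTTP Request node hitting `GET /v1/batches/{batch_id}` inside a loop with a Wait node, continuing until the batch reaches a terminal status. The control flow is the same in any language; here is a sketch in Python, with a stubbed `get_status` callable standing in for the HTTP call (the helper name and parameters are my own, not an n8n or OpenAI API):

```python
import time

def wait_for_batch(get_status, poll_seconds=30, timeout_seconds=86400):
    # Poll get_status() until the batch reaches a terminal state.
    # get_status is a callable returning the batch's status string,
    # e.g. a wrapper around GET /v1/batches/{batch_id}.
    terminal = {"completed", "failed", "expired", "cancelled"}
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = get_status()
        if status in terminal:
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("batch did not finish within the timeout")
```

Once the status is `completed`, the batch object carries an `output_file_id`; downloading that file and sorting its lines by `custom_id` gives you the results in the original order.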