Has anyone successfully implemented end-to-end logic for OpenAI Batch API calls in n8n? Specifically: creating the batch input in a workflow, retrieving the results from the Batch API once it's done, and saving those results in their original order. Looking for insights!
- n8n version: 1.69.2
- Database (default: SQLite): SQLite
- n8n EXECUTIONS_PROCESS setting (default: own, main): own, main
- Running n8n via (Docker, npm, n8n cloud, desktop app): self-hosted on Google Cloud
- Operating system: Windows 10
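Not a full workflow, but a minimal sketch of the two pieces that could run inside n8n Code nodes: building the JSONL input lines the Batch API expects (each line carries a `custom_id`), and re-sorting the output lines, since batch results are not guaranteed to come back in input order. The `prompts` array, `req-<index>` id scheme, and function names are my own assumptions, not from any n8n node.

```javascript
// Sketch for n8n Code nodes, assuming an array of prompt strings.
// Each JSONL line needs a unique custom_id; embedding the original
// index in it lets us restore order after the batch completes.
function buildBatchLines(prompts, model = "gpt-4o-mini") {
  return prompts.map((prompt, i) =>
    JSON.stringify({
      custom_id: `req-${i}`,            // index encoded here
      method: "POST",
      url: "/v1/chat/completions",
      body: { model, messages: [{ role: "user", content: prompt }] },
    })
  );
}

// The batch output file is also JSONL, but its lines may be in any
// order; sort by the index embedded in custom_id to restore it.
function sortBatchResults(outputLines) {
  return outputLines
    .map((line) => JSON.parse(line))
    .sort(
      (a, b) =>
        Number(a.custom_id.split("-")[1]) -
        Number(b.custom_id.split("-")[1])
    );
}
```

In between those two steps you would upload the joined lines as a file with `purpose: "batch"`, create the batch with `completion_window: "24h"`, and poll its status (e.g. with an n8n Wait node in a loop) until it reports `completed` before downloading the output file.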