Add support for advanced OpenAI request parameters to the OpenAI node, including:
- Service Tier / Priority Processing
- Token calculation breakdown:
  - Input tokens
  - Output tokens
  - Cached tokens (where applicable)

These options would live under an “Other” or “Advanced” section to avoid cluttering the default UI.
Use Case
For production workflows using OpenAI at scale, especially latency-sensitive or cost-sensitive pipelines, it’s important to:

- Explicitly control service tiers / priority processing
- Track token usage more precisely, including cached tokens
- Make cost, performance, and quota tradeoffs visible and configurable at the node level
Right now, these capabilities are available via the OpenAI API but not exposed in the node configuration.
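For reference, the underlying Chat Completions API already accepts a `service_tier` request parameter and returns the token breakdown in its `usage` object (including `prompt_tokens_details.cached_tokens`). A minimal sketch of the fields the node could surface — the request/response field names follow the OpenAI API, the values are illustrative, and `summarize_usage` is a hypothetical helper, not part of any SDK:

```python
# Request fields the node could expose under an "Advanced" section.
# Field names follow the OpenAI Chat Completions API.
request = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}],
    # Service tier / priority processing, currently reachable only via the raw API:
    "service_tier": "priority",
}

# Shape of the usage object the API returns (values are illustrative):
response_usage = {
    "prompt_tokens": 1200,
    "completion_tokens": 350,
    "total_tokens": 1550,
    "prompt_tokens_details": {"cached_tokens": 1024},
}

def summarize_usage(usage: dict) -> dict:
    """Hypothetical helper: break usage into the counts the node could report."""
    cached = usage.get("prompt_tokens_details", {}).get("cached_tokens", 0)
    return {
        "input_tokens": usage["prompt_tokens"],
        "output_tokens": usage["completion_tokens"],
        "cached_input_tokens": cached,
        # Cached input is typically billed at a discount, so the uncached
        # remainder is what drives most of the input cost:
        "uncached_input_tokens": usage["prompt_tokens"] - cached,
    }

print(summarize_usage(response_usage))
```

Surfacing exactly these fields per node execution would cover both the service-tier control and the token-breakdown observability described above.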
Why This Is Useful
This feature would:

- Enable fine-grained cost control and observability
- Improve performance tuning for real-time or user-facing flows
- Reduce the need for custom wrappers or external instrumentation
- Keep the node aligned with current OpenAI platform capabilities
Without this, users have limited visibility into token economics and no way to opt into priority processing directly from the node.
Supporting Resources
OpenAI documentation on priority processing and service tiers:
https://platform.openai.com/docs/guides/priority-processing