The idea is:
Being able to set the “prompt_cache_retention” property on the OpenAI node.
My use case:
I want faster responses and lower costs for AI usage.
I think it would be beneficial to add this because:
Faster and cheaper OpenAI node executions: prompt caching reduces latency and bills cached input tokens at a discounted rate, and extending cache retention makes cache hits more likely between workflow runs.
Any resources to support this?
https://platform.openai.com/docs/guides/prompt-caching#configure-per-request
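To illustrate what the node would need to expose, here is a minimal sketch of the request body with the per-request setting applied. This assumes the API accepts a top-level "prompt_cache_retention" field with a value such as "24h", as described in the prompt-caching docs linked above; the model name and input are placeholders.

```python
import json

# Sketch of a request payload using the per-request cache-retention
# setting this feature request asks the OpenAI node to expose.
payload = {
    "model": "gpt-4.1",            # placeholder model
    "input": "Summarize this document.",
    "prompt_cache_retention": "24h",  # assumed field name/value per the docs
}

# The node would merge this property into the request body it already
# builds, alongside model and messages/input.
print(json.dumps(payload, indent=2))
```

In practice the node could surface this as an optional dropdown (e.g. default vs. "24h"), omitting the field entirely when unset so existing workflows are unaffected.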
Are you willing to work on this?
I would like to, yes.