Enable Prompt Caching for models in the AI Agent node

The idea is:

Add a toggle to enable Prompt Caching for each model used in the AI Agent node.

My use case:

I use LLM models in the AI Agent node with system prompts, and I would like to save tokens.
The system prompt I send is long and always starts with the same prefix, so caching that shared prefix would cut costs.

I think it would be beneficial to add this because:

Gemini models support both implicit caching and explicit caching, and I would like to be able to set explicit caching.
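For reference, here is a minimal sketch (not n8n code, just the Gemini REST API as I understand it) of how explicit caching works: you first create a `cachedContents` resource holding the shared system prompt, then reference it in later `generateContent` calls. The endpoint version, model name, and TTL below are assumptions for illustration only.

```typescript
// Sketch of Gemini explicit caching via the REST API.
// Assumptions: v1beta endpoint, gemini-1.5-flash-001 as an example model, 300s TTL.
const API_KEY = process.env.GEMINI_API_KEY;
const BASE = "https://generativelanguage.googleapis.com/v1beta";

// 1. Create a cached content resource holding the long, shared system prompt.
const cacheRes = await fetch(`${BASE}/cachedContents?key=${API_KEY}`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "models/gemini-1.5-flash-001",
    systemInstruction: { parts: [{ text: "...very long shared system prompt..." }] },
    ttl: "300s", // how long the cache entry should live
  }),
});
const cache = await cacheRes.json(); // returns a resource name like "cachedContents/abc123"

// 2. Reference the cache in later generateContent calls so the shared prefix
//    is served from the cache instead of being sent as fresh input tokens.
const genRes = await fetch(
  `${BASE}/models/gemini-1.5-flash-001:generateContent?key=${API_KEY}`,
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      cachedContent: cache.name,
      contents: [{ role: "user", parts: [{ text: "my actual question" }] }],
    }),
  }
);
console.log(await genRes.json());
```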

Claude also has a parameter (`cache_control`) to enable prompt caching.
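As a rough illustration of what such a toggle would need to emit, here is a sketch of an Anthropic Messages API call with `cache_control` on the system block. The model name, `max_tokens`, and prompt text are placeholders, not what n8n actually sends today.

```typescript
// Sketch of Anthropic prompt caching: mark the long, stable system prompt
// block with cache_control so repeated calls can reuse the cached prefix.
const res = await fetch("https://api.anthropic.com/v1/messages", {
  method: "POST",
  headers: {
    "x-api-key": process.env.ANTHROPIC_API_KEY ?? "",
    "anthropic-version": "2023-06-01",
    "content-type": "application/json",
  },
  body: JSON.stringify({
    model: "claude-3-5-sonnet-latest", // illustrative placeholder
    max_tokens: 1024,
    system: [
      {
        type: "text",
        text: "...very long shared system prompt...",
        cache_control: { type: "ephemeral" }, // cache this prefix
      },
    ],
    messages: [{ role: "user", content: "my actual question" }],
  }),
});
console.log(await res.json());
```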

Any resources to support this?

Are you willing to work on this?

I'm not at an advanced level, but I could help with testing or similar tasks.