Request prompt caching support for Claude


The idea is:

To add support for Claude’s prompt caching feature in n8n’s AI Agent node by implementing a toggle switch that lets users enable or disable it. This would let users reduce cost when making repeated calls to Claude’s API with largely identical prompts.

My use case:

When working with Claude in automated workflows, I frequently send similar prompts with minor variations. Without prompt caching enabled, I’m paying for the full context processing each time, even when much of the prompt remains unchanged between requests. Adding a simple toggle would allow me to control when this optimization is applied.

I think it would be beneficial to add this because:

  1. Cost optimization: Prompt caching can significantly reduce API costs when working with similar prompts
  2. Performance improvement: Cached prompts can lead to faster response times
  3. Parity with OpenAI: OpenAI applies prompt caching automatically, whereas Claude requires it to be explicitly requested per call
  4. User control: A toggle gives users the flexibility to decide when caching makes sense for their specific use case
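For reference, Anthropic’s Messages API opts into caching by attaching a `cache_control` marker to a content block in the request, which is what the proposed toggle would need to add under the hood. A minimal sketch of such a payload (the model name and prompt text are placeholders, and `build_cached_request` is a hypothetical helper, not n8n code):

```python
def build_cached_request(system_text: str, user_text: str) -> dict:
    """Build a Messages API payload whose system prompt is cache-eligible."""
    return {
        "model": "claude-3-5-sonnet-20241022",  # placeholder model name
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": system_text,
                # Marks the prompt prefix up to this block as cacheable;
                # subsequent requests repeating the same prefix can reuse
                # the cached context instead of reprocessing it.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": user_text}],
    }

payload = build_cached_request("Long, stable instructions...", "New question")
```

A toggle in the node would simply decide whether the `cache_control` entry is included in the outgoing request.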

Any resources to support this?

Are you willing to work on this?

While my programming skills are limited, I’m willing to assist with testing, providing feedback, or helping in other ways that match my skill level. I’m very interested in seeing this feature implemented.

Joining this request, but for all models that support it!

It’s a recurring topic: