The idea is:
It is currently not possible to create custom AI chat model nodes (community nodes) for unintegrated API endpoints that keep and pass through “reasoning_content” after the LLM calls a tool: “additional_kwargs” cannot carry it, because it gets whitelisted out. The DeepSeekTranslator function does exactly what is needed, but it is only available for ChatDeepSeek; this kind of mechanism should also be made available for use in custom nodes. As a workaround I cache the content myself, with memory-leak protection (a TTL and size limits), but why should I have to do that when a solution already exists in the codebase?
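To illustrate the workaround mentioned above, here is a minimal sketch (not the actual node code) of a cache that stores “reasoning_content” keyed by message id, with a TTL and a hard entry cap as memory-leak protection. The class name, key scheme, and defaults are illustrative assumptions, not part of any n8n or LangChain API.

```typescript
// Illustrative sketch: bounded TTL cache for reasoning_content.
// Entries expire after ttlMs, and the map is hard-capped at maxEntries
// (oldest entry evicted first, relying on Map's insertion order).
class ReasoningCache {
  private entries = new Map<string, { content: string; expiresAt: number }>();

  constructor(
    private readonly ttlMs: number = 5 * 60 * 1000, // drop entries after 5 min
    private readonly maxEntries: number = 1000,     // hard cap on cache size
  ) {}

  // Stash reasoning_content before it would be stripped from additional_kwargs.
  set(messageId: string, content: string, now: number = Date.now()): void {
    this.evictExpired(now);
    if (this.entries.size >= this.maxEntries) {
      // Still full after expiry sweep: evict the oldest entry.
      const oldest = this.entries.keys().next().value;
      if (oldest !== undefined) this.entries.delete(oldest);
    }
    this.entries.set(messageId, { content, expiresAt: now + this.ttlMs });
  }

  // Retrieve it again after the tool call, to reattach to the request.
  get(messageId: string, now: number = Date.now()): string | undefined {
    const entry = this.entries.get(messageId);
    if (!entry) return undefined;
    if (entry.expiresAt <= now) {
      this.entries.delete(messageId);
      return undefined;
    }
    return entry.content;
  }

  private evictExpired(now: number): void {
    for (const [key, entry] of this.entries) {
      if (entry.expiresAt <= now) this.entries.delete(key);
    }
  }
}
```

The point is not the cache itself but that none of this bookkeeping should be needed if the existing translator mechanism were exposed to custom nodes.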