Is there a way to store a set of instructions (like how the AI should respond, its tone, the steps, etc.) in memory so that the AI chatbot can refer to them when needed, without having to include the full system prompt in the workflow every time?
This is usually done via the system prompt, which is added only once. Alternatively, you can try passing dynamic input in the user prompt, clearly marking which part is instructions and which part is user input.
What do you have in mind?
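To make the "system prompt added once" idea concrete, here is a minimal sketch (plain Python, not n8n-specific; the helper names and instruction text are made up for illustration). The instructions live in a single system message at the start of the conversation, and every later call only appends user/assistant turns on top of it:

```python
# Hypothetical instruction block - tone, steps, etc. - stored once.
SYSTEM_INSTRUCTIONS = (
    "You are a support assistant. Answer in a friendly tone, "
    "in at most three short steps."
)

def new_conversation():
    # The system prompt sits at index 0 and is never repeated.
    return [{"role": "system", "content": SYSTEM_INSTRUCTIONS}]

def add_user_turn(messages, text):
    # Later turns only append user input; the instructions stay where they are.
    messages.append({"role": "user", "content": text})
    return messages

conversation = new_conversation()
add_user_turn(conversation, "How do I reset my password?")
add_user_turn(conversation, "And how do I change my email address?")
```

The whole `conversation` list is what gets sent to the model on each call, so the model always "sees" the instructions without you re-typing them in every user message.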
What you are probably looking for is context (or prompt) caching, which is currently available for two major AI assistants (last I checked), Gemini and Claude, but neither is natively integrated into n8n yet.
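Since it isn't natively integrated, one workaround is to call the API directly from an n8n HTTP Request node. As a hedged sketch, here is roughly what an Anthropic prompt-caching request body looks like (field names follow Anthropic's Messages API; the model name and instruction text are placeholders you would replace with your own). The `cache_control` marker on the system block asks the API to cache the long instructions between calls:

```python
import json

# Sketch of a Messages API request body with prompt caching.
# The long, reusable instruction block is marked as cacheable.
payload = {
    "model": "claude-3-5-sonnet-20241022",
    "max_tokens": 512,
    "system": [
        {
            "type": "text",
            "text": "LONG INSTRUCTION BLOCK: tone, response steps, formatting rules...",
            "cache_control": {"type": "ephemeral"},
        }
    ],
    "messages": [{"role": "user", "content": "Summarise this ticket"}],
}

body = json.dumps(payload)  # paste/map this into the HTTP Request node's JSON body
```

On subsequent calls with the same system block, the cached portion is billed at a reduced rate, which is exactly the "don't pay for the full prompt every time" behaviour being asked about.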
my current situation is that my OpenAI credits got drained because it kept re-prompting all the conditions at once, and within about 6 hours 100k credits were used. Is there any other way for me to use this workflow without losing too many credits all at once?
hi there @gorgle_fornel, on the OpenAI chat model there's an option to limit the maximum number of tokens it can consume in one execution
i think you can use that to limit how many tokens are used per execution
also, are you sure the tokens were drained by the prompting and not by something else?
you can verify this by test-executing the workflow and checking in the logs how many tokens it uses
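As a rough sketch of what that check looks like: each OpenAI response reports its prompt and completion token usage, so summing those numbers across the calls in one test execution shows where the credits actually go (the log shape below is hypothetical, invented for illustration; real n8n execution logs expose per-call usage in a similar form):

```python
# Hypothetical per-call usage entries from one test execution.
execution_log = [
    {"node": "OpenAI Chat Model", "usage": {"prompt_tokens": 1200, "completion_tokens": 300}},
    {"node": "OpenAI Chat Model", "usage": {"prompt_tokens": 1250, "completion_tokens": 280}},
]

# Total tokens consumed by this execution.
total = sum(
    entry["usage"]["prompt_tokens"] + entry["usage"]["completion_tokens"]
    for entry in execution_log
)
print(total)
```

If the prompt-token side dominates, the re-sent instructions are the culprit; if it's the completion side, capping max tokens will help more.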
like this one
hi there, if i help solve your question please mark my answer as the solution @gorgle_fornel