I am currently developing an AI chat agent for a travel company. My technical stack includes a self-hosted n8n instance, Google’s Gemini 2.5, and AWS. I believe the default database is SQLite, as I’ve used a standard Docker setup.
I’m relatively new to n8n and would appreciate some advice on how to avoid potential slowdowns and errors in my workflow. Specifically, I’m wondering if n8n can comfortably handle a load of approximately 200-600 messages per day.
What best practices should I follow to ensure my setup is robust and scalable?
Thank you!
And this is the workflow. Also, I will change the Google Sheet to the one the company has, and the Telegram node will be changed to a webhook.
200-600 messages a day is a huge amount, or at least an amount that needs a bulletproof system. Since you are working with an AI agent, you have to make sure the AI isn't used outside its intended scope, or, most importantly, isn't abused.
The system prompt you have is detailed and seems good. But why not add another AI node, a classifier that uses a cheaper model, so your main agent isn't overworked?
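To illustrate the idea: the cheap classifier returns a one-word label, and a Code or Switch node gates whether the message ever reaches the main agent. This is only a sketch under assumed labels; the label names and the `shouldReachMainAgent` helper are illustrative, not part of n8n or any model's API.

```javascript
// Sketch: gate the main agent on a cheap classifier's one-word label.
// The labels below are assumptions -- use whatever categories fit your
// travel use case. In n8n this would be a Code node (or a Switch node)
// running after the classifier LLM node.

const ALLOWED = new Set(["travel_question", "booking", "smalltalk"]);

function shouldReachMainAgent(label) {
  // Normalize the model's output before comparing, since LLMs often
  // add whitespace or vary capitalization.
  return ALLOWED.has(String(label).trim().toLowerCase());
}

// usage
console.log(shouldReachMainAgent("travel_question")); // true
console.log(shouldReachMainAgent("OFF_TOPIC"));       // false
```

Off-topic messages can then get a canned reply instead of a (more expensive) main-agent call.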
You could also add a rate-limiting system so users don’t spam and deplete your resources.
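A minimal per-user rate limiter might look like the sketch below. In an n8n Code node you would persist the `store` object with `getWorkflowStaticData()` (or an external store like Redis for reliability across restarts); here it is a plain object so the example is self-contained, and the `WINDOW_MS`/`MAX_MESSAGES` values are illustrative.

```javascript
// Sketch of a fixed-window per-user rate limiter.
// Assumption: one invocation per incoming message, keyed by user ID.

const WINDOW_MS = 60 * 1000;  // 1-minute window (tune to taste)
const MAX_MESSAGES = 5;       // allow 5 messages per user per window

function isAllowed(store, userId, now = Date.now()) {
  const entry = store[userId] || { windowStart: now, count: 0 };
  if (now - entry.windowStart >= WINDOW_MS) {
    // Window expired: start a fresh one.
    entry.windowStart = now;
    entry.count = 0;
  }
  entry.count += 1;
  store[userId] = entry;
  return entry.count <= MAX_MESSAGES;
}

// usage: the 6th message inside one window is blocked
const store = {};
for (let i = 0; i < 6; i++) {
  console.log(isAllowed(store, "user-1", 1000));
}
```

When `isAllowed` returns false, route the message to a polite "please slow down" reply instead of the agent.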
A Code node that detects messages containing files, images, stickers, or anything other than text could also be helpful, depending on your use case.
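Such a check can be a few lines in a Code node. The field names below (`photo`, `document`, `sticker`, ...) follow the Telegram Bot API message object; if you switch to a webhook with a different payload shape, adjust them accordingly.

```javascript
// Sketch: accept only plain-text messages, reject anything with attachments.
// Field names assume a Telegram Bot API "Message" object.

const NON_TEXT_FIELDS = [
  "photo", "document", "sticker", "voice",
  "audio", "video", "video_note",
];

function isTextOnly(message) {
  // Require a non-empty text field...
  if (!message || typeof message.text !== "string" || message.text.length === 0) {
    return false;
  }
  // ...and no attachment fields alongside it (e.g. a photo with a caption).
  return NON_TEXT_FIELDS.every((field) => !(field in message));
}

// usage
console.log(isTextOnly({ text: "hello" }));           // true
console.log(isTextOnly({ photo: [{}] }));             // false
console.log(isTextOnly({ text: "hi", sticker: {} })); // false
```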
These are just things I do in my own workflows. Long story short: imagine every kind of user who might message the agent and take precautions, especially against jailbreak messages; they're old-school, but in some cases they still work.
Thanks! Using a cheaper model for the classifier isn't an option, because my language isn't supported by every AI; only a few models support it, so that's a problem. I also plan to add limits, which is an important step. As for files and such, I don't think anyone will send those, just text messages.
And I was counting extra messages; I don't think it will exceed 200-300 messages, probably less than that (that's the peak, probably not every day), so I said 200-600 just in case.
But overall, could n8n support that? Or should I drop the idea and stop using it?