Hi everyone,
I’m working on a scenario where I want to build a bot that holds data about a set of tasks and projects, and users can interact with it by asking questions, for example:
- “Which tasks are left for today?”
- “Which tasks haven’t been completed yet?”
In this scenario, the bot should generate SQL queries from the database schema defined in the system message and the user’s input, and use them to fetch the relevant data.
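For illustration, the schema could be embedded directly in the system message. Here’s a minimal sketch in TypeScript; the table and column names are invented placeholders, not a real schema:

```typescript
// Hypothetical schema description, inlined into the system message so the
// agent knows which tables and columns it may reference.
const schema = `
Tables:
  projects(id INT PRIMARY KEY, name TEXT)
  tasks(id INT PRIMARY KEY, project_id INT REFERENCES projects(id),
        title TEXT, due_date DATE, completed BOOLEAN)
`;

const systemMessage = [
  "You are a read-only assistant for a task database.",
  "Generate a single SELECT statement that answers the user's question.",
  "Only use the tables and columns listed below:",
  schema,
].join("\n");
```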
Here’s an added complexity:
- Some tasks may contain a query snippet that the user doesn’t fully understand and wants to improve, so they will paste it into the chat and ask about it.
- The bot therefore needs to determine when it should execute a query and when it should just analyze it or provide an explanation (see the routing sketch after this list).
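One cheap option, before any agent is involved, is a heuristic router. This sketch is a plain function that could sit in an n8n Code node in front of a Switch node; it only illustrates the idea and would need tuning for real inputs:

```typescript
// Rough routing heuristic: decide whether the user pasted a SQL snippet
// to analyze, or asked a question the bot should answer by generating
// and executing a query itself.
function route(userMessage: string): "analyze" | "execute" {
  // If the message contains a code fence or a SELECT ... FROM shape,
  // assume the user wants it explained or improved, not run.
  const looksLikeSql =
    /```/.test(userMessage) ||
    /\bselect\b[\s\S]+\bfrom\b/i.test(userMessage);
  return looksLikeSql ? "analyze" : "execute";
}
```

Alternatively, a single agent can do the routing implicitly: give it two clearly described tools (one that executes validated queries, one that only explains) and let tool choice decide.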
To handle this, I thought about creating two separate AI Agents:
- One for generating queries and fetching data
- Another for answering questions and analyzing
Security is also a concern:
- We want to ensure that the generated queries are safe and don’t allow SQL injection or unauthorized access.
- However, it seems that we cannot place a validation layer directly between the agent and its tool, and we also cannot fully trust the system message or the agent itself (a sketch of what such a validator might look like follows below).
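A defense-in-depth check can still live in a separate node between the query-generating agent and the database node. Here is a rough sketch of such a validator, assuming the query arrives as a plain string; the denylist is deliberately conservative and is not a substitute for a read-only database user:

```typescript
// Minimal sketch of a query validator that could run in an n8n Code node
// before the database fetch. Pair it with read-only DB credentials and,
// ideally, an allowlist of permitted tables/columns.
function validateQuery(sql: string): { ok: boolean; reason?: string } {
  const trimmed = sql.trim().replace(/;+\s*$/, "");

  // Reject multiple statements (a classic injection vector). This is
  // conservative: semicolons inside string literals are also rejected.
  if (trimmed.includes(";")) return { ok: false, reason: "multiple statements" };

  // Allow only plain SELECTs.
  if (!/^select\b/i.test(trimmed)) return { ok: false, reason: "not a SELECT" };

  // Block obviously dangerous keywords even inside a SELECT.
  if (/\b(insert|update|delete|drop|alter|grant|truncate|into\s+outfile)\b/i.test(trimmed)) {
    return { ok: false, reason: "forbidden keyword" };
  }

  return { ok: true };
}
```

The real guarantee should come from the database side: connect with read-only credentials scoped to only the tables the bot is allowed to see, so even a query that slips past the validator cannot modify or exfiltrate anything else.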
A potential solution I considered:
- Create a separate workflow for validation and data retrieval, and provide that workflow as a tool to the agent.
- Use a second agent for questions and analysis.
- Alternatively, have two AI Agents in one workflow: the first generates the query, validation nodes check it, a database node fetches the data, and the results are finally passed to the second agent (a possible tool contract is sketched below).
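If you go the sub-workflow-as-tool route, it may help to pin down the contract between the agent and the tool explicitly. These TypeScript shapes are purely illustrative, not an n8n API:

```typescript
// Hypothetical input/output contract for the validation + fetch
// sub-workflow exposed to the agent as a tool.
interface QueryToolInput {
  sql: string; // the query the agent wants to run
}

interface QueryToolOutput {
  ok: boolean;
  rows?: Record<string, unknown>[]; // result set on success
  error?: string;                   // validation/execution failure message
}
```

Returning validation failures as structured output instead of letting the workflow error out means the agent can read the reason and retry with a corrected query.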
However, this second approach doesn’t seem very efficient.
My questions are:
- What is the best practice for separating responsibilities between AI Agents and workflows in a scenario like this where databases and sensitive data are involved?
- How secure is this kind of setup, and what methods would you recommend to ensure query safety and data security?
I’d really appreciate any experiences, tips, or best practices you can share for handling this in n8n.