Hi everyone,
I’m looking for some architectural advice on moving a messy manual process into a structured n8n workflow.
The Current Problem
Our team handles intake via a form that creates a Jira ticket. However, the “Requirement Collection” stage is currently a disaster. We need answers to 20 specific questions (Campaign objectives, brand strategy, etc.), but these currently come in via ad-hoc Slack messages or calls. They are often incomplete or inconsistent.
The Goal
I want to build an n8n workflow that:
Triggers via Slack: When a user interacts with a Slack Bot/Slash command.
Guided Interview: The bot asks the 20 questions one by one (to avoid a massive, intimidating wall of text).
AI Validation: As the user answers, n8n passes the response to an LLM (like OpenAI or Anthropic) to compare it against our “Gold Standard” examples.
Feedback Loop: If an answer is too vague (e.g., “Make it go viral”), the bot asks them to elaborate before moving to the next question.
Jira Update: Once all 20 questions are validated, the Jira ticket is updated with the full, high-quality answers.
Hi @nikhilnegi, welcome to the n8n community!
Based on what I researched, this can be implemented in n8n using three main parts: Slack, AI validation, and Jira. I would trigger the workflow with a Slack Slash Command or Slack Trigger that calls an n8n webhook, then run a guided interview where the bot asks the questions one by one instead of sending all of them at once. Each answer can be sent to an LLM together with a few “gold standard” examples so the model can evaluate if the response is acceptable or too vague. If the answer is vague, the bot asks the user to elaborate; if it is acceptable, the workflow stores the response and moves to the next question while keeping the conversation context. Once all answers are validated, the workflow updates the Jira ticket using the Jira node with the collected information.
I think the key piece is state management between Slack messages: every incoming Slack event is a fresh workflow execution, so you need somewhere to store “where we are” in the interview. Here’s my approach:
A Google Sheet or any DB, with one row per user/session and columns for current_question_index, answers_so_far (JSON), and jira_ticket_id.
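To make that concrete, here’s a minimal sketch of what one session row could look like (the field names and the example values are just suggestions, not anything n8n prescribes):

```javascript
// Hypothetical session row, keyed by Slack user_id.
// Adapt the column names to whatever your sheet/DB uses.
const session = {
  slack_user_id: "U12345678",           // lookup key for the session
  current_question_index: 0,            // 0-based pointer into the 20 questions
  answers_so_far: JSON.stringify([]),   // serialized array of validated answers
  jira_ticket_id: "MKT-123",            // ticket to update once all answers are in
  updated_at: new Date().toISOString(), // handy later if you add a session timeout
};
```

Storing answers_so_far as a JSON string keeps the row flat enough for a Google Sheet while still letting a Code node `JSON.parse` it back into an array.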
The core architecture:
Slack Trigger (webhook) receives the user’s message → read their session from the sheet or DB using their Slack user_id as the key.
IF node: Is this the first message or a /command? If YES, initialize the session with question index = 0 and an empty answers array, then send Question 1.
If it’s a reply, grab the current question index and send that answer + your gold standard examples to the LLM with a prompt like “Is the answer specific and actionable? Return JSON: {valid: true/false, reason: string}”.
IF node on the validation result: Valid → increment the question index, store the answer, and send the next question (or trigger the Jira update if the index reaches 20).
Invalid → send the LLM’s reason back as a Slack follow-up prompt and keep the index unchanged.
Jira node at the end to update the ticket with all 20 collected answers.
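The branching steps above can be sketched as a single n8n Code node. This is a rough sketch, not a drop-in node: it assumes the previous node already returned the LLM’s verdict as `{valid, reason}` and that the session row uses the column names suggested earlier.

```javascript
// Sketch of the "valid / invalid" branch logic after the LLM validation step.
const TOTAL_QUESTIONS = 20;

function advance(session, verdict, answer) {
  if (!verdict.valid) {
    // Invalid answer: keep the index unchanged and
    // bounce the LLM's reason back to Slack as a follow-up.
    return { ...session, followUp: verdict.reason, done: false };
  }
  // Valid answer: store it and move the pointer forward.
  const answers = JSON.parse(session.answers_so_far);
  answers.push(answer);
  const nextIndex = session.current_question_index + 1;
  return {
    ...session,
    current_question_index: nextIndex,
    answers_so_far: JSON.stringify(answers),
    done: nextIndex >= TOTAL_QUESTIONS, // true -> route to the Jira update
  };
}
```

Downstream, an IF node can route on `done` (Jira update) and on `followUp` (re-ask the current question).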
The biggest headache here is going to be state management, since every Slack message fires a totally separate workflow execution, so you need a database row per user tracking which question they’re on and their answers so far. I’d honestly start from this template Conversational interviews with AI agents and n8n forms | n8n workflow template and swap the forms trigger for a Slack trigger; the core loop logic is basically what you need.
@OMGItsDerek Following up on this: would it be more effective to implement a Unique ID (as suggested in @achamm’s template) rather than relying on thread_ts?
In my testing, I noticed the ts value updates with every message. I’m assuming your idea is that the bot replies in a thread until the questions are completed.
On another note: we are thinking of implementing a session timeout. If a user doesn’t complete the questions within a set window (e.g., 2 hours), the workflow should terminate the current session and delete the entry. I’m thinking we can check the session timestamp; if it’s expired and the questions are not completed, we clear the entry so a fresh session can begin.
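That timeout check is simple if the session row stores an `updated_at` timestamp (an assumed field, refreshed on every answer). A minimal sketch, with the 2-hour window as a tunable constant:

```javascript
// Sketch of the session-expiry check for the timeout idea above.
const SESSION_TTL_MS = 2 * 60 * 60 * 1000; // 2-hour window, adjust as needed

function isExpired(session, now = Date.now()) {
  const lastActivity = new Date(session.updated_at).getTime();
  return now - lastActivity > SESSION_TTL_MS;
}
```

You could run this at the top of the workflow (clear the row and restart if expired) or in a separate scheduled cleanup workflow that sweeps stale rows.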
the answers above cover the architecture really well. one practical tip from building something similar: use thread_ts (not just user_id) as your session key. otherwise you can’t handle a user running two separate intakes in parallel. also for the ai validation step — keep the prompt tight and force json output. something like ‘is this answer specific and actionable? respond only with {valid: true/false, reason: “…”}’. makes the branching logic clean. and +1 on external store over static data — static data doesn’t survive redeploys.
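On the “force json output” tip: even with a tight prompt, models occasionally wrap the JSON in prose or code fences, so a defensive parse in a Code node is worth having. A sketch (the fallback-to-invalid behavior is my choice, not anything the thread prescribes):

```javascript
// Defensive parse of the LLM verdict. Extracts the first {...} block so
// stray prose or markdown fences around the JSON don't break the workflow.
function parseVerdict(raw) {
  const match = raw.match(/\{[\s\S]*\}/);
  if (!match) {
    return { valid: false, reason: "Could not parse validator output" };
  }
  try {
    const v = JSON.parse(match[0]);
    return { valid: Boolean(v.valid), reason: String(v.reason ?? "") };
  } catch {
    return { valid: false, reason: "Could not parse validator output" };
  }
}
```

Treating an unparseable verdict as invalid just re-asks the question, which fails safer than silently accepting a vague answer.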