Do you use “human in the loop” control in your n8n workflows? If yes, how often?

I’ve been experimenting with adding human approval steps to automated workflows, like pausing a process until someone reviews and approves or rejects an item (rough sketch of the pattern at the end of this post).
Curious how others here approach it:
• Do you have human review steps in your n8n workflows?
• If yes, what’s your main use case: quality checks, compliance, client approvals, etc.?
• How often do you find yourself using them vs. letting the workflow run 100% automatically?
Would love to hear examples of when it’s worth keeping a human in the loop, and when it’s better to trust the automation entirely.
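
For concreteness, here’s roughly the pattern I mean, sketched in plain TypeScript rather than n8n nodes (in n8n you’d typically build it with something like a Wait node or a send-and-wait approval step). `requestApproval`, `submitDecision`, and the in-memory `pending` map are names made up for illustration, not an n8n API:

```typescript
// Minimal sketch of a human approval gate: the automated part produces an
// item, then blocks until a reviewer resolves it. All names are hypothetical.

type Decision = "approved" | "rejected";

interface ReviewRequest {
  item: string;
  resolve: (decision: Decision) => void;
}

// Pending reviews, keyed by item id; a real setup would persist these.
const pending = new Map<string, ReviewRequest>();

// Pause the workflow until a human decides. Returns a promise that the
// reviewer's action (e.g. clicking an approve link) later resolves.
function requestApproval(id: string, item: string): Promise<Decision> {
  return new Promise<Decision>((resolve) => {
    pending.set(id, { item, resolve });
    console.log(`Review needed for ${id}: ${item}`);
  });
}

// Called by the "approve"/"reject" endpoint the reviewer hits.
function submitDecision(id: string, decision: Decision): void {
  const request = pending.get(id);
  if (!request) return;
  pending.delete(id);
  request.resolve(decision);
}

// Workflow: generate a draft, wait for review, then act on the decision.
async function run(): Promise<void> {
  const decision = await requestApproval("draft-1", "Client email draft");
  if (decision === "approved") {
    console.log("Sending email...");
  } else {
    console.log("Discarded after rejection.");
  }
}

run();
// Simulate the reviewer approving a moment later.
setTimeout(() => submitDecision("draft-1", "approved"), 100);
```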

Yes, I use it often, for everything I don’t want AI to mess up, like writing to a client or generating a requirements backlog.

You can build a feedback loop that keeps revising the output until you’re okay with what the AI produced.
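
Something like this shape, sketched in TypeScript with placeholder functions; `generateDraft` and `askReviewer` just stand in for the AI step and the human review step, they’re not real n8n calls:

```typescript
// Sketch of a human feedback loop: regenerate until the reviewer accepts.

interface Review {
  accepted: boolean;
  feedback?: string;
}

async function generateDraft(task: string, feedback?: string): Promise<string> {
  // Placeholder for an LLM call; feedback from the last round is folded in.
  return feedback ? `${task} (revised: ${feedback})` : `${task} (first draft)`;
}

async function askReviewer(draft: string): Promise<Review> {
  // Placeholder for the approval step (form, chat message, email, ...).
  return { accepted: draft.includes("revised"), feedback: "shorter please" };
}

async function feedbackLoop(task: string, maxRounds = 3): Promise<string | null> {
  let feedback: string | undefined;
  for (let round = 0; round < maxRounds; round++) {
    const draft = await generateDraft(task, feedback);
    const review = await askReviewer(draft);
    if (review.accepted) return draft; // human is happy, stop looping
    feedback = review.feedback;        // otherwise feed the notes back in
  }
  return null; // give up after maxRounds and escalate manually
}

feedbackLoop("Write client email").then((result) =>
  console.log(result ?? "Escalated to a human after too many rounds")
);
```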

I often log inputs and outputs in a database and validate each AI action, so I can use Evaluations to refine and improve the prompt engineering. It’s one of my best use cases, and the method that gets me to 99% success with AI decision-making.
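
A simplified TypeScript sketch of that idea; the `DecisionLog` fields and the `approvalRate` metric are only illustrative, not a specific schema:

```typescript
// Log each AI decision together with the human verdict, so the records can
// later be evaluated to check whether prompt changes actually help.

interface DecisionLog {
  timestamp: string;
  input: string;       // what the AI was given
  output: string;      // what the AI produced
  humanVerdict: "approved" | "rejected";
  reviewerNote?: string;
}

const logs: DecisionLog[] = []; // stand-in for a real database table

function logDecision(entry: DecisionLog): void {
  logs.push(entry);
}

// Simple evaluation pass: the approval rate over time shows whether the
// prompt refinements are improving decision quality.
function approvalRate(entries: DecisionLog[]): number {
  if (entries.length === 0) return 0;
  const approved = entries.filter((e) => e.humanVerdict === "approved").length;
  return approved / entries.length;
}

logDecision({
  timestamp: new Date().toISOString(),
  input: "Summarize ticket #123",
  output: "Customer asks for a refund...",
  humanVerdict: "approved",
});

console.log(`Approval rate: ${(approvalRate(logs) * 100).toFixed(1)}%`);
```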
