Built a full AI IT support bot on WhatsApp with n8n + Claude — here's the architecture

Hey community,

Just shipped something I’ve been building for weeks and wanted to share the architecture with people who might appreciate the technical depth.

What I built: a full AI-powered IT support system running on the WhatsApp Business API — employees just message it and it handles tickets, password resets, room bookings, and more.

The stack:

- n8n (13 workflows, 140+ nodes)
- Claude AI (Haiku) for intent classification
- Supabase for conversation state + memory
- Microsoft Graph API for room booking + Entra
- Twilio for WhatsApp Business API
- Odoo for helpdesk tickets

The hardest part wasn’t the AI — it was building a state machine for 14 conversation contexts without a framework. Also hit a nasty race condition with parallel Supabase writes in fan-out executions.

I wrote up the full architecture in a LinkedIn carousel if anyone wants to see it:

Happy to go deep on any part of it — the intent router, webhook renewal, RLS policy conflicts, or sub-workflow return data loss. All fair game.


Also adding the LinkedIn carousel link directly for anyone who wants to see the visual architecture breakdown — slides cover the n8n workflow, intent routing logic, the AI layer, and results:

Happy to answer questions here or in the comments there.

Happy to go deeper on the architecture for anyone curious. A few specific challenges worth discussing:

1. Race condition in parallel Supabase writes — two branches writing to the same row, last-write wins. Fixed by serializing into sequential chains.
2. Sub-workflow return data loss in fan-out executions — n8n drops the return payload when multiple sub-workflows execute in parallel.
3. Graph API webhook auto-renewal — webhooks expire after 3 days, so I built an auto-renewal job that runs before expiry.

Any of these worth a deeper breakdown?
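On point 3, the core of the renewal job is just comparing the subscription's `expirationDateTime` against a safety margin and PATCHing a new expiry. A minimal sketch, assuming a ~30-minute renewal margin and the standard Graph `/subscriptions/{id}` endpoint (the helper names and margin are mine, not the actual workflow's):

```javascript
// Decide whether a Graph webhook subscription needs renewal, and build the
// PATCH body. Graph subscriptions for most resources max out around 3 days,
// so a scheduled job renews anything close to expiry.

const RENEW_MARGIN_MS = 30 * 60 * 1000;          // renew if < 30 min left (assumption)
const MAX_LIFETIME_MS = 3 * 24 * 60 * 60 * 1000; // ~3-day Graph limit

function needsRenewal(expirationDateTime, now = Date.now()) {
  return new Date(expirationDateTime).getTime() - now < RENEW_MARGIN_MS;
}

function renewalBody(now = Date.now()) {
  // Graph expects an ISO-8601 expirationDateTime in the PATCH body.
  return { expirationDateTime: new Date(now + MAX_LIFETIME_MS).toISOString() };
}

// Inside an n8n Code node the PATCH itself would look roughly like:
// await fetch(`https://graph.microsoft.com/v1.0/subscriptions/${id}`, {
//   method: 'PATCH',
//   headers: { Authorization: `Bearer ${token}`, 'Content-Type': 'application/json' },
//   body: JSON.stringify(renewalBody()),
// });
```

The margin buys you headroom if a scheduled run fails once; the next run still lands before expiry.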

Really solid architecture, @essam! Welcome to the community!

Building a state machine for 14 conversation contexts manually is no joke - that’s where most people give up and ship something half-baked. The race condition you hit with parallel Supabase writes in fan-out is a classic gotcha too.

A couple things that helped me in similar setups:

  • Add a short random jitter (50-150ms) before each Supabase write in parallel branches - reduces collision probability a lot
  • For the state machine, I eventually started storing a lock flag in Supabase with a TTL so concurrent messages don’t corrupt the state
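A minimal sketch of that lock-flag-with-TTL idea, using a plain in-memory map as a stand-in for the Supabase table (the real thing would be a row with a `locked_until` timestamp column; all names here are illustrative):

```javascript
// Per-conversation lock with a TTL, so a second message arriving while the
// state machine is mid-transition waits (or bails) instead of corrupting
// state. `locks` stands in for a table keyed by conversation_id.
const locks = new Map();

function acquireLock(conversationId, ttlMs = 5000, now = Date.now()) {
  const lockedUntil = locks.get(conversationId);
  if (lockedUntil && lockedUntil > now) return false; // someone else holds it
  locks.set(conversationId, now + ttlMs);             // free or expired: take it
  return true;
}

function releaseLock(conversationId) {
  locks.delete(conversationId);
}
```

The TTL is what makes this safe: if a workflow crashes without releasing, the lock self-expires instead of wedging the conversation forever.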

Would love to see the intent router logic if you’re open to sharing. I’ve been running AI chatbots on Facebook Messenger and Zalo with n8n + Gemini, and intent classification is always the trickiest part to tune.

Great work shipping this!


Thank you @nguyenthieutoan! Really appreciate the kind words and the tips.

The jitter idea is smart — I went with full serialization (sequential chain, never parallel), which is more conservative but guarantees order. Your approach is more elegant for high-frequency scenarios though.

The lock flag with TTL is exactly what I was thinking for the next iteration. Right now I’m using a version column for optimistic locking but a proper distributed lock would be cleaner.
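For reference, the version-column pattern looks roughly like this: a compare-and-swap update that only succeeds if nobody bumped the version in between. The Supabase call is sketched in a comment; `tryUpdate` is injected here so the retry logic stands alone (all names are illustrative):

```javascript
// Optimistic locking with a version column: read state + version, compute the
// new state, then update WHERE version still matches. If another branch wrote
// first, the conditional update matches zero rows and we retry on fresh state.
//
// With supabase-js the conditional update would be roughly (assumption):
//   const { data } = await supabase.from('conversations')
//     .update({ state: next, version: version + 1 })
//     .eq('id', id).eq('version', version).select();
//   // data.length === 0  => lost the race, retry

async function updateWithRetry(read, tryUpdate, transition, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const { state, version } = await read();
    const next = transition(state);
    if (await tryUpdate(next, version)) return next; // CAS succeeded
  }
  throw new Error('optimistic lock: too many conflicting writes');
}
```

The retry bound matters in a chatbot: better to surface a "try again" than to spin forever on a hot conversation row.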

The intent router uses Claude Haiku with a structured JSON output prompt — 21 priority blocks in a Switch node. The key insight was separating "what did the user mean" (Claude) from "what should we do about it" (Switch node logic). Happy to share more detail on the prompt structure if useful.
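To make that split concrete, here's a sketch of the boundary between the two layers: the classifier returns strict JSON, and a validation step normalizes it before anything reaches the Switch node. The intent list and confidence threshold below are illustrative, not the production set of 21:

```javascript
// The classifier prompt asks Claude Haiku for strict JSON, e.g.:
//   {"intent": "password_reset", "confidence": 0.93, "entities": {...}}
// This step validates that output so the downstream Switch node only ever
// sees a known intent; anything malformed, unknown, or low-confidence is
// routed to a fallback/clarification branch instead.

const KNOWN_INTENTS = ['create_ticket', 'password_reset', 'book_room', 'small_talk']; // illustrative
const MIN_CONFIDENCE = 0.6; // illustrative threshold

function normalizeIntent(raw) {
  let parsed;
  try {
    parsed = JSON.parse(raw);
  } catch {
    return { intent: 'fallback', entities: {} }; // model didn't return JSON
  }
  const intent = KNOWN_INTENTS.includes(parsed.intent) ? parsed.intent : 'fallback';
  const confident = typeof parsed.confidence === 'number' && parsed.confidence >= MIN_CONFIDENCE;
  return {
    intent: confident ? intent : 'fallback',
    entities: parsed.entities ?? {},
  };
}
```

Keeping the "what to do" logic out of the prompt means you can reroute an intent (say, send password resets to a new sub-workflow) without touching the classifier at all.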

Facebook Messenger + Zalo is interesting — how are you handling the session state across platforms? Same Supabase pattern or something different?