This workflow solves a very common problem when building WhatsApp chatbots:
What happens when the user sends 2–5 messages very quickly, before the AI has time to answer?
Normally, each message triggers its own execution, causing multiple AI calls and multiple replies — which feels robotic and messy.
This workflow introduces a clean debounce mechanism using Redis + a timestamp strategy:
What the workflow does
- Collects all incoming WhatsApp messages for the same chat within a 10-second window
- Stores them in Redis under a single `buffer` key
- Updates a Redis `last_ts` timestamp on every message
- Waits 10 seconds
- Checks if a newer message arrived during the wait
  - If yes → this run cancels itself
  - If no → this run “wins” and sends the combined message buffer to your AI model
- After the AI replies, it clears the buffer and timestamp so the next message starts fresh
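The steps above can be sketched in Python. A plain dict stands in for Redis here, and the `buffer:`/`last_ts:` key names and helper functions are illustrative, not taken from the workflow JSON:

```python
# In-memory stand-in for Redis; the real workflow uses Redis keys per chat.
store: dict = {}

DEBOUNCE_SECONDS = 10  # the post's 10-second window (not actually slept here)

def on_message(chat_id: str, text: str, now: float) -> None:
    """Buffer an incoming message and record its arrival timestamp."""
    store.setdefault(f"buffer:{chat_id}", []).append(text)
    store[f"last_ts:{chat_id}"] = now  # Redis always holds the newest timestamp

def try_flush(chat_id: str, my_ts: float):
    """Called after the wait: flush only if no newer message arrived meanwhile."""
    if store.get(f"last_ts:{chat_id}", 0) > my_ts:
        return None  # a newer run owns the buffer; this run cancels itself
    combined = "\n".join(store.pop(f"buffer:{chat_id}", []))
    store.pop(f"last_ts:{chat_id}", None)  # clean up so the next burst starts fresh
    return combined
```

With three rapid messages at t=1, 2, 3, the runs that waited since t=1 and t=2 see a newer `last_ts` and return `None`; only the t=3 run flushes the combined buffer.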
Result
Instead of replying separately to:
Hi
Can you help me?
I want to book a session
Your AI receives:
Hi \n Can you help me? \n I want to book a session
And sends one clean reply, just like a real human would.
🔧 Tech stack
n8n (workflow automation)
Redis (buffer + timestamp)
Green API (WhatsApp inbound/outbound)
Any AI model (OpenAI GPT, Gemini, etc.)
🧠 Why it works
We use a very simple but powerful rule:
Each message gets its own timestamp (myTs)
Redis always stores the newest timestamp (lastTs)
If during the wait a newer message came in → lastTs > myTs → cancel execution
Otherwise → send the combined message buffer to the AI
This prevents duplicate replies and gives a natural conversational feel.
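To see the rule in action, here is a small timing simulation. The assumptions are mine: threads stand in for parallel n8n executions, a dict stands in for Redis, and the window is scaled down from 10 s to 0.2 s so the demo runs quickly:

```python
import threading
import time

store: dict = {}           # stand-in for Redis (hypothetical key names)
lock = threading.Lock()
replies: list[str] = []

WINDOW = 0.2               # scaled-down debounce window (10 s in the post)

def handle(chat: str, text: str) -> None:
    """One workflow execution: buffer the message, wait, then check who wins."""
    with lock:
        my_ts = time.monotonic()                      # myTs for this run
        store.setdefault(f"buffer:{chat}", []).append(text)
        store[f"last_ts:{chat}"] = my_ts              # lastTs = newest arrival
    time.sleep(WINDOW)
    with lock:
        if store.get(f"last_ts:{chat}", float("inf")) > my_ts:
            return  # lastTs > myTs: a newer message arrived, cancel this run
        replies.append("\n".join(store.pop(f"buffer:{chat}")))
        store.pop(f"last_ts:{chat}")

threads = [threading.Thread(target=handle, args=("c1", m))
           for m in ("Hi", "Can you help me?", "I want to book a session")]
for t in threads:
    t.start()
    time.sleep(0.05)  # messages arrive faster than the window closes
for t in threads:
    t.join()
```

Only the execution started by the last message survives the `lastTs > myTs` check, so `replies` ends up with exactly one combined reply.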
📎 Workflow file
Here is the full workflow JSON you can import into n8n:
Download workflow: