Hi everyone,
Can someone help me with this Wassenger setup? I can pay if necessary. I'm building this workflow and followed your YouTube video step by step, but the workflow isn't working. I did everything from A to Z and it still doesn't work.
Could someone walk me through it on Google Meet for five minutes, if possible?
Can you share the error you're getting?
Hi Erick, thanks for your reply. I'm sending a video.
There's actually no error code. At first it seems to be working, but WhatsApp never responds.
When I ask for audio or video, the Switch node doesn't route either. I did exactly what's described here, but WhatsApp won't respond.
Totally understand the confusion: Wassenger's REST API looks straightforward on paper, but the moment you wire it into an AI-driven n8n workflow, a few hidden gotchas appear.
First, make sure the Webhook node that receives the inbound WhatsApp message is running before you send the test ping from Wassenger's dashboard; otherwise the initial handshake times out and the execution you see in the YouTube demo never even starts. I like to add a simple "Ping Telegram" branch right after the Webhook so I can watch live telemetry and confirm the payload lands in n8n.
Second, if you are feeding that WhatsApp text into an Agent Executor (LangChain or OpenAI function calling), remember that Wassenger uses a two-step send model: messages/send returns a ticket first, and the actual delivery status comes back later via messages/ack. Because of that you need either:
• A Wait → HTTP Request loop that polls messages/ack until you see "delivered", or
• Better, an additional Webhook subscribed to messages.onStatus so n8n reacts asynchronously. The second pattern eliminates hanging executions and keeps token costs down because your agent doesn’t have to stay alive during polling.
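The first option boils down to a simple bounded polling loop. Here is a minimal Python sketch of that pattern with the HTTP call abstracted into a callable, so it is independent of the exact endpoint names (wait_for_delivery and fetch_status are illustrative names, not Wassenger APIs):

```python
import time

def wait_for_delivery(fetch_status, timeout=60, interval=5):
    """Poll fetch_status() until it returns 'delivered' or the timeout expires.

    fetch_status is any zero-argument callable returning the current ack
    status string -- in practice, an HTTP GET against the status endpoint
    your integration uses (endpoint name assumed, check Wassenger's docs).
    """
    deadline = time.monotonic() + timeout
    while True:
        if fetch_status() == "delivered":
            return True
        if time.monotonic() >= deadline:
            return False  # give up; let the workflow handle the failure path
        time.sleep(interval)
```

In n8n you would express the same loop as a Wait node feeding back into an HTTP Request node with an IF node checking the status, but the termination logic is the same: always bound the loop with a timeout so executions never hang forever.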
Finally, be mindful of the WhatsApp 24-hour session window. We store the wa_session_expires_at timestamp in n8n’s static data and let a small Cron workflow refresh the session automatically—no more “message failed: template required” surprises.
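The window check itself is just timestamp arithmetic. A small sketch of what that stored-expiry logic computes (function names are mine, not part of any API; the 24-hour figure is WhatsApp's standard customer-service window):

```python
from datetime import datetime, timedelta, timezone

SESSION_WINDOW = timedelta(hours=24)  # WhatsApp's customer-service window

def session_expires_at(last_inbound_message_at):
    """The window reopens from each inbound customer message."""
    return last_inbound_message_at + SESSION_WINDOW

def can_send_freeform(last_inbound_message_at, now=None):
    """True while the window is open; afterwards only template messages work."""
    now = now or datetime.now(timezone.utc)
    return now < session_expires_at(last_inbound_message_at)
```

Storing the expiry timestamp (rather than recomputing it) just means the Cron workflow can compare one stored value against the current time and decide whether to fall back to a template message.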
Quick question back to you: are you planning to let the agent retrieve context (customer order history, support tickets, etc.) before answering? If so, we’ve had luck caching that data in a Redis instance and injecting it into the prompt right after the Wassenger Webhook—happy to share details if useful!
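For reference, the caching pattern I mean is plain cache-aside: check Redis first, fall back to the backend, then cache the result with a TTL. A hedged Python sketch (get_context and fetch_from_crm are illustrative names; the cache argument is anything with get/setex like a redis.Redis client):

```python
import json

def get_context(cache, customer_id, fetch_from_crm, ttl_seconds=300):
    """Cache-aside lookup: return cached customer context, or fetch and cache it.

    cache      -- any object exposing get(key) and setex(key, ttl, value),
                  e.g. a redis.Redis client (a dict-backed stub works in tests)
    fetch_from_crm -- callable hitting your real backend (order history, etc.)
    """
    key = f"context:{customer_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit: skip the backend call
    context = fetch_from_crm(customer_id)  # cache miss: fetch the slow way
    cache.setex(key, ttl_seconds, json.dumps(context))
    return context
```

The payoff in an agent workflow is that the prompt-building step right after the Webhook stays fast and cheap, since repeat customers hit the cache instead of the CRM.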
Hi Poly_Ajans, thanks for your reply. I created a simple workflow with Wassenger and it works fine; the problem is the specific workflow I mentioned. I'm stuck somewhere in it, because normally there is no problem connecting to Wassenger.
https://drive.google.com/file/d/12E14-ChSabDOto-Dw2zx84fyAUy_ai1V/view?usp=sharing
Can we review it together? If I share my screen via Google Meet, can you help me?
This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.
