How can I wait for multiple user messages before generating a single AI reply?

Hi all,

I’m building an automation in n8n that listens for new LinkedIn messages and drafts a reply using GPT, which I then send to Slack for approval.

Right now, the flow works well, but there’s one important upgrade I’m trying to achieve:

I want the workflow to wait for a few minutes after someone messages me, collect all the messages they’ve sent in that window, and then generate one smart reply using all of that context, instead of crafting a separate response to each message.

Thanks in advance for any guidance or examples you can share! :raising_hands:
I’m happy to share my existing workflow JSON if it helps.

Other details:

  1. Self-hosted n8n (Docker)
  2. Slack + GPT used in the workflow
  3. No database connected externally — just built-in storage

Nobody has an answer for me?

So your current workflow looks like this:

webhook → reply

Then you want to modify it like this:

webhook → wait 3 minutes → gather all messages → reply, right?

A simple solution would be to store all the messages in one place, like a data table or a sheet.

But you need to know which execution is the first and which is the last.


If a user sends 3 messages, it will trigger 3 executions in your n8n.

The problem is how to stop the other executions from replying too.


My first thought :

When the first message comes → create a lock → wait 5 minutes.

When another message comes → see there is a lock → record the message in the data table and abort.

After 5 minutes, the first message's execution gathers all the messages and replies → release the lock.

Does that sound workable to you?
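To make the lock idea concrete, here's a minimal JavaScript sketch of the logic. A plain in-memory `Map` stands in for whatever store you use (data table, sheet, workflow static data); the function names and return values are made up for illustration, not actual n8n node names.

```javascript
// Stand-in store: sender -> { locked, messages }.
// In n8n this would be a data table or similar shared storage.
const store = new Map();

// Called once per incoming message (i.e. once per triggered execution).
// Returns 'wait-and-reply' for the one execution that takes the lock,
// and 'abort' for every later execution in the same window.
function onMessage(sender, text) {
  let entry = store.get(sender);
  if (!entry) {
    entry = { locked: false, messages: [] };
    store.set(sender, entry);
  }
  entry.messages.push(text); // always buffer the message first
  if (entry.locked) {
    return 'abort';          // another execution already owns the window
  }
  entry.locked = true;       // this execution takes the lock...
  return 'wait-and-reply';   // ...then waits N minutes before replying
}

// Called by the lock-holding execution after the wait finishes:
// gather everything buffered in the window and release the lock.
function gatherAndRelease(sender) {
  const entry = store.get(sender);
  const combined = entry.messages.join('\n');
  store.delete(sender);      // releases the lock and clears the buffer
  return combined;           // hand this combined text to GPT
}
```

Note that in a real workflow the store must be shared across executions (that's why a data table works and a local variable doesn't), and messages arriving during the wait still get buffered by their own aborted executions.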

I’ve come across this question before; it’s useful and worth a read: How to make my workflow aggregate multiple consecutive messages before sending to AI?

Here’s the basic architecture for buffering incoming messages:

  • Add incoming messages to a queue, for example a data table (insert row)
  • Wait for 3 minutes (wait)
  • Retrieve the queued messages (get rows)
  • Delete them from the queue (delete rows)
  • Aggregate the messages (aggregate)
  • Send them to the AI agent
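
The queue steps above can be sketched in JavaScript, with a plain array standing in for the data table. In n8n each step would be its own node; the function names here are illustrative only.

```javascript
// Stand-in for the data table: rows of { sender, text }.
const queue = [];

// Step 1: insert row — runs once per incoming message.
function insertRow(sender, text) {
  queue.push({ sender, text });
}

// Step 2 is the Wait node (e.g. 3 minutes) — omitted here.

// Steps 3-5: get this sender's rows, delete them, aggregate them.
function drainAndAggregate(sender) {
  const rows = queue.filter((r) => r.sender === sender); // get rows
  for (let i = queue.length - 1; i >= 0; i--) {          // delete rows
    if (queue[i].sender === sender) queue.splice(i, 1);
  }
  return rows.map((r) => r.text).join('\n');             // aggregate
}

// Step 6: pass the aggregated string to the AI agent node.
```

Filtering by sender matters if several people can message you in the same window, so one person's buffer doesn't absorb another's messages.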

If you share your workflow or what you’ve tried so far, I’m sure many will jump in with more helpful suggestions.