How to make sure HTTP requests are not lost when the workflow has not reached its Wait node yet

So, I am trying to make a Slack chatbot that collects availabilities from several users and, once everyone has provided valid times/dates, chooses a time slot that works for everyone.
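For the "time in common" step, a minimal sketch (assuming a hypothetical representation where each user's availability is a `{start, end}` window in minutes since midnight, which the thread does not specify):

```javascript
// Hypothetical helper: the overlap of several intervals is
// [max(starts), min(ends)], provided that range is non-empty.
function commonWindow(windows) {
  const start = Math.max(...windows.map(w => w.start));
  const end = Math.min(...windows.map(w => w.end));
  // Return null when the users' windows do not overlap at all.
  return end > start ? { start, end } : null;
}
```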
In order to use the Slack Events API (which only allows sending event data to a single URI) to receive messages in n8n, I built a "message orchestrator": when it receives an event (e.g. a user message), it looks up the channel ID in a Firestore database where I store key/value pairs of Slack channel IDs and n8n workflow webhooks, and forwards the message to the matching workflow.
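The routing step could be sketched roughly like this in an n8n Code node (hypothetical names; `routes` stands in for the Firestore channel-ID-to-webhook lookup, and in the real workflow an HTTP Request node would do the actual forwarding):

```javascript
// Sketch of the orchestrator's routing logic, under the assumptions above.
function routeEvent(event, routes) {
  // Slack event payloads carry the originating channel ID.
  const channelId = event.channel;
  const webhookUrl = routes[channelId];
  if (!webhookUrl) {
    // No workflow registered for this channel: nothing to forward to.
    return null;
  }
  // An HTTP Request node would POST `body` to `url` in the real setup.
  return { url: webhookUrl, body: event };
}
```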
The problem is, if the workflow assigned to the channel is not currently waiting for the message (because it is still processing the previous one), the message gets dropped, potentially losing important information or leaving the workflow unable to complete.
I know that the best solution would probably be a queue that receives the HTTP requests arriving before the workflow reaches its Wait node and then delivers the queued messages, but I cannot think of a way to do it that would not suffer from the same race condition I am facing right now.
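The queue idea could be sketched as a per-channel FIFO: the orchestrator enqueues every incoming event, and the channel workflow drains the queue each time it resumes, so messages that arrived while it was busy are not lost. This is an in-memory illustration only; separate n8n executions do not share memory, so a real version would need durable storage (e.g. Firestore or a message broker):

```javascript
// In-memory per-channel queue sketch, for illustration under the
// assumptions above (a real setup needs durable shared storage).
class ChannelQueue {
  constructor() { this.queues = new Map(); }

  // Called by the orchestrator for every incoming Slack event.
  enqueue(channelId, message) {
    if (!this.queues.has(channelId)) this.queues.set(channelId, []);
    this.queues.get(channelId).push(message);
  }

  // Called by the channel workflow right after it becomes ready again:
  // returns everything that arrived while it was busy, then empties it.
  drain(channelId) {
    const pending = this.queues.get(channelId) || [];
    this.queues.set(channelId, []);
    return pending;
  }
}
```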

Below is a simplified version of the relevant parts of both the main workflow and the orchestrator workflow.

Information on your n8n setup

  • n8n version: 1.25.0
  • Database (default: SQLite): SQLite
  • n8n EXECUTIONS_PROCESS setting (default: own, main): own, main
  • Running n8n via (Docker, npm, n8n cloud, desktop app): cloud
  • Operating system: Windows 11

Hi @Carrico :wave: I don’t exactly have the best answer here, but have you looked into something like RabbitMQ? You could then properly queue the messages, if I’m understanding this correctly :thinking:

Hi @EmeraldHerald, that would likely have been the solution, but I suspected that having to manually check the queue in the middle of the workflow would suffer from a similar problem (i.e. there was still a chance of not being notified of a new message). What I ended up doing was moving part of the message-processing logic into the orchestrator, to reduce as much as possible the time spent between waits for new messages and, with it, the chance of dropping them.
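One way to picture that final approach: the orchestrator validates and parses each message before forwarding, so the channel workflow only wakes up for well-formed input and gets back to its Wait node sooner. A minimal sketch, assuming a hypothetical `"HH:MM-HH:MM"` availability format (the thread does not specify the actual message format):

```javascript
// Hypothetical pre-processing done in the orchestrator: reject anything
// that isn't a well-formed availability, parse the rest into minutes.
function parseAvailability(text) {
  const match = /^(\d{2}):(\d{2})-(\d{2}):(\d{2})$/.exec(text.trim());
  if (!match) return null; // not an availability; don't forward it
  const [, h1, m1, h2, m2] = match.map(Number);
  const start = h1 * 60 + m1;
  const end = h2 * 60 + m2;
  if (end <= start) return null; // reject empty or inverted windows
  return { start, end };
}
```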
