Hi @pbdco, many thanks for the detailed description you have provided! I have managed to reproduce this in the meantime and what is happening here is this:
The RabbitMQ trigger doesn’t fetch all queued items in a single execution. Instead, it will start a single workflow execution for each item it receives, meaning you have multiple executions running in parallel.
So we now have each workflow execution using the Google Sheets node, which itself is a wrapper around the Google Sheets API. It uses this method to append data to an existing sheet. So it would appear the behaviour you are seeing here is simply how Google’s API processes (nearly) parallel requests. My guess would be that it internally just checks for the next available row and then writes the data there. If there are overlapping requests, it might use the same “available” row for each, so one request overwrites the other.
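To make the guess above concrete, here is a minimal Python sketch (not Google's actual implementation) of why a non-atomic "find the next free row, then write" append loses data when two requests overlap:

```python
# Toy model of a sheet: row number -> value.
sheet = {}

def next_free_row():
    # Non-atomic check for the next available row.
    return max(sheet.keys(), default=0) + 1

# Two overlapping requests: both read the next free row *before* either writes.
row_a = next_free_row()  # request A sees row 1
row_b = next_free_row()  # request B also sees row 1

sheet[row_a] = "item from execution A"
sheet[row_b] = "item from execution B"  # overwrites A's row

print(sheet)  # only one of the two items survives
```

Each parallel workflow execution plays the role of one of these requests, which would explain rows getting overwritten rather than appended.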
So if you have the option of using a proper database that doesn’t behave like this (such as PostgreSQL), that would certainly be preferable here.
If you want your data to end up in a Google Sheet eventually, you could also use PostgreSQL as an intermediary with an additional workflow: have one workflow write the RabbitMQ data into PostgreSQL, and another workflow write from PostgreSQL to Google Sheets at regular intervals, so the overlapping appends described above can’t occur.
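The second ("drain") workflow could work roughly like this sketch: fetch all unsynced rows in one batch, append them to the sheet in a single request, then mark them as synced. SQLite stands in for PostgreSQL here, and the table/column names and `append_rows` callback are made up for illustration:

```python
import sqlite3

# Stand-in for the PostgreSQL table the first workflow writes into.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE queue_items ("
    "id INTEGER PRIMARY KEY, payload TEXT, synced INTEGER DEFAULT 0)"
)
db.executemany(
    "INSERT INTO queue_items (payload) VALUES (?)",
    [("a",), ("b",), ("c",)],
)

def drain_to_sheet(append_rows):
    """Move all unsynced rows to the sheet in one batch append."""
    rows = db.execute(
        "SELECT id, payload FROM queue_items WHERE synced = 0 ORDER BY id"
    ).fetchall()
    if rows:
        # One append call for the whole batch -> no overlapping requests.
        append_rows([payload for _, payload in rows])
        db.executemany(
            "UPDATE queue_items SET synced = 1 WHERE id = ?",
            [(row_id,) for row_id, _ in rows],
        )
        db.commit()
    return len(rows)

sheet_rows = []  # stand-in for the Google Sheet
drained = drain_to_sheet(sheet_rows.extend)
print(drained, sheet_rows)
```

Because only this single scheduled workflow ever touches the sheet, the parallel-append problem from the RabbitMQ trigger goes away entirely.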