How to make the Kafka Trigger consume newly produced data instead of resuming from the last consumed offset

The Kafka trigger always resumes consuming from the last committed offset.
Which setting can I change so it consumes only newly produced data?

To Reproduce

  1. I set up a Kafka trigger in my workflow, but I keep it inactive so it isn't executed all the time.
  2. When I activate the workflow, there is a huge backlog of data accumulated since the last activation, which causes my n8n server to run out of memory.
  3. Currently, I work around this by manually changing the group ID every time I need to activate my workflow, but that is really painful.
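For anyone hitting the same problem, the manual workaround in step 3 can be sketched as a tiny helper. This is only illustrative, not an n8n API: the idea is to generate a fresh consumer group ID on every activation, so Kafka has no stored offset for that group and (with the trigger's "Read messages from beginning" option disabled) consumption starts at the latest offset. The prefix/timestamp naming scheme is my own assumption.

```javascript
// Illustrative workaround sketch: a fresh group id per activation means the
// broker has no committed offset, so a consumer with fromBeginning disabled
// starts at the latest offset instead of replaying the backlog.
function freshGroupId(prefix) {
  // e.g. "n8n-workflow-1718000000000" (prefix plus a millisecond timestamp)
  return `${prefix}-${Date.now()}`;
}
```

Paste the generated value into the trigger's Group ID field before re-activating the workflow.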

I’ve read this issue
I tried setting the EXECUTIONS_PROCESS=main environment variable, but n8n still hangs while handling the backlog, and other workflows are affected as well.

Expected behavior
Is there a setting that makes the Kafka trigger consume only newly produced data, NOT resume from where it last consumed?

Hi @dolphin111213, I had a quick look into this, but it doesn’t seem like that’s currently an option offered by our trigger node. The KafkaJS library does allow seeking to a specific offset in principle, so I think this should technically be possible.
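To illustrate what "seeking to a specific offset" could look like, here is a rough sketch using KafkaJS directly — this is not n8n's implementation, just what the technique might look like. It fetches each partition's latest offset via the admin client, then seeks the consumer there so only messages produced afterwards are delivered. The broker address, topic, and group ID are placeholders.

```javascript
// Sketch: start a KafkaJS consumer at the current high-water mark of every
// partition, skipping any backlog accumulated while the consumer was offline.
async function consumeOnlyNew({ brokers, topic, groupId }) {
  const { Kafka } = require('kafkajs'); // loaded lazily; `npm install kafkajs`
  const kafka = new Kafka({ brokers });

  // Ask the broker for the latest offset of every partition of the topic.
  const admin = kafka.admin();
  await admin.connect();
  const latest = await admin.fetchTopicOffsets(topic);
  await admin.disconnect();

  const consumer = kafka.consumer({ groupId });
  await consumer.connect();
  // kafkajs v2 subscribe signature; v1 used { topic } instead of { topics }.
  await consumer.subscribe({ topics: [topic] });
  await consumer.run({
    eachMessage: async ({ message }) => {
      console.log(message.value.toString());
    },
  });
  // seek() must be called after run(); it repositions each partition at the
  // high-water mark captured above, so the backlog is never fetched.
  for (const { partition, offset } of latest) {
    consumer.seek({ topic, partition, offset });
  }
}
```

The key detail is that KafkaJS requires `seek()` to be called after `run()`, and `fetchTopicOffsets()` returns the latest offset per partition as a string, which `seek()` accepts directly.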

Creating this feature request was the first step towards having this implemented eventually; it will help both our product team and the community understand how much demand there is for this.

Thanks for your support!
Really looking forward to seeing this feature in a future version. How can I subscribe to your release notes, or maybe receive a notification here, so I can try it ASAP?