The Kafka Trigger always consumes data starting from the offset it committed last time.
How can I change the settings so that it only consumes newly produced data?
To Reproduce
- I set up a Kafka Trigger in my workflow, but I keep it deactivated so it doesn't run all the time.
- When I activate the workflow, there is a huge backlog of data to consume since the last run, which causes my n8n server to run out of memory.
- Currently, I work around this by manually changing the group ID every time I need to activate the workflow, but that is painful (see the sketch below for an offset-reset alternative).
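As a possible alternative to rotating the group ID, here is a rough sketch (not an n8n feature, just the kafkajs admin API, which is the client library n8n's Kafka nodes are built on) of resetting the group's committed offsets to the latest position while the workflow is still deactivated. The broker address, topic name, and group ID are placeholders:

```ts
import { Kafka } from 'kafkajs';

const kafka = new Kafka({ brokers: ['localhost:9092'] }); // placeholder broker
const admin = kafka.admin();

async function skipBacklog() {
  await admin.connect();
  // Move the group's committed offsets to the latest position so the trigger
  // only sees messages produced after this point. The group must have no
  // active members when this runs, i.e. the workflow is still deactivated.
  await admin.resetOffsets({
    groupId: 'my-n8n-group', // same group ID as configured in the Kafka Trigger node
    topic: 'my-topic',       // placeholder topic the trigger listens on
    earliest: false,         // false => reset to latest instead of earliest
  });
  await admin.disconnect();
}

skipBacklog().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Running something like this before every activation would skip the backlog, but it's still a manual step, which is why a built-in setting would be nicer.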
I've read this issue: https://github.com/n8n-io/n8n/issues/2479
I tried setting the EXECUTIONS_PROCESS=main environment variable, but n8n still hangs while processing that backlog, and other workflows are affected as well.
Expected behavior
Is there a setting that makes the Kafka Trigger consume only newly produced data, rather than resuming from where it last consumed?
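For context on why this happens (based on my understanding of Kafka/kafkajs semantics, not the n8n source): a "start from latest" option such as `fromBeginning: false` only applies when the consumer group has no committed offsets yet; a group that has already committed offsets always resumes from them. So a trigger setting alone would probably also need to reset offsets or rotate the `groupId` to take effect. A minimal sketch of that behaviour, with placeholder broker/topic/group values (kafkajs v2 `subscribe` API assumed):

```ts
import { Kafka } from 'kafkajs';

const kafka = new Kafka({ brokers: ['localhost:9092'] }); // placeholder broker
const consumer = kafka.consumer({ groupId: 'my-n8n-group' }); // placeholder group ID

async function run() {
  await consumer.connect();
  // fromBeginning: false means "start at the latest offset", but ONLY for a
  // group with no committed offsets. A group that has already committed
  // offsets resumes from them — which is why the whole backlog gets replayed.
  await consumer.subscribe({ topics: ['my-topic'], fromBeginning: false });
  await consumer.run({
    eachMessage: async ({ message }) => {
      console.log(message.value?.toString());
    },
  });
}

run().catch(console.error);
```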