I’m running n8n self-hosted through TrueNAS, and after updating it to the latest version (2.5.1), n8n seems to have lost track of time..? The update/creation times for my workflows are correct, but the Execution Run Times show large negative numbers, even though the workflows are still running fine.
What is the error message (if any)?
N/A
Please share your workflow
N/A - seems unrelated to the workflow as it’s happening on both of my active ones.
thanks for responding! checked my TrueNAS timedatectl and it is accurate, NTP service enabled. supposedly, the TrueNAS app manages those timezone environment variables so I cannot set them myself, but it wasn’t an issue before today.. i did also check the container logs, and it prints out an accurate date/time in UTC on launch. very confusing! let me know if you have any other suggestions, i’d really appreciate it
After looking at the execution list more closely, this doesn’t appear to be a workflow or performance issue.
The key detail is that the negative runtimes are very consistent at about -18,000 seconds, which is almost exactly 5 hours. There’s also a clear cutover point in time: executions earlier in the day have normal runtimes (3–5 seconds), and executions after that point suddenly show large negative values.
That pattern strongly suggests a clock mismatch, not drifting time or slow executions. In practice, this means:
startedAt is being written using one clock source
stoppedAt / “now” is being evaluated using another
One of those clocks is ~5 hours offset from the other (UTC vs local time)
When n8n calculates runtime as stoppedAt - startedAt, the execution appears to have “started in the future,” which results in a large negative duration.
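To make the arithmetic concrete, here is a minimal sketch of how a 5-hour clock mismatch produces the observed values (the 18,000-second offset is taken from the pattern above; the commands are illustrative, not how n8n itself computes runtimes):

```shell
# Illustration only: startedAt gets written by a clock running 5 hours
# (18000 s) ahead of the clock used for stoppedAt (e.g. local time vs UTC).
stopped_at=$(date -u +%s)             # "now" on the correct clock
started_at=$(( stopped_at + 18000 ))  # recorded on the offset clock
runtime=$(( stopped_at - started_at ))
echo "runtime: ${runtime}s"           # prints -18000: the negative duration
```

The execution looks like it "started in the future," so the subtraction comes out negative by exactly the clock offset.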
This typically happens after an update or redeploy when:
The app/container is recreated with a different timezone or clock source
The execution worker and main process inherit different time settings
Or the database session timezone differs from the runtime process
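A quick way to see how two processes that inherit different TZ values disagree about the same instant (a sketch using GNU date; the epoch value and the America/New_York zone are arbitrary examples, not pulled from this setup):

```shell
# The same epoch second, rendered by two processes with different TZ settings:
epoch=1700000000
TZ=UTC date -d "@$epoch" "+%H:%M"               # 22:13 (UTC)
TZ=America/New_York date -d "@$epoch" "+%H:%M"  # 17:13 (UTC-5 in November)
# A timestamp written under one setting and interpreted under the other
# is off by exactly 5 hours.
```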
It also explains why:
The timestamps themselves look correct in the UI
Workflows still run successfully
The issue starts abruptly rather than gradually
At this point, I’d suspect a container/app lifecycle change rather than NTP or system time (especially since host time and logs look correct).
Things that usually confirm or resolve it:
Comparing epoch time inside the n8n container vs Postgres (date -u +%s vs SELECT EXTRACT(EPOCH FROM now()))
Forcing EXECUTIONS_PROCESS=main to eliminate cross-process timing differences
Fully redeploying the n8n app (not just restarting) so everything inherits the same clock source
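The first check above can be sketched roughly like this (the container names, database user, and database name are assumptions for a typical TrueNAS/Docker setup, and the two numeric readings are invented purely to show the comparison):

```shell
# Read the clock inside the n8n container and from Postgres, then compare.
# Assumed commands (adjust container/user/db names to your deployment):
#   docker exec n8n date -u +%s
#   docker exec postgres psql -U n8n -d n8n -tAc \
#     "SELECT EXTRACT(EPOCH FROM now())::bigint"

n8n_epoch=1700000000   # invented example reading from the n8n container
pg_epoch=1700018000    # invented example reading from Postgres
offset=$(( pg_epoch - n8n_epoch ))
echo "clock offset: ${offset}s"   # 18000 s = the 5-hour gap in the runtimes
```

If the two readings differ by roughly 18,000 seconds, that confirms the two clock sources disagree.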
Older executions with negative runtimes won’t correct themselves, but new executions should return to normal once the clock sources are aligned.
Hopefully this helps narrow it down — the consistent 5-hour offset is the big clue here.
I’m also having this identical issue - operating on TrueNAS, but I verified all instances (TrueNAS, n8n container, postgres, redis) timezone variables are set identically & output the same times/time zones.
This happened after a recent n8n update. Anyone find a confirmed solution to this? I’ve tried restarting the app, redeploying, etc. I haven’t restarted the TrueNAS box yet, though.
I can confirm the epoch times in the n8n & postgres containers are DIFFERENT. I already have EXECUTIONS_PROCESS=main set.
How would one “fully redeploy” the app? Would this involve deleting it & reinstalling?
hey paddyb - glad to hear I’m not the only one, lol. i actually ended up resolving the issue by simply rolling back to the version i was running before updating. after checking the n8n GitHub, i realized that the version i updated to (2.5.1) was still in beta. i don’t know if they’re aware of this issue in the newer version, but i did the rollback in hopes that next time i choose to update it won’t happen again. rolling back versions on TrueNAS is quite easy, but if you do, keep in mind there will still be a little bit of weird behavior temporarily. after i rolled back, recent workflow runs wouldn’t show up at the top of the list; they appeared about 5 hours earlier in the list because of the negative values. this resolved itself after a few hours, which i assume is roughly the number of hours my timezone is offset from UTC.
i didn’t try a full redeploy, but i assume they mean fully tearing down the docker container and recreating it. that assumes you’re using the “host path” volume mounts though, else your data/automations may be deleted. FWIW though, the reply/profile from Michael looks very AI generated to me, and may not actually be helpful.
Thanks for the reply. I agree - the reply seemed AI to me, too!
I don’t want to get into totally tearing down the app & rebuilding - that’s a lot of headache, and I have some mission-critical workflows running. I’m on v2.6.2, so I’m guessing it’s another bug introduced in a latest-build type of scenario. I’ll try rolling it back a couple of versions, then wait for a newer release that fixes the bug.
coming back to this post as i’ve just seen a PR on the n8n GitHub that appears to acknowledge and offer a solution to the problem we ran into. seems like it’s Postgres specific, but it was a breaking change included in 2.5.0. here’s the link if you’d like to follow along with any (hopefully quick) progress:
I configured the timezone in the TrueNAS settings (+7 timezone). I checked it in a workflow with timenow() and the times display correctly, so the system has properly picked up the timezone setting. The only error is in the UI of the executions history. I think the developers didn’t account for the timezone configured in the settings, which causes it to display incorrectly!