Docker image suggestion: Add healthcheck

The idea is:

Add a healthcheck to the n8n Docker images by default, using n8n’s /healthz status endpoint. (The endpoint must be enabled for this to work; since it is not enabled by default, that may be a point of contention here. Or maybe it’s not a big deal.)

My use case:

To allow out-of-the-box Docker health monitoring of n8n containers. With a healthcheck enabled, Docker can perform its own management steps if a container becomes unhealthy, the user can see the container’s health in docker ps output, and so on.

I think it would be beneficial to add this because:

It may help people catch configuration mishaps that only show up at runtime, or resource issues and the like that cause the health endpoint to fail.

Any resources to support this?

Are you willing to work on this?

Along with setting QUEUE_HEALTH_CHECK_ACTIVE=true to enable the /healthz endpoint, I’m using the following, which works well as the healthcheck test in my docker-compose setup:

      test: ["CMD", "sh", "-c", "(wget -q -T 5 -O - http://localhost:5678/healthz 2>/dev/null | grep -qF '{\"status\":\"ok\"}') || exit 1"]
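For context, here’s roughly how that fits into a full compose service definition (the image tag and port mapping here are illustrative; adjust for your own setup):

```yaml
services:
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
    environment:
      # /healthz is not enabled by default; this turns it on
      - QUEUE_HEALTH_CHECK_ACTIVE=true
    healthcheck:
      test: ["CMD", "sh", "-c", "(wget -q -T 5 -O - http://localhost:5678/healthz 2>/dev/null | grep -qF '{\"status\":\"ok\"}') || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 1m
```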

This should be easily adaptable to the format required for use in the image/Dockerfile, perhaps the following:

HEALTHCHECK CMD (wget -q -T 5 -O - http://localhost:5678/healthz 2>/dev/null | grep -qF '{"status":"ok"}') || exit 1
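If baked into the image, the instruction could also carry explicit timing options (the values here are illustrative, and the image would still need QUEUE_HEALTH_CHECK_ACTIVE=true set for /healthz to respond):

```dockerfile
HEALTHCHECK --interval=30s --timeout=10s --start-period=60s --retries=3 \
  CMD (wget -q -T 5 -O - http://localhost:5678/healthz 2>/dev/null | grep -qF '{"status":"ok"}') || exit 1
```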

This added to your docker compose file should just work:

  healthcheck:
    test: wget --spider http://localhost:5678/healthz > /dev/null 2>&1 || exit 1
    interval: 30s
    timeout: 10s
    retries: 3
    start_period: 1m00s

Thanks, yes it does work; as you can see above, I’m basically already doing this in my own Docker Compose configuration.

My proposal for a new feature was for this to be built into the Dockerfile / Docker image directly, so all users of the Docker image can benefit from healthchecks automatically.

:face_with_open_eyes_and_hand_over_mouth: so sorry, it seems I skimmed over your post :frowning:

:rofl: No worries :slight_smile:

Don’t forget to vote on your own issue! I’d also like to see this!

For reference, my healthcheck:

  test: ["CMD-SHELL", "/usr/bin/wget --server-response --proxy off --no-verbose --tries=1 --timeout=3 -O /dev/null 2>&1 | grep -q 'HTTP/1.1 200 OK'"]
  interval: 20s
  retries: 3

Talking about healthchecks, mine has started failing when trying to use localhost. I exchanged it for a different address and it works again. This looks weird, any ideas?

Tested from inside the container:

~ $ wget --spider http://localhost:5678/healthz
Connecting to localhost:5678 ([::1]:5678)
wget: can't connect to remote host: Connection refused
~ $ echo $?
~ $
~ $
~ $ wget --spider
Connecting to (
remote file exists
~ $ echo $?

Oh, I also tested your healthcheck and weirdly enough it has an exit code of 1? What am I overlooking here?

~ $ /usr/bin/wget --server-response --proxy off --no-verbose --tries=1 --timeout=3 | grep -q 'HTTP/1.1 200 OK'
Connecting to (
  HTTP/1.1 200 OK
  Content-Type: application/json; charset=utf-8
  Content-Length: 15
  ETag: W/"f-VaSQ4oDUiZblZNAEkkN+sX+q3Sg"
  Date: Sat, 23 Mar 2024 11:44:00 GMT
  Connection: close

saving to 'healthz'
healthz              100% |*************************************************************************************************************|    15  0:00:00 ETA
'healthz' saved
~ $ echo $?
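Answering my own question after some digging: --server-response prints the HTTP headers on stderr, and in the manual run above the 2>&1 from the compose line was dropped, so the pipe to grep carried nothing to match. A quick sketch of the difference, with printf standing in for wget:

```shell
# Simulate wget's behavior: the response body goes to stdout, while
# --server-response prints the HTTP headers on stderr.
emit() {
  printf '{"status":"ok"}\n'            # body -> stdout
  printf 'HTTP/1.1 200 OK\n' >&2        # headers -> stderr
}

# Without 2>&1, the pipe carries only stdout, so grep never sees the
# status line and exits 1:
if emit | grep -q 'HTTP/1.1 200 OK'; then echo found; else echo "not found"; fi
# -> not found

# With 2>&1, stderr is merged into the pipe before grep reads it:
if emit 2>&1 | grep -q 'HTTP/1.1 200 OK'; then echo found; else echo "not found"; fi
# -> found
```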

I had this happen as well, including with use of the n8n API from within n8n itself.

A recent Docker upgrade, Engine v26.0, changed how IPv6 resolution of hostnames (including localhost) is handled within containers, and I believe that is the cause of this - are you using Docker v26?

I’m not sure how to get n8n’s Docker image to listen on both IPv4 and IPv6 inside the container, but I think doing so would fix this.
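In the meantime, one workaround is to pin the healthcheck to the IPv4 loopback address so hostname resolution never comes into play (a sketch; it assumes the default port 5678 and QUEUE_HEALTH_CHECK_ACTIVE=true):

```yaml
healthcheck:
  # 127.0.0.1 skips name resolution entirely, so the check is unaffected
  # by whether the container resolves localhost to ::1 or 127.0.0.1
  test: ["CMD", "sh", "-c", "wget -q -T 5 -O - http://127.0.0.1:5678/healthz 2>/dev/null | grep -qF '{\"status\":\"ok\"}' || exit 1"]
```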