Airtable Trigger retriggers after updating N8N

Describe the issue/error/question

I have n8n running on my own server. After updating n8n from 0.150.0 to 0.159.0, my active workflow with an Airtable Trigger reran for all table entries. The trigger normally fires on the “created” field after a new row has been created. That was unfortunate, as it resent emails to people. Was that to be expected? Should I have deactivated all “active” workflows with Airtable Triggers first? I guess it would then have retriggered for everybody once I reactivated the workflow.

For now I have added a filter option in my triggered subworkflow. That should filter out everybody who already has a tag indicating interaction.
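
Roughly, the check asks Airtable only for records that do not yet carry the interaction tag. Here it is sketched as a direct REST call with curl rather than the actual node inside my subworkflow; the base ID, table name (Leads) and field name (Contacted) are placeholders, not my real setup:

# Fetch only records without the "Contacted" tag via the Airtable REST API.
# NOT({Contacted}) assumes a checkbox field; for a text tag something like
# {Contacted} = '' would be the equivalent formula.
curl -s -G "https://api.airtable.com/v0/appXXXXXXXXXXXXXX/Leads" \
  -H "Authorization: Bearer $AIRTABLE_API_KEY" \
  --data-urlencode "filterByFormula=NOT({Contacted})"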

What is the error message (if any)?

Please share the workflow

Share the output returned by the last node

Information on your n8n setup

  • n8n version: 0.159.0
  • Database you’re using (default: SQLite): SQLite
  • Running n8n with the execution process [own(default), main]: own
  • Running n8n via [Docker, npm, n8n.cloud, desktop app]: Docker

Hi @leprodude, I am sorry to hear you’ve encountered this behaviour. This should of course not have happened.

Could you share some additional details on how you have performed the upgrade? Did you re-create the database as part of the process?

Probably!? I have bumped the version in my Dockerfile:

FROM n8nio/n8n:0.159.0

# Python 3 and pip from the Alpine edge repositories
RUN apk update && apk add --update-cache python3 py3-pip \
        --repository https://alpine.global.ssl.fastly.net/alpine/edge/community \
        --repository https://alpine.global.ssl.fastly.net/alpine/edge/main \
        --repository https://dl-3.alpinelinux.org/alpine/edge/testing
RUN python3 -m pip install --upgrade pip setuptools wheel

# Build tools plus a pinned Chromium/chromedriver pair for Selenium
RUN apk add --no-cache bash curl libffi-dev gcc python3-dev \
        chromium=81.0.4044.113-r0 \
        chromium-chromedriver=81.0.4044.113-r0
RUN pip3 install selenium==3.141.0 selenium_stealth==1.0.6 PyPDF2==1.26.0

Then, I have updated my docker tag:

docker images # get id
docker tag [idhere] n8n-python:0.159.0

Then, I have built the image from the Dockerfile with:
docker build -t n8n-python:0.159.0 .
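
To double-check that the rebuilt image really contains the new version, I can override the entrypoint and ask the bundled CLI directly (assuming the n8n binary is on the PATH of the base image and supports --version, which I believe it does):

# Print the n8n version baked into the freshly built image,
# bypassing the image's default start command.
docker run --rm --entrypoint n8n n8n-python:0.159.0 --version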

Then, I have bumped the number in my docker-compose.yml:

n8n:
    container_name: n8n
    image: n8n-python:0.159.0
    ports:
      - "127.0.0.1:5678:5678"
    labels:
      - traefik.enable=true
      - traefik.http.routers.n8n.rule=Host(`${SUBDOMAIN}.${DOMAIN_NAME}`)
      - traefik.http.routers.n8n.tls=true
      - traefik.http.routers.n8n.entrypoints=websecure
      - traefik.http.routers.n8n.tls.certresolver=mytlschallenge
      - traefik.http.middlewares.n8n.headers.SSLRedirect=true
      - traefik.http.middlewares.n8n.headers.STSSeconds=315360000
      - traefik.http.middlewares.n8n.headers.browserXSSFilter=true
      - traefik.http.middlewares.n8n.headers.contentTypeNosniff=true
      - traefik.http.middlewares.n8n.headers.forceSTSHeader=true
      - traefik.http.middlewares.n8n.headers.SSLHost=${DOMAIN_NAME}
      - traefik.http.middlewares.n8n.headers.STSIncludeSubdomains=true
      - traefik.http.middlewares.n8n.headers.STSPreload=true
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER
      - N8N_BASIC_AUTH_PASSWORD
      - N8N_HOST=${SUBDOMAIN}.${DOMAIN_NAME}
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - NODE_ENV=production
      - WEBHOOK_TUNNEL_URL=https://${SUBDOMAIN}.${DOMAIN_NAME}/
      - VUE_APP_URL_BASE_API=https://${SUBDOMAIN}.${DOMAIN_NAME}/
      - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ${DATA_FOLDER}:/root/.n8n
      - ../python:/root/python
      - ../selenium:/root/selenium
    restart: always
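
As a sanity check after editing, I can let docker-compose resolve and validate the file, which shows whether the new image tag and the volume mounts come out as expected once the .env substitution is applied (this is just docker-compose's built-in validation, nothing n8n-specific):

# Validate docker-compose.yml and print it with ${...} variables substituted,
# to confirm image: n8n-python:0.159.0 and the /root/.n8n bind mount.
docker-compose config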

And then I updated the container with:

# Stop the currently running container
docker-compose stop n8n
# Remove it so that docker-compose has to recreate it and does not reuse the old one
docker-compose rm n8n
# Start n8n up again
docker-compose up -d

Maybe there’s a better way to update? I’m not sure whether this recreates the database, since it should persist through the volume mount.
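
In case it helps, this is roughly how I would do the next upgrade, taking a copy of the SQLite file first. This assumes the default setup where the database is stored as database.sqlite inside the mounted .n8n folder:

docker-compose stop n8n
# /root/.n8n/ is DATA_FOLDER from my .env, i.e. the host side of the bind mount;
# database.sqlite is n8n's default SQLite file name.
cp /root/.n8n/database.sqlite /root/.n8n/database.sqlite.bak-$(date +%F)
docker-compose rm n8n
docker-compose up -d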

I’ve noticed a similar thing when simply stopping the n8n container. I needed to delete acme.json because I had some issues with Let’s Encrypt, and after restarting the container my active workflow with the Airtable Trigger received all 7 rows of data, even though their created times were a couple of days old.

Is this something I need to address myself, and if so, how? It doesn’t seem like intended behaviour to me…

What’s the value of your DATA_FOLDER environment variable? Does it survive the container recreation here?

DATA_FOLDER=/root/.n8n/ is set in my .env file, and in my docker-compose.yml I mount it under volumes as - ${DATA_FOLDER}:/root/.n8n

Are you asking whether the environment variable stays the same or whether it gets reloaded? I’m not sure how to answer that; the folder itself persists, though, if that’s what you mean…
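
If it helps, this is how I would check whether the same database file survives the recreation (GNU stat on the host; an unchanged inode means the existing file was reused rather than created fresh):

# Before stopping/recreating the container:
stat -c 'inode=%i size=%s modified=%y' /root/.n8n/database.sqlite
# ...docker-compose stop n8n && docker-compose rm n8n && docker-compose up -d...
# Afterwards: the same inode means the existing database.sqlite was reused.
stat -c 'inode=%i size=%s modified=%y' /root/.n8n/database.sqlite
# The bind mount itself can also be confirmed on the running container:
docker inspect n8n --format '{{ json .Mounts }}'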

Something I’ve noticed is that the active workflow that retriggered with all previously added rows looks like this (Node “Kunde” only provides data for the Airtable Trigger Node):

But I have another active workflow with an Airtable Trigger node that hasn’t refired for all rows. That looks like this:

I don’t know why it would make a difference, but I thought I’d add it here anyway…

I’m facing a similar issue, where the Airtable Trigger (only in the most recently added workflow that uses an Airtable Trigger) fires and processes already-processed data after an n8n restart.

My n8n setup: Docker, Postgres, main execution process.