How to Use n8n Execution Logs to Find Out Why a Workflow Failed

Part 7 of the n8n Workflow Testing Series — this article covers what to do after a workflow fails in production. The earlier articles taught you how to test before going live. This one teaches you how to investigate when something breaks anyway.

Screenshot note: Some images are “n8n-inspired” mockups, not exact screen captures of your n8n environment, but they should still point you in the right direction. Hint, hint, n8n Design Team… I’m just saying, if the UI ever gets this cinematic, I will not complain.


Stop Guessing. Start Looking.

Something broke. A customer did not get their email. A record did not appear in the CRM. A form submission disappeared. You are not sure what happened.

The tempting response is to start guessing. Maybe the API was down. Maybe the data was wrong. Maybe you changed something last week that broke it. So you start checking things at random — the API status page, the CRM, your email sending tool — hoping to stumble onto the answer.

This is slow, frustrating, and unreliable. And almost always unnecessary.

n8n keeps a record of every time a workflow runs. It saves what data came in, what each node did with it, what went out, and what error occurred if something stopped. That record is called the execution log. It is the first place you should look any time a workflow does not behave the way you expected.

The execution log does not require guessing. It shows you exactly what happened.


What Execution Logs Are

Every time an n8n workflow runs — whether it succeeds or fails — n8n saves a snapshot of that run. This snapshot is called an execution.

Inside an execution, you can see:

  • Which nodes ran and in what order
  • What data entered each node
  • What data left each node
  • Where the workflow stopped if something went wrong
  • What the error message said
  • When the workflow started and how long it ran

Think of it like a flight data recorder. You do not need it while everything is going fine. But when something goes wrong, it tells you exactly what happened, step by step, from start to finish.
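You will usually read executions in the n8n UI, but you can also fetch them programmatically. Here is a minimal sketch, assuming your instance has the public REST API enabled and an API key set up; endpoint paths and field names can differ between n8n versions, so treat this as a starting point rather than a reference:

// Fetch one execution, including its per-node data, from n8n's public API.
// N8N_BASE_URL and N8N_API_KEY are assumed environment variables.
const executionId = '52109';

const response = await fetch(
  `${process.env.N8N_BASE_URL}/api/v1/executions/${executionId}?includeData=true`,
  { headers: { 'X-N8N-API-KEY': process.env.N8N_API_KEY } }
);
const execution = await response.json();

console.log(execution.status);                          // e.g. "error" or "success"
console.log(execution.startedAt, execution.stoppedAt);  // when it ran, when it stopped
// Per-node results live under resultData.runData, keyed by node name.
console.log(Object.keys(execution.data.resultData.runData));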


Why Execution Logs Matter

Without logs, debugging a failed workflow means guessing. With logs, it means reading.

Guessing takes time and often leads you in the wrong direction. You might spend an hour checking an API that had nothing to do with the failure, while the real cause — a blank field three nodes before the error — was sitting in the log the whole time.

Logs also help you do three things that guessing cannot:

Reproduce the failure. The log shows you the exact data that caused the problem. You can take that data, pin it in your staging workflow, and reproduce the failure every time. That makes it much easier to test your fix.

Explain what happened. When a client or teammate asks why something broke, a screenshot of the execution log is worth more than any explanation. It shows exactly what the workflow received and where it stopped.

Prevent the same failure. Once you understand what caused the failure, you can turn it into a test case (covered later in this article), so your testing process catches it next time before it reaches production.


How to Find the Execution Logs

In n8n, your execution history is usually accessible from the main navigation. Look for an Executions section either in the left sidebar or within the workflow itself.

Each execution in the list shows:

  • The workflow name
  • The date and time it ran
  • Whether it succeeded or failed
  • How long it took

To inspect a specific execution, click on it. This opens the full execution view — a read-only version of the workflow showing what happened at each node.

Note: The exact location and layout of the Executions section varies between n8n versions, and between n8n Cloud and self-hosted instances, so the navigation path may look slightly different in your instance. Execution history retention also depends on your n8n plan and configuration; older executions may not be available indefinitely.


Reading a Failed Execution

When you open a failed execution, the workflow will show you which nodes ran, which node stopped, and what the error was.

Finding Where It Failed

The node where the workflow stopped will usually be highlighted differently from the nodes that ran successfully — in many versions of n8n, this appears as a red color or a warning icon on the node.

This highlighted node is your starting point — but it is not necessarily where the problem began. The cause of the failure is often one or two nodes earlier. A missing field, a bad value, or an unexpected format may have entered the workflow several steps before the point where things stopped.

Reading Node Input and Output

Click on any node in the execution to see what data went in and what data came out. This is the most useful part of the execution log.

For each node, you can see:

  • Input data — what the node received from the previous node
  • Output data — what the node passed to the next node
  • Error message — if the node failed, what the error said

Work backwards from the failed node. To do this, click on the node that is immediately before the failed one in the workflow. Check its output tab — that is the data it sent forward. Then click the node before that and check its output again. Keep going until you find the moment where the data looked wrong. That is where the real problem started.
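To make that concrete, here is a hypothetical walk backwards where an upstream API quietly changed its date format (all field names invented for illustration):

// Output of the node two steps before the failure: the API now returns a
// day-first date instead of the ISO format the workflow was built against.
{ "json": { "orderId": "A-1001", "shipDate": "18/03/2024" } }

// Output of the node immediately before the failure: the mapping node
// passed the value through unchanged.
{ "json": { "order": "A-1001", "shipDate": "18/03/2024" } }

// The failing node expected "2024-03-18", so the root cause is the upstream
// API change, not the node that finally threw the error.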

Cross-reference: “Pinning” data is covered in Article 2 of this series. In short, it means saving a node’s output so you can replay it as test input without re-triggering the original event.

Reading the Error Message

The error message is the direct explanation of why the node stopped. Read it carefully.

Some error messages are very clear: “Cannot send email — recipient address is blank.” Others are more technical: “TypeError: Cannot read properties of undefined.” Technical errors are harder to interpret, but they still point you in the right direction. The node type, the field referenced in the error, and the step in the workflow where it happened will usually tell you enough to diagnose the cause.
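To illustrate, that TypeError usually means the code reached for a property on a value that does not exist. A minimal sketch with hypothetical field names:

// The incoming item has no "customer" object at all.
const item = { json: { name: 'Ada' } };

// item.json.customer.email → TypeError: Cannot read properties of undefined
const email = item.json.customer?.email ?? '';  // optional chaining is safe
console.log(email === '' ? 'email missing' : email);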

Write the error message down. Do not rely on memory. You will need it when you document the failure, when you write the fix, and when you create a test case based on what happened.

Reading the Timestamps

The timestamps in an execution tell you when each node ran. In most cases this is not critical — but there are situations where it matters.

If a workflow times out, the timestamps show you how long each step took. If you are troubleshooting a rate limit issue, they help you see how many requests went out in a short window. And if you are trying to connect a failure to something that happened outside n8n — an API going down, a new deployment, a spike in sign-ups — the timestamp tells you exactly when to look.
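If you have pulled an execution through the API sketch shown earlier, a rough way to list per-node timing looks like this; the runData field names reflect recent n8n versions and may differ in yours:

// "execution" is the object fetched in the earlier API sketch.
const runData = execution.data.resultData.runData;
for (const [nodeName, runs] of Object.entries(runData)) {
  for (const run of runs) {
    // startTime is a millisecond timestamp; executionTime is in ms.
    console.log(nodeName, new Date(run.startTime).toISOString(), `${run.executionTime} ms`);
  }
}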


A Step-by-Step Debugging Process

When a workflow fails, follow these steps rather than guessing:

Step 1: Open the failed execution. Do not touch the production workflow yet. Open the execution log first.

Step 2: Find the node that failed. Note its name and the error message. Write both down.

Step 3: Check that node’s input. Click the failed node and look at what data it received. Does the data look right? Are the expected fields present? Are the values in the right format?

Step 4: Work backwards. If the input looks wrong, go to the node before it and check its output. Keep moving backwards until you find where the data changed or went missing.

Step 5: Identify the root cause. Is it a missing field? A format problem? An API response that changed? A webhook payload that arrived differently than expected? Write down your conclusion.

Step 6: Reproduce it in staging. Take the data from the failed execution — the actual input that caused the problem — and use it as pinned data in your staging workflow (pinned data is covered in Article 2). Open the staging workflow, paste the failing input into the trigger node’s pinned data editor, and run it. Confirm that you can reproduce the failure. This proves you have identified the right cause before you start fixing anything.
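For example, the pinned payload for Step 6 might be a single JSON item like this; the field names are hypothetical, and the values should be copied from the failed execution:

[
  {
    "name": "Ada Lovelace",
    "email": "",
    "source": "contact-form"
  }
]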

Step 7: Fix and retest. Make the fix in staging. Run all your test cases. Confirm the fix works and does not break anything else. Then move the fix to production.

Step 8: Create a test case from the failure. The data that caused this failure is now a test case. Add it to your test case table from Article 3 so it is tested every time going forward.


A Real Example: The Blank Email Field

Let’s say your lead onboarding workflow fails overnight. You get a Slack alert from the error workflow you set up in Article 6. The alert says:

Workflow: PRODUCTION - Lead Onboarding
Failed node: Send Welcome Email
Error: Cannot send message — recipient address is missing
Time: 2024-03-18 02:14:37
Execution ID: 52109

You open execution 52109 in n8n.

The first thing you see is that the workflow ran through the webhook node, the data mapping node, and the CRM lookup — all green. Then it stopped at the Send Welcome Email node — red.

You click on the Send Welcome Email node and look at its input. The email field is there, but its value is blank: "".

You work backwards. You click on the data mapping node and check its output. The email field is blank there, too. You go back one more step to the webhook node and check its output. The webhook received a payload where the email field was blank — it came in that way from the form.

Now you know the root cause: the form allowed a submission without an email address, and the workflow did not catch it before trying to send the email.

The fix: add a validation check (from Article 4) near the beginning of the workflow that stops processing if the email field is blank, and routes the failed record to a review log.
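As a sketch, the IF node's condition could be an n8n expression like the one below; the field name comes from this example, so adjust it to your payload. The true branch continues to Send Welcome Email, and the false branch routes to the review log.

{{ $json.email !== undefined && $json.email.trim() !== "" }}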

You pin the blank-email payload in staging, reproduce the failure, add the validation node, retest, and confirm the fix works. Then you add “blank email field” to your test case table with the expected result: “workflow stops and logs the issue.”

The same failure will not reach production again.


Turning a Failure Into a Test Case

Every production failure is a test case you did not know you needed.

When something breaks in production, the data that caused it is real. It is more useful than any made-up test case because it represents an actual situation your workflow will face again.

Before you fix the failure and move on, capture the failing input. Write it down or copy it from the execution log. Then:

  1. Add it to your test case table as a new row
  2. Write the scenario name — for example, “Form submission with blank email”
  3. Write the expected result — “Workflow stops and logs the issue, no email attempted”
  4. Pin the data in staging and confirm the fix handles it correctly
  5. Mark the test case as Pass once the fix is verified
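A filled-in example, using the blank-email failure from earlier in this article:

  Test Case ID: TC-07
  Scenario: Form submission with blank email
  Input / Condition: Webhook payload with the email field set to ""
  Expected Result: Workflow stops and logs the issue, no email attempted
  Notes: From production execution 52109 (2024-03-18)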

From this point on, every time you test the workflow, this case is included. The same failure cannot sneak back in without being caught.


Common Mistakes Beginners Make

Guessing instead of checking logs. This is the most common debugging mistake and the easiest to fix. When something breaks, open the execution log before doing anything else. The answer is almost always there.

Only reading the final error message. The error message tells you where the workflow stopped, but not always why. The real cause is often upstream — in the input data that reached the failing node, or in an output from an earlier node that quietly passed the wrong value along.

Ignoring earlier nodes. When a node fails, beginners tend to focus entirely on that node. Click backwards. Check the outputs of the nodes before it. The problem usually started before the error appeared.

Not saving evidence. Screenshots of the execution log, the error message, the input data, and the node where the failure happened are valuable. Save them before you start fixing anything. If you close the execution, you may not be able to recreate that exact view, especially if execution history is limited by your plan.

Not turning the failure into a test case. You fix the problem. You move on. Three months later, a slightly different version of the same input causes the same failure again. If you had turned it into a test case, the regression test would have caught it. Every production failure should become a permanent test case.

Deleting or losing execution history too soon. n8n’s execution history is your investigation tool. Depending on your plan and your configuration, older executions may be deleted automatically after a certain period. If you need to reference an execution later — for a client conversation, a post-mortem, or a recurring bug — you may find it is gone. Save key execution details in your documentation log while you still have access.


Document Your Results

Every time you investigate a failed execution, write it up. This does not have to be long. But it should be complete enough that someone else — or future you — can understand what happened, what caused it, and what was done to fix it.

Here is a template you can use:


EXECUTION FAILURE LOG

Workflow name: [Your Workflow Name]
Execution ID: [The execution ID from n8n]
Date of failure: [Date and time]
Discovered by: [Alert / Client report / Manual check / Other]
Investigated by: [Your name]
Date investigated: [Date]

---

Failed node: [Name of the node where the workflow stopped]
Error message: [Exact text of the error message]
Timestamp of failure: [When the workflow ran]

Root cause:
[Describe what actually caused the failure in plain English. 
For example: "The form allowed submission without an email 
address. The workflow did not validate the email field 
before attempting to send the welcome email."]

Data that triggered the failure:
[Describe the input — do not paste sensitive personal data, 
API keys, or private information here. Describe the relevant 
fields and what was wrong with their values.]

Fix applied:
[Describe what you changed — for example: "Added IF node 
after webhook to check that email field is not blank. 
Routes failed records to review log."]

Fix tested in staging? YES / NO
Fix verified in production? YES / NO

New test case added to test case table? YES / NO
Test case ID: [e.g. TC-07]

---

Notes:
[Anything else worth recording — timeline, who was 
affected, client communication, follow-up items]

Keep this log alongside your staging test logs, test case table, and error workflow documentation. Together they build a complete history of how your workflow has behaved — and how it has improved over time.


Use AI to Help

AI tools can help you interpret error messages and figure out where to look next. This is especially useful when the error message is technical and hard to understand.

Here is a prompt you can use:


I have an n8n workflow that failed. Here are the safe 
details from the execution log:

Workflow description:
[Describe what your workflow does in plain English]

Failed node: [Node name and node type — e.g. "Send Email — Gmail node"]

Error message: [Paste the exact error message text here]

What the failed node's input data looked like:
[Describe the relevant fields and values — use placeholder 
names like "[CUSTOMER_NAME]" or "[ORDER_ID]" instead of 
real personal data]

What I expected to happen at that node:
[Describe what you thought the node would do]

Please:
1. Explain what the error message most likely means 
   in plain English.
2. Suggest two or three possible causes of this failure.
3. Tell me what to look at in the nodes before the 
   failed node to trace the root cause.
4. Suggest what to check or test to confirm the fix works.

This prompt gives the AI enough context to be useful without sharing anything sensitive. You describe the data rather than pasting it, and you use placeholder names instead of real customer details.

:warning: Privacy Warning: What Not to Paste Into AI Tools

When using AI to help debug a workflow failure, be careful about what you share. It is easy to copy the full contents of an execution output and paste it into a chat window without thinking about what is in it.

Do not paste the following into any AI tool:

  • Real customer names, emails, phone numbers, or addresses
  • Order IDs or account numbers that could identify a real person
  • API keys, access tokens, or passwords — even partial ones
  • Authentication headers or webhook secrets
  • Any data that is covered by a privacy policy or data agreement

Instead, describe the data in general terms. Say “the email field was blank” rather than pasting the actual payload. Say “the customer ID field contained an unexpected format” rather than sharing a real ID. Use placeholder names like [CUSTOMER_NAME] or [ORDER_AMOUNT] when you need to illustrate the structure.

The AI does not need real data to help you interpret an error message or find a root cause. A clear description is enough.
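If you want to share the shape of a payload rather than describe it in prose, one option is to redact it first. A minimal sketch; the SENSITIVE list is illustrative, so extend it for your own data:

// Replace sensitive values with placeholders before sharing a payload.
const SENSITIVE = ['name', 'email', 'phone', 'address', 'orderId'];

function redact(value) {
  if (Array.isArray(value)) return value.map(redact);
  if (value && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value).map(([key, child]) =>
        SENSITIVE.includes(key)
          ? [key, child === '' ? '[BLANK]' : `[${key.toUpperCase()}]`]
          : [key, redact(child)]
      )
    );
  }
  return value;
}

console.log(redact({ name: 'Ada Lovelace', email: '', orderId: 'A-1001' }));
// → { name: '[NAME]', email: '[BLANK]', orderId: '[ORDERID]' }

Keeping '[BLANK]' for empty strings preserves the debugging-relevant fact that a field was empty while still hiding every real value.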

:rocket: Zero to Hero Tip: Ask AI to Turn the Failure Into a Test Case

Once you have identified the root cause, ask the AI to help you turn it into a proper test case:

A production workflow I built failed because of the 
following root cause:
[Describe the root cause in plain English]

The workflow does the following:
[Describe your workflow briefly]

Please write a test case for this failure in the following 
format:
- Test Case ID: [suggest one]
- Scenario: [name the scenario]
- Input / Condition: [describe the input that causes the failure]
- Expected Result: [what the fixed workflow should do]
- Notes: [anything else worth recording]

Also suggest two similar edge cases I should add to my 
test suite to catch related failures.

This takes a failure that happened once and turns it into permanent protection. The AI will usually suggest related edge cases you had not thought of — similar inputs that would cause the same or a related problem.


Production Readiness Checklist

This checklist is slightly different from the others in the series. It is not just about setup — it is about having the discipline to use execution logs properly when something goes wrong.

  • I know where to find the Executions section in n8n
  • I can open a failed execution and navigate to the node that stopped
  • I know how to check node input and output inside an execution
  • I understand how to work backwards from a failed node to find the root cause
  • I have a process for saving execution evidence before investigating
  • I know how long my plan or configuration retains execution history
  • When something breaks, I check the execution log before guessing
  • I turn every production failure into a test case in my test case table
  • I document failures using the execution failure log template
  • I do not paste real customer data or credentials into AI tools

Production Readiness Question: If a workflow failure happened right now and I needed to explain exactly what went wrong and why — could I do that using only the execution log?

If the answer is no, practice navigating executions now, before the next failure happens.


Final Takeaway

Execution logs are not a feature for advanced users. They are the first tool any builder should reach for when something breaks.

The habit is simple: when a workflow fails, open the execution before you do anything else. Read what happened at each node. Find where the data went wrong. Work backwards until you know the cause. Then fix it, test it, and add it to your test cases so it never reaches production again.

Every production failure is information. The execution log is where that information lives. Use it.


Next in the series: Article 8 — How to Use a Four-Part Testing Setup in n8n


Hi @Exnav29 :waving_hand: thanks again for this; I really like this part of the series.

I completely agree with the Privacy Warning: What Not to Paste Into AI Tools section. For development and testing, I also prefer using realistic but not actual data: simulated data that behaves like production input, but without exposing sensitive information. That way we can still test the workflow realistically without risking private content.

For local LLM debugging, the server logs can be extremely helpful. They often reveal what the model actually received and how it behaved, which can save a lot of time when the workflow itself looks correct in n8n.

This article is really practical and helpful, thanks for sharing it.
