Unable to see execution error

Hey there,

I’m running a self-hosted installation of n8n version 1.67.1 on Ubuntu 22.04 on DigitalOcean.

I’m experiencing errors in my workflow. I think they are related to a Crawl4AI scraper I’m using, which generates 4–8 MB of data at a time.

Issue details:

  • When the workflow executes, it fails with errors (as seen in the execution history)
  • One particular node in my workflow is processing large volumes of data
  • When I click “Debug in editor” or try to view the execution details, I’m unable to identify where exactly the error is occurring
  • The error occurs after varying run times (13:04m, 22:16m, 7:53s, etc.), so there is no consistent failure point

Questions:

  1. What are the best practices for troubleshooting n8n workflows that fail when handling large data volumes?
  2. Are there any specific logs I should check outside of the n8n interface?
  3. How can I identify memory or performance bottlenecks in my workflow?
  4. Are there recommended configurations or settings to optimize n8n for handling larger datasets?
  5. What information should I provide to help diagnose this issue?

I’ve attached a screenshot of my workflow and execution history for reference.

Any guidance would be greatly appreciated!

Hi, this seems odd. The screenshot shows that none of the nodes actually ran, yet the workflow ran for almost 30 minutes.
Can you check what memory usage looks like on the server where n8n is running?
You could use something like htop:

apt install htop

What I would do in these cases:

  • check memory usage of the VM
  • check the system logs (dmesg & /var/log/syslog)
  • especially look for oom_killer entries
  • check the n8n logs (docker logs …)
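Concretely, those checks could look something like this — assuming n8n runs in a Docker container (the container name `n8n` is a guess, adjust it to your setup):

```shell
# Memory and swap usage on the VM
free -h

# Kernel messages mentioning the OOM killer
dmesg -T | grep -i -E "out of memory|oom"

# Same search in the persistent syslog
grep -i "oom" /var/log/syslog

# Recent logs of the n8n container (name "n8n" is an assumption)
docker logs --tail 200 n8n
```

If the OOM killer shows up in dmesg or syslog around the time the execution died, that would explain why the execution history shows nothing useful: the process was killed from outside before it could record an error.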

Hope this helps to get you started.
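Regarding question 4: if memory does turn out to be the problem, n8n exposes a few environment variables that help with large payloads. A starting point could look like this (the values are illustrative, not recommendations — check the docs for your version):

```shell
# Raise the Node.js heap limit (value in MB; illustrative, size to your VM)
NODE_OPTIONS="--max-old-space-size=4096"

# Keep binary data on disk instead of in memory
N8N_DEFAULT_BINARY_DATA_MODE=filesystem

# Prune old execution data so stored executions don't grow unbounded
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=168
```

In a Docker setup these would go into the `environment:` section of your compose file or the `-e` flags of `docker run`.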