What does "no execution data" mean?

This workflow has been working for some time now. Recently, it started failing with “too many requests” errors, so it’s time to properly implement pagination in the HTTP Request node.
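The core of what the pagination should do can be sketched outside of n8n. A minimal sketch of a cursor-style pagination loop, where the hypothetical fetchPage stands in for the real HTTP call to the Mosyle API (its shape, returning { items, nextPage }, is an assumption for illustration):

```javascript
// Sketch: follow the "next page" cursor until it is exhausted.
// fetchPage is a stand-in for the actual HTTP request; it is assumed to
// return { items: [...], nextPage: <number|null> }, with null on the last page.
async function fetchAllPages(fetchPage) {
  const all = [];
  let page = 1;
  while (page !== null) {
    const { items, nextPage } = await fetchPage(page);
    all.push(...items);
    page = nextPage;
  }
  return all;
}
```

Note that this version accumulates every item in memory, which is exactly what can become a problem with large result sets.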

I set up the parameters and ran the workflow.

What is the error message (if any)?

“No execution data available”

This is the content of “Error Details.”

{
  "errorMessage": "No execution data available",
  "errorDetails": {
    "rawErrorMessage": [
      "No execution data available"
    ]
  },
  "n8nDetails": {
    "nodeName": "Get Some",
    "nodeType": "n8n-nodes-base.httpRequest",
    "nodeVersion": 4.2,
    "itemIndex": 0,
    "runIndex": 0,
    "time": "9/4/2024, 5:56:47 PM",
    "n8nVersion": "1.56.2 (Self Hosted)",
    "binaryDataMode": "default",
    "stackTrace": [
      "NodeApiError: No execution data available",
      "    at Object.execute (/usr/local/lib/node_modules/n8n/node_modules/n8n-nodes-base/dist/nodes/HttpRequest/V3/HttpRequestV3.node.js:1650:33)",
      "    at processTicksAndRejections (node:internal/process/task_queues:95:5)",
      "    at Workflow.runNode (/usr/local/lib/node_modules/n8n/node_modules/n8n-workflow/dist/Workflow.js:728:19)",
      "    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/WorkflowExecute.js:673:51",
      "    at /usr/local/lib/node_modules/n8n/node_modules/n8n-core/dist/WorkflowExecute.js:1104:20"
    ]
  }
}

Please share your workflow

Share the output returned by the last node

I get no output. What I expect is a detailed list of computers from the Mosyle platform.

Information on your n8n setup

  • n8n version: 1.56.2
  • Database (default: SQLite): PostgreSQL 15
  • n8n EXECUTIONS_PROCESS setting (default: own, main): main
  • Running n8n via (Docker, npm, n8n cloud, desktop app): Docker, on DigitalOcean
  • Operating system: Ubuntu 22.04 LTS

Hi @russellkg

Could it be that your workflow is running into memory issues? With paging it is important to keep memory usage in check, for example by handing each page off to a sub-workflow. You can of course do it all in one flow, but then you need to be fairly sure the data set will not be too much for the server to handle in one go.
If the workflow (or the server) crashes because of memory issues, it cannot log the execution, which is why there is no data when you try to view it.
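The memory point above can be sketched as: instead of collecting every page into one large array, hand each page off for processing as soon as it arrives, so peak memory stays around one page. A minimal sketch, where processBatch is a hypothetical stand-in for calling a sub-workflow:

```javascript
// Sketch: stream pages through a per-batch handler so only one page is
// held in memory at a time. fetchPage and processBatch are illustrative
// stand-ins for the HTTP call and the sub-workflow execution.
async function fetchAndProcess(fetchPage, processBatch) {
  let page = 1;
  let total = 0;
  while (page !== null) {
    const { items, nextPage } = await fetchPage(page);
    await processBatch(items); // hand off immediately; do not accumulate
    total += items.length;
    page = nextPage;
  }
  return total; // only a count survives, not the full dataset
}
```

The design trade-off is the one described above: a single flow that accumulates everything is simpler, but this streaming shape keeps the parent workflow's memory footprint bounded regardless of how many pages the API returns.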

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.