Built a Virtual Data Team that sends automated daily reports to Slack — no analyst needed

What I built

A fully automated data reporting pipeline I’m calling Virtual Data Team (VDT).

It runs every weekday morning at 8 AM, pulls data from Google Sheets, runs five
analytical modules, and delivers a complete daily report to a Slack channel — automatically.

No analyst. No manual work. Just results in Slack every morning.


The 5-module pipeline

  • Module 1 — Pulls raw data from Google Sheets source
  • Module 2 — Cleans and validates the data
  • Module 3 — Runs 3 parallel analysis streams (anomaly detection,
    data quality, dashboard prep)
  • Module 4 — Delivers output to Slack + logs to Google Sheets
  • Module 5 — Creates a new dated Google Sheet, moves it to Drive folder,
    writes clean data, anomalies, handling log, and dashboard rows
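
Module 3A's anomaly stream uses the classic 1.5×IQR outlier rule. A minimal standalone sketch of that logic (helper names and sample data here are illustrative, not part of the workflow itself):

```javascript
// Minimal sketch of the 1.5×IQR rule Module 3A applies.
// Helper names and sample data are illustrative only.
function iqrBounds(values) {
  const sorted = values.filter(Number.isFinite).sort((a, b) => a - b);
  const q1 = sorted[Math.floor(sorted.length * 0.25)];
  const q3 = sorted[Math.floor(sorted.length * 0.75)];
  const iqr = q3 - q1;
  return { lower: q1 - 1.5 * iqr, upper: q3 + 1.5 * iqr };
}

function flagOutliers(rows, field) {
  const { lower, upper } = iqrBounds(rows.map(r => r[field]));
  return rows.filter(r => r[field] < lower || r[field] > upper);
}

const rows = [
  { City: 'Delhi', AQI: 180 },
  { City: 'Delhi', AQI: 195 },
  { City: 'Delhi', AQI: 170 },
  { City: 'Delhi', AQI: 188 },
  { City: 'Delhi', AQI: 730 }, // obvious spike
];
console.log(flagOutliers(rows, 'AQI')); // flags only the 730 reading
```

The same quantile shortcut (index = floor(n × q)) is what the workflow uses; it's crude for small samples but fine at daily volumes.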

Triggers

Dual trigger setup:

  • ⏰ Schedule — 8 AM, Monday–Friday
  • 📋 Form trigger — run any pipeline on demand via form selection
  • Both merge into a single Config node via Merge (Append mode)

Stack

  • n8n (self-hosted)
  • Google Sheets + Google Drive
  • Slack
  • Google Forms (on-demand trigger)

Workflow JSON

[{
  "name": "Virtual Data Team (VDT) — Automated Daily Reporting Pipeline",
  "nodes": [
    {
      "parameters": {
        "rule": {
          "interval": [
            {
              "field": "cronExpression",
              "expression": "0 8 * * 1-5"
            }
          ]
        }
      },
      "id": "node-schedule-trigger",
      "name": "⏰ Schedule Trigger",
      "type": "n8n-nodes-base.scheduleTrigger",
      "typeVersion": 1.1,
      "position": [240, 300],
      "notes": "Runs Mon–Fri at 8:00 AM"
    },
    {
      "parameters": {
        "path": "vdt-form-trigger",
        "formTitle": "Run VDT Pipeline",
        "formDescription": "Select which pipeline to run on demand",
        "formFields": {
          "values": [
            {
              "fieldLabel": "Pipeline",
              "fieldType": "dropdown",
              "fieldOptions": {
                "values": [
                  { "option": "All Pipelines" },
                  { "option": "Air Quality" },
                  { "option": "Sales" }
                ]
              },
              "requiredField": true
            }
          ]
        },
        "responseMode": "onReceived",
        "responseText": "Pipeline triggered! Check your Slack for results."
      },
      "id": "node-form-trigger",
      "name": "📋 Form Trigger",
      "type": "n8n-nodes-base.formTrigger",
      "typeVersion": 2,
      "position": [240, 480],
      "webhookId": "YOUR-WEBHOOK-ID-HERE"
    },
    {
      "parameters": {
        "mode": "append",
        "clashHandling": {
          "values": {
            "resolveClash": "addToEnd"
          }
        }
      },
      "id": "node-merge-triggers",
      "name": "🔀 Merge Triggers",
      "type": "n8n-nodes-base.merge",
      "typeVersion": 2.1,
      "position": [460, 380]
    },
    {
      "parameters": {
        "jsCode": "// ── VDT CONFIG NODE ──\n// Edit PIPELINES array to add/remove pipelines\n// Set enabled: false to skip a pipeline without deleting it\n\nconst PIPELINES = [\n  {\n    id: 'air_quality',\n    name: 'Air Quality Analysis',\n    enabled: true,\n    sheetId: 'YOUR-SOURCE-SHEET-ID',\n    sheetTab: 'city_day',\n    slackChannel: '#your-slack-channel'\n  }\n  // Add more pipelines here\n];\n\n// Check if triggered by form (on-demand) or schedule.\n// The Form Trigger puts submitted values directly on json, keyed by field label.\nconst input = $input.first().json || {};\nconst formSelection = input.Pipeline || input.formData?.Pipeline;\nconst selectedPipeline = formSelection || 'All Pipelines';\n\n// Filter pipelines based on form selection or run all enabled.\n// Form options use short labels (e.g. 'Air Quality'), so match by prefix.\nlet activePipelines;\nif (selectedPipeline === 'All Pipelines') {\n  activePipelines = PIPELINES.filter(p => p.enabled);\n} else {\n  activePipelines = PIPELINES.filter(p =>\n    p.enabled && p.name.startsWith(selectedPipeline)\n  );\n}\n\nif (activePipelines.length === 0) {\n  throw new Error('No active pipelines found for selection: ' + selectedPipeline);\n}\n\n// Return config for downstream modules\nreturn activePipelines.map(pipeline => ({\n  json: {\n    pipeline,\n    runDate: new Date().toISOString().split('T')[0],\n    runTimestamp: new Date().toISOString(),\n    triggeredBy: formSelection ? 'form' : 'schedule'\n  }\n}));"
      },
      "id": "node-config",
      "name": "⚙️ Config",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [680, 380]
    },
    {
      "parameters": {
        "operation": "read",
        "documentId": {
          "mode": "expression",
          "value": "={{ $json.pipeline.sheetId }}"
        },
        "sheetName": {
          "mode": "expression",
          "value": "={{ $json.pipeline.sheetTab }}"
        },
        "options": {
          "headerRow": 1
        }
      },
      "id": "node-module1-read",
      "name": "📥 Module 1 — Read Source Data",
      "type": "n8n-nodes-base.googleSheets",
      "typeVersion": 4.4,
      "position": [900, 380],
      "credentials": {
        "googleSheetsOAuth2Api": {
          "id": "YOUR-GSHEETS-CREDENTIAL-ID",
          "name": "Google Sheets account"
        }
      }
    },
    {
      "parameters": {
        "jsCode": "// ── MODULE 2: DATA CLEANING ──\n// Cleans raw data from source sheet\n// Removes nulls, trims whitespace, type-casts numeric fields\n\nconst rawRows = $input.all();\n\nconst cleanRows = rawRows.map(item => {\n  const row = { ...item.json };\n\n  // Remove empty/null values\n  Object.keys(row).forEach(key => {\n    if (row[key] === '' || row[key] === null || row[key] === undefined) {\n      delete row[key];\n    }\n  });\n\n  // Trim string values\n  Object.keys(row).forEach(key => {\n    if (typeof row[key] === 'string') {\n      row[key] = row[key].trim();\n    }\n  });\n\n  // Cast numeric fields\n  const numericFields = ['PM2.5', 'PM10', 'NO', 'NO2', 'NOx', 'NH3',\n    'CO', 'SO2', 'O3', 'Benzene', 'Toluene', 'Xylene', 'AQI'];\n  numericFields.forEach(field => {\n    if (row[field] !== undefined) {\n      row[field] = parseFloat(row[field]) || 0;\n    }\n  });\n\n  return { json: row };\n});\n\nreturn cleanRows;"
      },
      "id": "node-module2-clean",
      "name": "🔧 Module 2 — Clean Data",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [1120, 380]
    },
    {
      "parameters": {
        "jsCode": "// ── MODULE 3A: ANOMALY DETECTION ──\n// Detects statistical outliers using IQR method\n// Flags rows where AQI or pollutant values are extreme\n\nconst rows = $input.all().map(i => i.json);\nconst anomalies = [];\n\n// Calculate IQR for AQI\nconst aqiValues = rows\n  .map(r => parseFloat(r.AQI))\n  .filter(v => !isNaN(v))\n  .sort((a, b) => a - b);\n\nconst q1 = aqiValues[Math.floor(aqiValues.length * 0.25)];\nconst q3 = aqiValues[Math.floor(aqiValues.length * 0.75)];\nconst iqr = q3 - q1;\nconst lowerBound = q1 - 1.5 * iqr;\nconst upperBound = q3 + 1.5 * iqr;\n\nrows.forEach(row => {\n  const aqi = parseFloat(row.AQI);\n  if (!isNaN(aqi) && (aqi < lowerBound || aqi > upperBound)) {\n    anomalies.push({\n      city: row.City,\n      date: row.Date,\n      AQI: aqi,\n      reason: aqi > upperBound ? 'AQI spike detected' : 'AQI unusually low',\n      threshold: `IQR bounds: ${lowerBound.toFixed(1)} – ${upperBound.toFixed(1)}`\n    });\n  }\n});\n\nreturn [{ json: { anomalies, count: anomalies.length, bounds: { q1, q3, iqr, lowerBound, upperBound } } }];"
      },
      "id": "node-module3a-anomaly",
      "name": "🔍 Module 3A — Anomaly Detection",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [1340, 200]
    },
    {
      "parameters": {
        "jsCode": "// ── MODULE 3B: DATA HANDLING LOG ──\n// Tracks data quality issues: missing values, duplicates, type errors\n\nconst rows = $input.all().map(i => i.json);\nconst log = [];\n\nconst requiredFields = ['City', 'Date', 'AQI', 'AQI_Bucket'];\n\nrows.forEach((row, idx) => {\n  requiredFields.forEach(field => {\n    if (!row[field] || row[field] === '') {\n      log.push({\n        rowIndex: idx + 2, // +2 for header row and 0-index\n        field,\n        issue: 'Missing required value',\n        value: row[field] ?? 'null'\n      });\n    }\n  });\n\n  // Check AQI is a valid number\n  if (row.AQI && isNaN(parseFloat(row.AQI))) {\n    log.push({\n      rowIndex: idx + 2,\n      field: 'AQI',\n      issue: 'Non-numeric value in numeric field',\n      value: row.AQI\n    });\n  }\n});\n\n// Check for duplicate City+Date combos\nconst seen = new Set();\nrows.forEach((row, idx) => {\n  const key = `${row.City}|${row.Date}`;\n  if (seen.has(key)) {\n    log.push({ rowIndex: idx + 2, field: 'City+Date', issue: 'Duplicate row detected', value: key });\n  }\n  seen.add(key);\n});\n\nreturn [{ json: { handlingLog: log, issueCount: log.length } }];"
      },
      "id": "node-module3b-handling",
      "name": "📋 Module 3B — Data Handling Log",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [1340, 380]
    },
    {
      "parameters": {
        "jsCode": "// ── MODULE 3C: DASHBOARD SUMMARY ──\n// Builds summary stats for dashboard tab\n// Aggregates by city: avg AQI, max AQI, AQI bucket distribution\n\nconst rows = $input.all().map(i => i.json);\n\n// Group by city\nconst cityMap = {};\nrows.forEach(row => {\n  const city = row.City || 'Unknown';\n  if (!cityMap[city]) {\n    cityMap[city] = { city, aqiValues: [], buckets: {}, count: 0 };\n  }\n  const aqi = parseFloat(row.AQI);\n  if (!isNaN(aqi)) cityMap[city].aqiValues.push(aqi);\n  const bucket = row.AQI_Bucket || 'Unknown';\n  cityMap[city].buckets[bucket] = (cityMap[city].buckets[bucket] || 0) + 1;\n  cityMap[city].count++;\n});\n\n// Build dashboard rows\nconst dashboardRows = Object.values(cityMap).map(c => ({\n  City: c.city,\n  TotalRecords: c.count,\n  AvgAQI: c.aqiValues.length\n    ? (c.aqiValues.reduce((a, b) => a + b, 0) / c.aqiValues.length).toFixed(1)\n    : 'N/A',\n  MaxAQI: c.aqiValues.length ? Math.max(...c.aqiValues) : 'N/A',\n  MinAQI: c.aqiValues.length ? Math.min(...c.aqiValues) : 'N/A',\n  MostCommonBucket: Object.entries(c.buckets).sort((a, b) => b[1] - a[1])[0]?.[0] || 'N/A'\n}));\n\nreturn [{ json: { dashboardRows, cityCount: dashboardRows.length } }];"
      },
      "id": "node-module3c-dashboard",
      "name": "📊 Module 3C — Dashboard Summary",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [1340, 560]
    },
    {
      "parameters": {
        "mode": "append",
        "numberInputs": 3
      },
      "id": "node-merge-modules",
      "name": "🔀 Merge Module Outputs",
      "type": "n8n-nodes-base.merge",
      "typeVersion": 3,
      "position": [1560, 380]
    },
    {
      "parameters": {
        "jsCode": "// ── MODULE 4: FORMAT & DELIVER ──\n// Builds Slack message and run log entry from the module outputs.\n// Pulls each module's result by node reference (with fallbacks), so the\n// node works regardless of what arrives on its main input.\n\nconst get = (nodeName, fallback) => {\n  try { return $(nodeName).first().json; } catch (e) { return fallback; }\n};\n\nconst anomalyData = get('🔍 Module 3A — Anomaly Detection', { anomalies: [], count: 0 });\nconst handlingData = get('📋 Module 3B — Data Handling Log', { handlingLog: [], issueCount: 0 });\nconst dashboardData = get('📊 Module 3C — Dashboard Summary', { dashboardRows: [], cityCount: 0 });\n\nconst runDate = new Date().toISOString().split('T')[0];\n\n// Guard the sheet reference: if Create Google Sheet has not executed yet,\n// send the report without a link instead of failing the whole run.\nlet sheetId = '';\ntry { sheetId = $('🆕 Create Google Sheet').first().json.spreadsheetId; } catch (e) { /* no sheet */ }\n\n// Build Slack message\nconst lines = [\n  `*📊 VDT Daily Report — ${runDate}*`,\n  ``,\n  `*Cities analyzed:* ${dashboardData.cityCount}`,\n  `*Anomalies detected:* ${anomalyData.count}`,\n  `*Data quality issues:* ${handlingData.issueCount}`,\n  ``,\n  anomalyData.count > 0\n    ? `⚠️ *Top anomaly:* ${anomalyData.anomalies[0]?.city} — AQI ${anomalyData.anomalies[0]?.AQI} (${anomalyData.anomalies[0]?.reason})`\n    : `✅ No anomalies detected today`\n];\nif (sheetId) {\n  lines.push(``, `📁 <https://docs.google.com/spreadsheets/d/${sheetId}|Open Full Report in Google Sheets>`);\n}\nconst slackMessage = lines.join('\\n');\n\n// Build run log entry\nconst runLog = {\n  Date: runDate,\n  Status: 'Success',\n  CitiesAnalyzed: dashboardData.cityCount,\n  AnomaliesFound: anomalyData.count,\n  DataIssues: handlingData.issueCount,\n  SheetId: sheetId,\n  Timestamp: new Date().toISOString()\n};\n\nreturn [{ json: { slackMessage, runLog, sheetId } }];"
      },
      "id": "node-module4-format",
      "name": "📝 Module 4 — Format & Deliver",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [1780, 380]
    },
    {
      "parameters": {
        "select": "channel",
        "channelId": {
          "mode": "expression",
          "value": "YOUR-SLACK-CHANNEL-ID"
        },
        "text": "={{ $json.slackMessage }}",
        "otherOptions": {
          "mrkdwn": true
        }
      },
      "id": "node-slack-send",
      "name": "💬 Send to Slack",
      "type": "n8n-nodes-base.slack",
      "typeVersion": 2.1,
      "position": [2000, 260],
      "credentials": {
        "slackApi": {
          "id": "YOUR-SLACK-CREDENTIAL-ID",
          "name": "Slack account"
        }
      }
    },
    {
      "parameters": {
        "operation": "append",
        "documentId": "YOUR-RUN-LOG-SHEET-ID",
        "sheetName": "Sheet1",
        "columns": {
          "mappingMode": "defineBelow",
          "value": {
            "Date": "={{ $json.runLog.Date }}",
            "Status": "={{ $json.runLog.Status }}",
            "CitiesAnalyzed": "={{ $json.runLog.CitiesAnalyzed }}",
            "AnomaliesFound": "={{ $json.runLog.AnomaliesFound }}",
            "DataIssues": "={{ $json.runLog.DataIssues }}",
            "SheetId": "={{ $json.runLog.SheetId }}",
            "Timestamp": "={{ $json.runLog.Timestamp }}"
          }
        }
      },
      "id": "node-log-sheet",
      "name": "📒 Log to Run Sheet",
      "type": "n8n-nodes-base.googleSheets",
      "typeVersion": 4.4,
      "position": [2000, 500],
      "credentials": {
        "googleSheetsOAuth2Api": {
          "id": "YOUR-GSHEETS-CREDENTIAL-ID",
          "name": "Google Sheets account"
        }
      }
    },
    {
      "parameters": {
        "method": "POST",
        "url": "https://sheets.googleapis.com/v4/spreadsheets",
        "authentication": "predefinedCredentialType",
        "nodeCredentialType": "googleSheetsOAuth2Api",
        "sendBody": true,
        "specifyBody": "json",
        "jsonBody": "={\n  \"properties\": {\n    \"title\": \"VDT Report — {{ $('⚙️ Config').first().json.runDate }}\"\n  },\n  \"sheets\": [\n    { \"properties\": { \"title\": \"Clean Data\" } },\n    { \"properties\": { \"title\": \"Anomalies\" } },\n    { \"properties\": { \"title\": \"Data Handling Log\" } },\n    { \"properties\": { \"title\": \"Dashboard\" } }\n  ]\n}",
        "options": {}
      },
      "id": "node-create-sheet",
      "name": "🆕 Create Google Sheet",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 4.2,
      "position": [2000, 380],
      "credentials": {
        "googleSheetsOAuth2Api": {
          "id": "YOUR-GSHEETS-CREDENTIAL-ID",
          "name": "Google Sheets account"
        }
      }
    },
    {
      "parameters": {
        "method": "PATCH",
        "url": "=https://www.googleapis.com/drive/v3/files/{{ $json.spreadsheetId }}",
        "authentication": "predefinedCredentialType",
        "nodeCredentialType": "googleDriveOAuth2Api",
        "sendQuery": true,
        "queryParameters": {
          "parameters": [
            {
              "name": "addParents",
              "value": "YOUR-DRIVE-FOLDER-ID"
            },
            {
              "name": "removeParents",
              "value": "root"
            }
          ]
        },
        "options": {}
      },
      "id": "node-move-to-drive",
      "name": "📁 Move to Drive Folder",
      "type": "n8n-nodes-base.httpRequest",
      "typeVersion": 4.2,
      "position": [2220, 380],
      "credentials": {
        "googleDriveOAuth2Api": {
          "id": "YOUR-GDRIVE-CREDENTIAL-ID",
          "name": "Google Drive account"
        }
      }
    },
    {
      "parameters": {
        "jsCode": "// Unpack Clean Data rows for Google Sheets append\nconst sheetId = $('🆕 Create Google Sheet').first().json.spreadsheetId;\nconst cleanRows = $('🔧 Module 2 — Clean Data').all().map(i => i.json);\nreturn cleanRows.map(row => ({ json: { ...row, _sheetId: sheetId } }));"
      },
      "id": "node-unpack-clean",
      "name": "📦 Unpack Clean Data",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [2440, 200]
    },
    {
      "parameters": {
        "jsCode": "// Unpack Anomaly rows for Google Sheets append\nconst sheetId = $('🆕 Create Google Sheet').first().json.spreadsheetId;\nconst anomalies = $('🔍 Module 3A — Anomaly Detection').first().json.anomalies;\nreturn anomalies.map(row => ({ json: { ...row, _sheetId: sheetId } }));"
      },
      "id": "node-unpack-anomalies",
      "name": "📦 Unpack Anomalies",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [2440, 340]
    },
    {
      "parameters": {
        "jsCode": "// Unpack Handling Log rows for Google Sheets append\nconst sheetId = $('🆕 Create Google Sheet').first().json.spreadsheetId;\nconst log = $('📋 Module 3B — Data Handling Log').first().json.handlingLog;\nreturn log.map(row => ({ json: { ...row, _sheetId: sheetId } }));"
      },
      "id": "node-unpack-handling",
      "name": "📦 Unpack Handling Log",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [2440, 480]
    },
    {
      "parameters": {
        "jsCode": "// Unpack Dashboard rows for Google Sheets append\nconst sheetId = $('🆕 Create Google Sheet').first().json.spreadsheetId;\nconst dashRows = $('📊 Module 3C — Dashboard Summary').first().json.dashboardRows;\nreturn dashRows.map(row => ({ json: { ...row, _sheetId: sheetId } }));"
      },
      "id": "node-unpack-dashboard",
      "name": "📦 Unpack Dashboard",
      "type": "n8n-nodes-base.code",
      "typeVersion": 2,
      "position": [2440, 620]
    },
    {
      "parameters": {
        "operation": "append",
        "documentId": {
          "mode": "expression",
          "value": "={{ $json._sheetId }}"
        },
        "sheetName": "Clean Data",
        "columns": { "mappingMode": "autoMapInputData" },
        "options": { "headerRow": 1 }
      },
      "id": "node-write-clean",
      "name": "✍️ Write Clean Data",
      "type": "n8n-nodes-base.googleSheets",
      "typeVersion": 4.4,
      "position": [2660, 200],
      "credentials": {
        "googleSheetsOAuth2Api": {
          "id": "YOUR-GSHEETS-CREDENTIAL-ID",
          "name": "Google Sheets account"
        }
      }
    },
    {
      "parameters": {
        "operation": "append",
        "documentId": {
          "mode": "expression",
          "value": "={{ $json._sheetId }}"
        },
        "sheetName": "Anomalies",
        "columns": { "mappingMode": "autoMapInputData" },
        "options": { "headerRow": 1 }
      },
      "id": "node-write-anomalies",
      "name": "✍️ Write Anomalies",
      "type": "n8n-nodes-base.googleSheets",
      "typeVersion": 4.4,
      "position": [2660, 340],
      "credentials": {
        "googleSheetsOAuth2Api": {
          "id": "YOUR-GSHEETS-CREDENTIAL-ID",
          "name": "Google Sheets account"
        }
      }
    },
    {
      "parameters": {
        "operation": "append",
        "documentId": {
          "mode": "expression",
          "value": "={{ $json._sheetId }}"
        },
        "sheetName": "Data Handling Log",
        "columns": { "mappingMode": "autoMapInputData" },
        "options": { "headerRow": 1 }
      },
      "id": "node-write-handling",
      "name": "✍️ Write Handling Log",
      "type": "n8n-nodes-base.googleSheets",
      "typeVersion": 4.4,
      "position": [2660, 480],
      "credentials": {
        "googleSheetsOAuth2Api": {
          "id": "YOUR-GSHEETS-CREDENTIAL-ID",
          "name": "Google Sheets account"
        }
      }
    },
    {
      "parameters": {
        "operation": "append",
        "documentId": {
          "mode": "expression",
          "value": "={{ $json._sheetId }}"
        },
        "sheetName": "Dashboard",
        "columns": { "mappingMode": "autoMapInputData" },
        "options": { "headerRow": 1 }
      },
      "id": "node-write-dashboard",
      "name": "✍️ Write Dashboard",
      "type": "n8n-nodes-base.googleSheets",
      "typeVersion": 4.4,
      "position": [2660, 620],
      "credentials": {
        "googleSheetsOAuth2Api": {
          "id": "YOUR-GSHEETS-CREDENTIAL-ID",
          "name": "Google Sheets account"
        }
      }
    }
  ],
  "connections": {
    "⏰ Schedule Trigger": {
      "main": [[{ "node": "🔀 Merge Triggers", "type": "main", "index": 0 }]]
    },
    "📋 Form Trigger": {
      "main": [[{ "node": "🔀 Merge Triggers", "type": "main", "index": 1 }]]
    },
    "🔀 Merge Triggers": {
      "main": [[{ "node": "⚙️ Config", "type": "main", "index": 0 }]]
    },
    "⚙️ Config": {
      "main": [[{ "node": "📥 Module 1 — Read Source Data", "type": "main", "index": 0 }]]
    },
    "📥 Module 1 — Read Source Data": {
      "main": [[{ "node": "🔧 Module 2 — Clean Data", "type": "main", "index": 0 }]]
    },
    "🔧 Module 2 — Clean Data": {
      "main": [[
        { "node": "🔍 Module 3A — Anomaly Detection", "type": "main", "index": 0 },
        { "node": "📋 Module 3B — Data Handling Log", "type": "main", "index": 0 },
        { "node": "📊 Module 3C — Dashboard Summary", "type": "main", "index": 0 }
      ]]
    },
    "🔍 Module 3A — Anomaly Detection": {
      "main": [[{ "node": "🔀 Merge Module Outputs", "type": "main", "index": 0 }]]
    },
    "📋 Module 3B — Data Handling Log": {
      "main": [[{ "node": "🔀 Merge Module Outputs", "type": "main", "index": 1 }]]
    },
    "📊 Module 3C — Dashboard Summary": {
      "main": [[{ "node": "🔀 Merge Module Outputs", "type": "main", "index": 2 }]]
    },
    "🔀 Merge Module Outputs": {
      "main": [[{ "node": "🆕 Create Google Sheet", "type": "main", "index": 0 }]]
    },
    "🆕 Create Google Sheet": {
      "main": [[{ "node": "📁 Move to Drive Folder", "type": "main", "index": 0 }]]
    },
    "📁 Move to Drive Folder": {
      "main": [[{ "node": "📝 Module 4 — Format & Deliver", "type": "main", "index": 0 }]]
    },
    "📝 Module 4 — Format & Deliver": {
      "main": [[
        { "node": "💬 Send to Slack", "type": "main", "index": 0 },
        { "node": "📒 Log to Run Sheet", "type": "main", "index": 0 },
        { "node": "📦 Unpack Clean Data", "type": "main", "index": 0 },
        { "node": "📦 Unpack Anomalies", "type": "main", "index": 0 },
        { "node": "📦 Unpack Handling Log", "type": "main", "index": 0 },
        { "node": "📦 Unpack Dashboard", "type": "main", "index": 0 }
      ]]
    },
    "📦 Unpack Clean Data": {
      "main": [[{ "node": "✍️ Write Clean Data", "type": "main", "index": 0 }]]
    },
    "📦 Unpack Anomalies": {
      "main": [[{ "node": "✍️ Write Anomalies", "type": "main", "index": 0 }]]
    },
    "📦 Unpack Handling Log": {
      "main": [[{ "node": "✍️ Write Handling Log", "type": "main", "index": 0 }]]
    },
    "📦 Unpack Dashboard": {
      "main": [[{ "node": "✍️ Write Dashboard", "type": "main", "index": 0 }]]
    }
  },
  "settings": {
    "executionOrder": "v1",
    "saveManualExecutions": true,
    "callerPolicy": "workflowsFromSameOwner",
    "errorWorkflow": ""
  },
  "tags": ["vdt", "automation", "reporting", "google-sheets", "slack"],
  "meta": {
    "templateCredsSetupCompleted": false,
    "instanceId": "community-share"
  }
}]

What’s next

Turning this into a managed service for D2C brands in India.
Live at: Virtual Data Team — Automated Analytics for Growing Businesses

Happy to answer any questions or hear feedback from the community!

really solid modular setup — the dual trigger merging into a single config node is a pattern i keep coming back to for scheduled workflows that also need on-demand runs. curious how the IQR anomaly detection holds up on real data, do you get a lot of false positives or does the air quality data happen to be pretty clean?

Hey Benjamin, glad the dual trigger pattern resonated! On the IQR side, we're still testing it across larger datasets to properly measure the false positive rate. Seasonal spikes like festivals and crop-burning season do cause some false positives, which is a known limitation. Curious if you've handled similar seasonality issues in your workflows?

yeah seasonal spikes are tricky with IQR — crop burning season would totally blow out your upper bound for that period. what we've found helps is switching to a rolling 30-day baseline instead of a global one, so the bounds adjust to whatever season you're in. not a perfect fix but it handles the drift a lot better than a static global baseline
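
rough sketch of what that rolling-window version can look like (helper names are mine, not from the workflow above; assumes rows are sorted by date):

```javascript
// Rolling 30-day IQR baseline: bounds are computed from the trailing
// window before each row, so seasonal drift moves the bounds with it.
// Sketch only; assumes rows are sorted by Date ascending.
function quantile(sorted, q) {
  return sorted[Math.floor(sorted.length * q)];
}

function rollingOutliers(rows, windowDays = 30, minHistory = 10) {
  const flagged = [];
  rows.forEach((row, i) => {
    const cutoff = new Date(row.Date).getTime() - windowDays * 86400e3;
    const window = rows
      .slice(0, i) // only rows before the current one
      .filter(r => new Date(r.Date).getTime() >= cutoff)
      .map(r => r.AQI)
      .sort((a, b) => a - b);
    if (window.length < minHistory) return; // not enough history yet
    const q1 = quantile(window, 0.25);
    const q3 = quantile(window, 0.75);
    const iqr = q3 - q1;
    if (row.AQI > q3 + 1.5 * iqr || row.AQI < q1 - 1.5 * iqr) flagged.push(row);
  });
  return flagged;
}
```

drop something like this into a code node in place of the global IQR and a festival-season spike only gets judged against the last 30 days instead of the whole year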