Built a CTO monitoring cockpit with n8n + Supabase + GitHub Pages — all free, sharing the workflows

Been using n8n for about a year and finally put together something worth sharing. I built a production monitoring dashboard for my solo SaaS that runs entirely on n8n (self-hosted on Render) + Supabase + GitHub Pages. Total monthly cost: $0.

**The two workflows:**

**WF-01: Ingestion Pipeline**

Webhook → Extract Content → Filter noise → Claude API classification → Parse JSON → Supabase insert → Trigger cross-linking

Takes any content (Slack messages, GitHub events, manual notes) and automatically extracts tags, entities, a summary, and a narrative role. The Claude classification prompt is the interesting bit — it’s designed to extract structured metadata, not just summarize. Output is clean JSON that goes straight into Postgres (rough sketch of the output shape at the bottom of this post).

**WF-04: Timeline Engine**

Manual trigger → Fetch all entries → Temporal clustering → Narrative arc detection → Batch update

Scans the entire metadata table, parses `date_estimated` fields, and assigns each entry to a time period. Then does a narrative arc analysis across periods. Useful for understanding the shape of your data over time (a sketch of the bucketing step is also at the bottom).

**Setup:**

- n8n runs on Render free tier (Docker image `n8nio/n8n:latest`)
- UptimeRobot pings `/healthz` every 5 minutes to prevent sleep
- Supabase service role key in Render env vars
- Anthropic API key in n8n credentials

**Repo:** https://github.com/ProyectoAna/zero-cost-ops (production monitoring for solo founders, $0/month, all on free tiers). The workflow JSONs are in `/workflows/` — importable directly into n8n.

Would love feedback on the architecture, especially the temporal clustering logic in WF-04. Anyone else doing timeline analysis on their n8n data?
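For anyone who wants to see the shape before importing: a rough sketch of the JSON that WF-01's classification step produces, plus the parse that sits between the Claude node and the Supabase insert. Field names follow what I describe above; the authoritative version is the prompt in `/workflows/`, so treat this as illustrative.

```ts
// Rough shape of what the Claude classification step returns for one entry.
// Field names follow the post (tags, entities, summary, narrative_role,
// date_estimated, date_is_approximate); the real schema lives in the
// WF-01 prompt, so treat this as a sketch.
interface ClassifiedEntry {
  summary: string;                  // short summary of the source content
  tags: string[];                   // topical tags extracted by Claude
  entities: string[];               // people, repos, services mentioned
  narrative_role: "anchor" | "echo" | "tension" | "resolution" | "context" | "wildcard";
  date_estimated: string;           // normalized string, e.g. "~2024-Q2" or "2023/2025"
  date_is_approximate: boolean;
}

// Roughly what the "Parse JSON" node does between the Claude call and the
// Supabase insert: pull the JSON object out of the model response and check
// the required fields before the row hits Postgres.
function parseClassification(raw: string): ClassifiedEntry {
  // Claude occasionally wraps the JSON in prose or a code fence, so grab
  // everything between the first "{" and the last "}".
  const json = raw.slice(raw.indexOf("{"), raw.lastIndexOf("}") + 1);
  const parsed = JSON.parse(json) as ClassifiedEntry;
  if (!parsed.summary || !Array.isArray(parsed.tags) || !parsed.narrative_role) {
    throw new Error("classification output missing required fields");
  }
  return parsed;
}
```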
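And a sketch of the temporal clustering step in WF-04, assuming each entry has already been resolved to a midpoint date. The quarter granularity and minimum cluster size here are placeholders for illustration, not the exact values in the workflow.

```ts
// Sketch of WF-04's "Temporal clustering" step: bucket entries by quarter
// and keep only the buckets big enough to matter for the narrative arc pass.
// Quarter granularity and minSize are placeholder assumptions.
interface TimelineEntry {
  id: string;
  date_midpoint: Date;      // resolved upstream from date_estimated
  narrative_role: string;
}

function clusterByQuarter(entries: TimelineEntry[], minSize = 3): Map<string, TimelineEntry[]> {
  const buckets = new Map<string, TimelineEntry[]>();
  for (const e of entries) {
    const q = Math.floor(e.date_midpoint.getUTCMonth() / 3) + 1;
    const key = `${e.date_midpoint.getUTCFullYear()}-Q${q}`;
    if (!buckets.has(key)) buckets.set(key, []);
    buckets.get(key)!.push(e);
  }
  // Drop sparse periods so the arc analysis only sees meaningful clusters.
  for (const key of [...buckets.keys()]) {
    if (buckets.get(key)!.length < minSize) buckets.delete(key);
  }
  return buckets;
}
```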

nice work on the zero-cost stack, honestly Render + Supabase + GitHub Pages is underrated for solo SaaS ops. the Claude classification for normalizing mixed content types into one schema is smart too — Slack messages and GitHub events have very different shapes, so doing it at ingestion makes sense.

curious about WF-04, how are you handling `date_estimated` when it's approximate or a range? and does the narrative arc detection write back to the same Supabase table or build something separate?

Since the schema needs to handle both a specific approximation (e.g., “around Q2 2024”) and a full range (e.g., “2023–2025”), I’m storing it as a text field rather than a timestamp. Claude normalizes whatever the source says into a structured string like `~2024-Q2` or `2023/2025` at classification time. There’s also a companion `date_is_approximate` boolean flag on the row so downstream queries can filter or weight accordingly without parsing the string every time.
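Roughly, the resolution step in WF-04 turns those strings into a comparable range. Simplified sketch: it only understands the two shapes above, anything else stays unresolved and gets skipped by clustering.

```ts
// How WF-04 resolves a date_estimated string into something comparable.
// Only the two shapes mentioned above are handled ("~2024-Q2" and
// "2023/2025"); anything else returns null. Simplified sketch.
interface DateRange {
  start: Date;
  end: Date;
  midpoint: Date;
}

function resolveDateEstimated(value: string): DateRange | null {
  const quarter = value.match(/^~?(\d{4})-Q([1-4])$/);
  if (quarter) {
    const year = Number(quarter[1]);
    const firstMonth = (Number(quarter[2]) - 1) * 3;
    const start = new Date(Date.UTC(year, firstMonth, 1));
    const end = new Date(Date.UTC(year, firstMonth + 3, 0)); // last day of the quarter
    return { start, end, midpoint: new Date((start.getTime() + end.getTime()) / 2) };
  }
  const range = value.match(/^(\d{4})\/(\d{4})$/);
  if (range) {
    const start = new Date(Date.UTC(Number(range[1]), 0, 1));
    const end = new Date(Date.UTC(Number(range[2]), 11, 31));
    return { start, end, midpoint: new Date((start.getTime() + end.getTime()) / 2) };
  }
  return null; // unknown format: excluded from temporal clustering
}
```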

On narrative arc detection write-back:

It does write back to the same `neyen_entries` table — specifically updating the `narrative_role` field that Claude assigns during ingestion (WF-01). The six roles (anchor, echo, tension, resolution, context, wildcard) live inline on each entry. The arc itself emerges from the crosslink graph built by WF-02 (the cross-linking workflow that WF-01 triggers at the end of ingestion): when 5+ entries cluster with shared roles, WF-03 fires a Slack alert. So there’s no separate arc table — the graph edges in `crosslink_edges` are the arc detection substrate.
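To make the “graph edges as arc substrate” bit concrete, the check behind the WF-03 alert looks roughly like this: find connected components in `crosslink_edges`, then keep clusters of 5+ entries where a role repeats. Simplified sketch; treat the exact role condition and column names as illustrative.

```ts
// Roughly the check behind the WF-03 alert: connected components over
// crosslink_edges, filtered to clusters of 5+ entries where a
// narrative_role is shared by more than one entry. Column names beyond
// the two tables mentioned above are assumptions for illustration.
interface Edge {
  source_id: string;
  target_id: string;
}

function findArcCandidates(
  edges: Edge[],
  roles: Map<string, string>,   // entry id -> narrative_role from neyen_entries
  minSize = 5
): string[][] {
  // Union-find over the edge list to get connected components.
  const parent = new Map<string, string>();
  const find = (x: string): string => {
    if (!parent.has(x)) parent.set(x, x);
    const p = parent.get(x)!;
    if (p !== x) parent.set(x, find(p));
    return parent.get(x)!;
  };
  for (const { source_id, target_id } of edges) {
    parent.set(find(source_id), find(target_id));
  }

  // Group entry ids by component root.
  const components = new Map<string, string[]>();
  for (const id of [...parent.keys()]) {
    const root = find(id);
    if (!components.has(root)) components.set(root, []);
    components.get(root)!.push(id);
  }

  // Keep clusters that are big enough and where at least one role appears
  // on multiple entries; those are the ones that fire the Slack alert.
  return [...components.values()].filter(ids => {
    if (ids.length < minSize) return false;
    const counts = new Map<string, number>();
    for (const id of ids) {
      const role = roles.get(id);
      if (role) counts.set(role, (counts.get(role) ?? 0) + 1);
    }
    return [...counts.values()].some(n => n > 1);
  });
}
```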