Looking for n8n guidance: Supervisor + Agents workflow (Slack → GraphQL risk API → OpenAI → People Search)

Hi all! I’m building a multi-step lead-gen assistant in n8n and would love patterns/best practices from power users. I’m prototyping fully inside n8n first; Slack comes next. The design uses a Supervisor agent that orchestrates several sub-workflows (agents) for discovery, risk scans via a GraphQL API, lightweight OSINT, AE review, and SDR outreach.

What I’m trying to build (high level)

Goal: Turn a short AE conversation into a CSV + outreach assets that highlight risky suppliers their prospects care about.

Supervisor-orchestrated flow

  1. discovery() – quick chat to collect:

    • vertical, risk_focus, geography, persona

    • supplier_names (≤100, comma/newline-separated or a CSV `name` column)

    • exclude_domestic (yes/no)

  2. riskScan() – resolve names → records via a GraphQL risk-intelligence API; pull profile + risk JSON; transform to one risk row per supplier with OpenAI.

  3. deepResearch() – shallow web lookups (≤12–18 months) to add external risk signals (ownership, sanctions, adverse media).

  4. reviewWithAE() – Slack-style summary for approval (approved | revise).

  5. sdrOutreach() – find contacts (people search API) + draft first-touch copy.
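For the discovery intake, here's a minimal sketch of the normalize/dedupe/cap step in plain JavaScript (usable in an n8n Code node; the field names follow the payload above, and the helper name is my own):

```javascript
// Normalize supplier_names from the discovery chat: accept a
// comma/newline-separated string or an array (e.g. a parsed CSV
// `name` column), trim, dedupe case-insensitively, cap at 100.
function normalizeSuppliers(raw, cap = 100) {
  const names = Array.isArray(raw) ? raw : String(raw).split(/[,\n]/);
  const seen = new Set();
  const out = [];
  for (const n of names) {
    const name = String(n).trim();
    if (!name) continue;
    const key = name.toLowerCase();
    if (seen.has(key)) continue; // skip duplicates regardless of case
    seen.add(key);
    out.push(name);
    if (out.length >= cap) break; // enforce the ≤100 cap
  }
  return out;
}
```

In a Code node this would read something like `$json.supplier_names` and emit one item per supplier for the downstream batcher.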

Current stack

  • n8n (cloud/self-host)

  • OpenAI (gpt-4o-mini default; toggle to gpt-4o for high-fidelity runs)

  • GraphQL risk-intelligence API: the platform I actually work on. It focuses on supplier intelligence and holds extensive risk data on suppliers, so we're dogfooding our own data for lead generation. Since we're looking for companies with risky suppliers and already have supplier-risk data, we can reach out to a persona with, e.g., "we found a supplier you work with that's about to go bankrupt."

  • People search API (e.g., Exa) for outreach contacts

  • Slack (later): slash command + file uploads; Block Kit summaries

What’s working

  • Discovery chat (intake) returning a clean JSON payload

  • HTTP Request node to the GraphQL endpoint with batching (≤20 names per call)

  • Risk-row transformer (OpenAI) with strict JSON output

  • CSV builder and file return (HTTP for now; Slack upload later)
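For the batching piece, a sketch of how I chunk names into ≤20-per-call GraphQL requests (the query body and field names are hypothetical placeholders, not the real schema):

```javascript
// Split supplier names into batches of ≤20 for the GraphQL risk API.
function chunk(arr, size = 20) {
  const batches = [];
  for (let i = 0; i < arr.length; i += size) batches.push(arr.slice(i, i + size));
  return batches;
}

// Build one HTTP Request body per batch. The query/fields below are
// illustrative only -- substitute the actual risk API schema.
function buildRequests(names) {
  return chunk(names).map((batch) => ({
    query: `query Resolve($names: [String!]!) {
      suppliers(names: $names) { id name riskProfile { score flags } }
    }`,
    variables: { names: batch },
  }));
}
```

In n8n these request bodies can feed an HTTP Request node via Split In Batches, or be fired with controlled concurrency from a Code node.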

Where I’d love expert help

  1. Supervisor → tool calls

    • Best pattern to wire an AI Agent to Execute Workflow tools that expect { "name": "<tool>", "arguments": {...} }.

    • Guardrails to avoid duplicate or out-of-order calls; handling the ~4k system-prompt limit.

  2. Discovery ergonomics

    • Reliable Ask → Wait → Store loop (incl. CSV upload → parse → dedupe → cap ≤100).

  3. GraphQL risk API at scale

    • Chunking/concurrency settings for ~100 suppliers (20 per request).

    • Idempotent retries and partial-success handling.

  4. State & aggregation

    • Your preferred approach to aggregating rows across batches (global vars vs Merge vs Item Lists).

  5. Slack integration

    • Immediate ack + long work patterns (response_url vs follow-up DM).

    • “Wait for response” and threading for >10 suppliers.

    • Block Kit templates you like for lists + CTA buttons.

  6. Deep research

    • Recipes for ~30 s of OSINT per supplier without ballooning tokens/time.

    • Deduping sources, limiting to last 12–18 months.

  7. Outreach

    • Clean people-search integration; mapping persona + seniority in queries.

    • Generating a crisp 120-word email/LI note referencing the risk row.
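To make question 1 concrete, here's the shape of guardrail I have in mind: a dispatcher that validates the `{ "name": "<tool>", "arguments": {...} }` payload, rejects duplicates, and enforces the pipeline order from the flow above. The handler wiring is hypothetical (in n8n each handler would be an Execute Workflow tool); I'd love to hear if there's a more idiomatic AI Agent pattern:

```javascript
// Pipeline order from the Supervisor flow.
const ORDER = ["discovery", "riskScan", "deepResearch", "reviewWithAE", "sdrOutreach"];

// Wrap per-tool handlers with duplicate and out-of-order guards.
function makeDispatcher(handlers) {
  const done = new Set();
  return function dispatch(call) {
    const idx = ORDER.indexOf(call.name);
    if (idx === -1) return { ok: false, error: `unknown tool: ${call.name}` };
    if (done.has(call.name)) return { ok: false, error: `duplicate call: ${call.name}` };
    // Every earlier stage must have completed before this one runs.
    const missing = ORDER.slice(0, idx).filter((t) => !done.has(t));
    if (missing.length) return { ok: false, error: `out of order; missing: ${missing.join(", ")}` };
    const result = handlers[call.name](call.arguments || {});
    done.add(call.name);
    return { ok: true, result };
  };
}
```

Returning structured `{ ok, error }` objects (instead of throwing) lets the Supervisor see why a call was refused and self-correct.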