n8n to orchestrate multiple Python scripts?

Has anyone built an AI agent in n8n to orchestrate multiple Python scripts?

I’m working on a more advanced use case where a user can trigger a complete cold outreach sequence using a simple chat command. For example:

“Generate a list of leads from LinkedIn, personalize a cold email for each, and draft them if their company has over 50 employees.”

Here’s how the automation works:

  1. Agent receives the chat message (via Slack/Telegram).
  2. AI agent analyzes the intent and orchestrates the following:
    • Run lead_scraper.py → gathers leads from LinkedIn (via API or scraping)
    • Evaluate each lead using lead_filter.py → filters based on criteria (e.g., company size > 50)
    • Use email_generator.py → creates personalized cold emails using an LLM
    • Use email_drafter.py → saves emails as drafts in your Gmail or Outlook account
  3. Log actions and results to Google Sheets or a CRM
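As a sketch of the filtering step above (the field names are assumptions — adapt them to whatever lead_scraper.py actually returns), the logic inside something like lead_filter.py could be as simple as:

```python
# Hypothetical sketch of the lead_filter.py step: keep only leads whose
# company has more than 50 employees. Field names are assumptions.
MIN_EMPLOYEES = 50

def filter_leads(leads):
    """Return only the leads that pass the outreach criteria."""
    return [
        lead for lead in leads
        if lead.get("company_size", 0) > MIN_EMPLOYEES
    ]

leads = [
    {"name": "Ada", "company": "TinyCo", "company_size": 12},
    {"name": "Grace", "company": "BigCorp", "company_size": 230},
]
qualified = filter_leads(leads)  # keeps only the BigCorp lead
```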

All decision-making is handled by the AI agent (LangChain or OpenRouter model), and each script is exposed as a webhook or microservice.
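One minimal way to expose a script as a webhook for n8n's HTTP Request node is a stdlib-only HTTP wrapper like the sketch below (in practice you'd more likely reach for FastAPI or Flask; the endpoint, payload shape, and email wording are all assumptions):

```python
# Minimal stdlib webhook wrapper around one script's logic, so n8n's
# HTTP Request node can POST a JSON lead to it and get JSON back.
# The payload shape and email text are placeholder assumptions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def generate_email(lead):
    # Placeholder for the real email_generator.py logic (LLM call etc.).
    return f"Hi {lead['name']}, I noticed {lead['company']} is growing..."

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        lead = json.loads(self.rfile.read(length))
        body = json.dumps({"email": generate_email(lead)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve it: HTTPServer(("0.0.0.0", 8000), Handler).serve_forever()
```

From n8n, the HTTP Request node would then POST each lead to this endpoint and pass the returned email along the workflow.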

Has anyone implemented something similar? I'd love to hear how others are chaining together external Python scripts with n8n + AI for intelligent workflows. The main reason I ask is that over the years I've written Python scripts that can do pretty much all of the actions listed above, and it would be great to consolidate them and have an AI agent handle them. I'm a super noob, so sorry if this sounds naive!

Hey @sturdyoldman, welcome to the community!

In this scenario, my recommendation would be to go the MCP route: create an MCP server that exposes these Python scripts as tools, then connect it to your AI agent in n8n.
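Whatever MCP framework you use, each tool ultimately just has to invoke one of the existing scripts and return structured output. A stdlib-only sketch of that wrapper is below (JSON over stdin/stdout is an assumed convention — the real scripts may take CLI arguments instead):

```python
# Hypothetical tool wrapper: run an existing Python script as a subprocess,
# passing its input as JSON on stdin and parsing JSON from its stdout.
# An MCP server (or any agent framework) can expose this as a tool.
import json
import subprocess
import sys

def run_script_tool(script_path, payload):
    """Execute one of the existing scripts and return its parsed JSON result."""
    proc = subprocess.run(
        [sys.executable, script_path],
        input=json.dumps(payload),
        capture_output=True,
        text=True,
        check=True,  # raise if the script exits non-zero
    )
    return json.loads(proc.stdout)
```

Registered as an MCP tool (e.g. behind a tool decorator in your MCP framework of choice), each script then becomes something the n8n AI Agent can call through the MCP Client node without rewriting any of the script logic.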


I haven’t used LangChain in n8n, but the n8n AI Agent nodes can be problematic for complex examples like this.

What I have done is chain together the AI Agent nodes. Have one perform a data pre-processing role based on the inputs, maybe including some kind of classification. Then you can have a Switch node that looks at the classification and branches the workflow accordingly.
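To make that concrete (the category names and keyword rules here are invented for illustration — in the real workflow the first agent would do this with an LLM), the pre-processing step just needs to emit a field the Switch node can match on:

```python
# Hypothetical sketch of the pre-processing/classification step: tag the
# incoming request with a category, and an n8n Switch node then routes
# on that field. Keyword rules stand in for the first agent's LLM call.
def classify_request(message):
    """Return the payload an n8n Switch node would branch on."""
    text = message.lower()
    if "lead" in text or "linkedin" in text:
        category = "lead_generation"
    elif "email" in text or "draft" in text:
        category = "email_outreach"
    else:
        category = "other"
    return {"message": message, "category": category}

result = classify_request("Generate a list of leads from LinkedIn")
# A Switch node matching on result["category"] picks the branch to run.
```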

In your example it doesn’t seem like you need an agent to do a bunch of orchestration; the flow seems fairly linear.

Using several different AI agents might look more complex than a single LangChain agent, but I think it gives you a lot of flexibility in how you process the results at each step. Keeping those agent tasks clearly and narrowly focused helps the AI respond better.

In the second screenshot, the first agent reviews link summaries to choose which ones to fetch, and a later agent takes the results of all of those fetches and summarizes them. This follows a similar pattern to what you describe.



This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.