I’m a developer working with self-hosted n8n, and I’ve been following the AI assistant space around it. If you’re self-hosted, the official AI features aren’t available to you right now. There are third-party extensions, but they don’t always fit the needs of self-hosted setups - especially for teams with compliance requirements or air-gapped environments.
I know n8n announced in their recent livestream that they’ll bring their AI builder to self-hosted eventually, which is great. But they’re still figuring out the details (both technical and business-wise) with a decision expected this quarter. So for teams that need this now, or those with strict air-gapped requirements, there’s still a gap.
I’ve started building a browser extension with two main ideas:
First, privacy by design - it works with your own API keys (OpenAI, Anthropic, Gemini) or runs fully offline with Ollama. Workflow data stays local and never goes through third-party servers.
Second, I want to give users real control over how the AI helps. Not just “describe and generate,” but something closer to how tools like Cursor work with code - you decide what context the AI sees, whether it works step-by-step or generates full workflows, how much autonomy it gets. Basically a copilot with adjustable guardrails instead of a black box.
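To make the "fully offline" idea concrete, here's a minimal sketch of how the extension could talk to a locally running Ollama server (default endpoint `http://localhost:11434/api/generate`), so workflow data never leaves the machine. The model name and prompt wording are illustrative assumptions, not the actual integration:

```python
# Sketch: building a request for a local Ollama instance. Only the
# workflow snippet the user chose to share is embedded in the prompt;
# nothing is sent to a third-party server.

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_ollama_request(workflow_json: str, model: str = "llama3") -> dict:
    """Build the request body for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": (
            "You are an n8n workflow assistant. "
            "Suggest improvements for this workflow:\n" + workflow_json
        ),
        "stream": False,  # ask for a single JSON response instead of a stream
    }

# Actually sending it (requires Ollama running locally) would look like:
#   import json, urllib.request
#   req = urllib.request.Request(
#       OLLAMA_URL,
#       data=json.dumps(build_ollama_request('{"nodes": []}')).encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   reply = json.loads(urllib.request.urlopen(req).read())["response"]
```

The same request-builder pattern works for the hosted providers too; only the URL and auth header change, which is what makes the bring-your-own-key design cheap to support.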
I’m still early in development (nothing ready to try yet), but I’m asking now rather than building in isolation because I’d rather make something people actually want.
Questions for anyone who’s tried AI tools with n8n:
- What’s your biggest frustration with the current options?
- If you could control how an AI assistant works with your workflows, what would matter most?
- For those on air-gapped or compliance-restricted setups: what stops you from trusting the existing tools?
- Knowing n8n will eventually ship their own solution, what would make a third-party tool still worth using?
I’ll be checking this thread and responding to everything. If you want to see the direction I have in mind, there’s an outline at flowavate.com with a waitlist for updates.
Hi @flowsmith, welcome!
I don't have any specific frustrations, honestly. I like n8n and it's evolving constantly, so there's nothing to complain about. But I do have a suggestion: give more support to nodes and add something like a 'Make Your Custom Node' feature, so I could convert an HTTP Request node into my own personal node with my own configs and reuse it anywhere across my flows, anytime.
The thing that matters most for me is reliability in how TOOLS get called, plus broader tool support. And a few more config options, like easy local LLM support, would be great!
No, I haven't worked in that kind of air-gapped environment. But I would say things sometimes just randomly stop working until I manually restart or poke at them; this happens mostly with HTTP nodes.
Third-party tools will always be worth using: first, because of their open-source nature, and second, because the n8n team can't create a node for every service as soon as it launches, so third-party nodes will always be useful!
Thanks for the feedback!
The custom node idea is interesting - basically making it easier to package and reuse HTTP configurations, right?
On the local LLM support you mentioned - are you thinking about using models like Llama or Mistral locally? That’s actually one of the things I’m focused on with Ollama integration. What’s your use case for wanting local models over something like OpenAI?
Also, you mentioned reliability issues with HTTP nodes - does that happen when you’re building workflows manually, or have you tried any of the AI generation tools (like n8nChat or the others)?
@flowsmith Yes! That would be user-configured - basically an element users can create themselves, or an 'HTTP Node Builder'.
Yeah! On local models: it might seem unlikely someone would bother, but if I have a good machine and can run LM Studio with GPT-OSS, I should be able to configure local models easily. The option is there and I use it, but in n8n it's not as seamless as a normal user would want. To be honest, I use local models to process sensitive data and it works nicely - I run the GPT-OSS 120B model, and LM Studio itself makes that part very seamless.
I'd say the HTTP node works most of the time, but the 5% of cases when it doesn't are serious, since in production that could cause real issues. I've never seen it happen in production, but in testing it sometimes does. As for building n8n workflows, I really don't use those AI tools - I love n8n and building workflows for businesses, and it's easier to just do it yourself than to make an AI understand the context and then spend time fixing bugs and reconfiguring.
@Anshul_Namdev, makes sense - if you’re already fast at building workflows manually, AI assistance is probably more hassle than help. The sensitive data angle with local models is exactly what I’m thinking about though, so good to know that’s a real use case.
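For anyone else reading with the same sensitive-data use case: since LM Studio exposes an OpenAI-compatible API on localhost (port 1234 by default), a tool can reuse the standard chat-completion request shape and just point it at the local server. A minimal sketch - the model id, system prompt, and settings below are illustrative assumptions about one local setup:

```python
# Sketch: an OpenAI-style chat request aimed at a local LM Studio
# server. Because the API shape matches OpenAI's, the same builder
# works for cloud providers - only the base URL and API key change,
# and with a local base URL the sensitive text never leaves the machine.

LMSTUDIO_BASE = "http://localhost:1234/v1"  # LM Studio's default local endpoint

def build_chat_request(sensitive_text: str, model: str = "gpt-oss-120b") -> dict:
    """Build an OpenAI-compatible chat completion body for a local model."""
    return {
        "model": model,  # whatever id LM Studio shows for the loaded model
        "messages": [
            {"role": "system",
             "content": "You help analyze n8n workflow data. Keep answers short."},
            {"role": "user", "content": sensitive_text},
        ],
        "temperature": 0.2,  # low temperature for predictable processing
    }
```

POSTing that body to `LMSTUDIO_BASE + "/chat/completions"` with a local LM Studio instance running would return a normal chat-completion response.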
Thanks for the input!