Hi everyone!
I work with several German clients who want to use AI automation in n8n but strictly refuse to send customer data (names, emails, IBANs) to OpenAI due to GDPR concerns.
Existing solutions were either too expensive (Enterprise APIs) or required sending data to another cloud (which defeats the purpose).
So I spent the weekend building a Local Privacy Node for n8n. It runs in a Docker container on your own server (no internet access needed) and sanitizes PII before it hits the LLM.
Here's a demo: https://youtu.be/3Wkc6_lPpCA
How it works:

- You pass text: "Contact Hans Müller at [email protected] about the invoice."
- The node converts it locally: "Contact <PERSON_1> at <EMAIL_1> about the invoice."
- OpenAI/Gemini processes the safe text.
- The node restores the real data in the response (it's reversible).
I’m looking for 10-15 people to beta test this and tell me if it fits your workflow. If you deal with strict compliance clients, this might be a lifesaver.
Let me know if you'd like to try the Docker image with this node.
WDYT? Is this something you’d use in production?
I’m definitely interested and would love to try it out!
How does the sanitisation actually work? Are you running a small local LLM inside the Docker container?
Thanks for the interest!
To answer your question: no, I’m not running a heavy generative LLM (like Llama or Mistral) inside the container. That would be way too slow and resource-intensive for most self-hosted n8n setups.
Instead, I’m using a hybrid engine. It combines transformer-based NER models (similar to spaCy/BERT) to detect context-dependent entities like names and organizations, plus deterministic rules and regex (via Microsoft Presidio) for structured data like IBANs or emails.
I chose this approach mainly for speed and reliability. It processes text in milliseconds on a standard CPU and doesn’t hallucinate like LLMs can. Plus, the core feature is reversibility — it generates a mapping token so you can reliably de-anonymize the data after getting the response from OpenAI, which is really hard to do with just a local LLM prompt.
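To sketch the deterministic side (a simplified illustration of the general technique, not the node's or Presidio's actual code): an IBAN recognizer can pair a regex with the ISO 13616 mod-97 checksum so that random uppercase strings don't get flagged.

```python
import re

# IBAN shape: 2-letter country code, 2 check digits, 11-30 alphanumerics.
IBAN_RE = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b")

def iban_checksum_ok(iban: str) -> bool:
    """ISO 13616 validation: move the first four characters to the end,
    map letters to numbers (A=10 ... Z=35), and check mod 97 == 1."""
    rearranged = iban[4:] + iban[:4]
    digits = "".join(str(int(ch, 36)) for ch in rearranged)
    return int(digits) % 97 == 1

def find_ibans(text: str) -> list[str]:
    """Return only regex matches that also pass the checksum."""
    return [m.group(0) for m in IBAN_RE.finditer(text)
            if iban_checksum_ok(m.group(0))]

print(find_ibans("Pay to DE89370400440532013000 by Friday."))
# ['DE89370400440532013000']
```

That's the point about reliability: the checksum filters false positives in a fully deterministic way, with no ML inference (and no hallucination risk) involved.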
I’m wrapping this up in a lightweight Docker container right now. Let me know if you want to be pinged when the beta image is up!
Hey Kirill, interesting approach with local processing. We built PromptLock, which does similar PII redaction but hosted — it includes IBAN detection and prompt injection protection in the same call. Happy to compare notes or see if there’s overlap for different use cases.
Hey, Matthew! I’ve created a landing page with information about this approach at securenode.app, and you can also try the Docker image by following the guide in the README of the Vankir/securenode repo on GitHub. Don’t hesitate to ask me any questions. The Docker image uses small NLP models by default, but you can pass the names of larger models via environment variables to get higher accuracy.
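Roughly, running it looks like this (the variable name and tag below are just examples — the README lists the exact ones):

```shell
# Example invocation; see the Vankir/securenode README for the
# actual image name, port, and environment variable names.
docker run -d --name securenode \
  -p 8080:8080 \
  -e NER_MODEL="larger-ner-model-name" \
  vankir/securenode:latest
```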
This is cool, I will definitely test this out.
I’d appreciate your feedback!
For those interested in testing: you can find more information and download the Docker image at https://securenode.app
Nice!… but only 4 languages at the moment?
What language are you interested in?
Well, at the moment let’s say Italian and Romanian.
Thank you for your feedback! I’ll check whether I can extend the list of supported languages.