The idea is:
We can enhance the Agent nodes to include a feature for anonymizing sensitive information before it is sent to an LLM (Large Language Model). This would provide built-in anonymization support, ensuring that personal or identifiable data such as names, phone numbers, and other identifiers is never exposed to the model.
My use case:
I work with personal documents that need processing via an AI agent in n8n. Currently, I have to rely on custom LangChain code and external tools like Presidio to anonymize sensitive information. Integrating this functionality directly into the Agent node would streamline my workflow and improve privacy for users who rely on public LLM services.
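To make the idea concrete, here is a minimal sketch of what such a pre-LLM anonymization step could look like. This is not the n8n Agent node API; the `anonymize` function and its regex rules are illustrative assumptions, and a real implementation would likely delegate entity detection to an NER-based tool such as Presidio, which catches names and other entities that regexes cannot:

```typescript
// Hypothetical sketch: regex-based anonymization applied to prompt text
// before it reaches the LLM. Rule names and patterns are illustrative only.
type Rule = { pattern: RegExp; placeholder: string };

const rules: Rule[] = [
  // Very rough patterns for demonstration; production code would use a
  // dedicated PII-detection library (e.g. Presidio) instead.
  { pattern: /[\w.+-]+@[\w-]+\.[\w.]+/g, placeholder: "<EMAIL>" },
  { pattern: /\+?\d[\d\s().-]{7,}\d/g, placeholder: "<PHONE>" },
];

function anonymize(text: string): string {
  // Apply each rule in order, replacing matches with placeholders.
  return rules.reduce((t, r) => t.replace(r.pattern, r.placeholder), text);
}

const prompt = "Call me at +1 555 123 4567 or mail jane@example.com";
console.log(anonymize(prompt));
// The placeholders (not the raw values) are what the LLM would see.
```

A built-in version of this inside the Agent node could expose the rule set as node options, so users choose which entity types to mask without writing code.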
I think it would be beneficial to add this because:
- It enhances data privacy by minimizing the risk of sensitive information leakage.
Any resources to support this?
- Microsoft Presidio: https://microsoft.github.io/presidio/
- LangChain Documentation: https://docs.langchain.com/
Are you willing to work on this?
Yes, I am willing to collaborate by developing and/or testing the feature. However, I may need assistance with the development process since I'm not a JS developer (I'm already trying to implement this feature, but the code will probably need some improvements).