Guardrails - Secure prompt handling

The idea is:

Replicate the guardrails feature recently demonstrated by OpenAI.

My use case:

I use n8n agents via webhook to drive application logic, and I want a standard layer of protection against nefarious prompt behaviour (e.g. prompt injection) before input reaches the agent.
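To illustrate the kind of protection I mean, here is a minimal sketch of an input guardrail that could sit between the webhook and the agent. The pattern list and function name are my own assumptions for illustration; a real guardrail (such as whatever OpenAI demonstrated, or an n8n-native node) would be far more sophisticated:

```python
import re

# Hypothetical patterns for illustration only; a production guardrail
# would use a proper classifier or moderation endpoint, not a regex list.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (the )?system prompt",
    r"you are now",
]

def screen_prompt(text: str) -> bool:
    """Return True if the incoming prompt looks safe to forward to the
    agent, False if it matches a known injection pattern."""
    lowered = text.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```

In an n8n workflow this check would run in a node between the Webhook trigger and the agent, rejecting or flagging suspicious requests before any application logic fires.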

I think it would be beneficial to add this because:

It would make it quicker to integrate n8n into custom apps securely, and would improve the end-user experience.

Any resources to support this?

OpenAI

Are you willing to work on this?

Yes