Help us test the new AI Evaluation feature 🤖

Hey community :wave::wave::wave:

We’re working on a new Evaluation feature for AI workflows.
As you know, AI workflows can be unpredictable, so we’re building a way to evaluate them properly and make sure they’re reliable and return the expected outputs, even when you tweak a prompt, switch models, or something changes under the hood.
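
To give a rough feel for what we mean (a minimal hypothetical sketch, not the feature’s actual API): run a workflow against a fixed set of test cases and flag any outputs that drift from what you expect.

```ts
// Hypothetical sketch: re-run a workflow on fixed test cases and report
// the ones whose output no longer matches the expected result.
type TestCase = { input: string; expected: string };

function evaluateWorkflow(
  runWorkflow: (input: string) => string,
  cases: TestCase[],
): TestCase[] {
  // Keep only the cases whose actual output drifted from the expected one.
  return cases.filter(({ input, expected }) => runWorkflow(input) !== expected);
}

// Re-run after tweaking a prompt or switching models; an empty array means
// the workflow still returns the expected outputs.
```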

We’re looking for users who have experience building AI workflows and are up for joining a 30-minute usability testing session to give us feedback.

If that sounds like you, drop me a DM. Would love to have you involved!

Thanks :pray:


Would love to be involved in this; I’ll drop you a DM.

I am interested; you can reach me via email: [email protected]

We’d love to join this event; please email us at [email protected]


I would love to join this event; send me a DM.

My email: [email protected]

Yes please. [email protected]

Yes :grinning::grinning: [email protected]

Dunno how to DM you, but I can help. [email protected]

I’m showing my age here because I can’t figure out a way to DM you, but I’d be very interested, as a human in the loop would make a huge difference to the reliability of AI workflows, as you’ve pointed out. Can this run on the self-hosted version (the test, that is)?

If so, you can reach me at: [email protected]

I’m in, [email protected]

Would love to be involved; we’ve built (and cursed at :slight_smile:) a lot of agentic workflows with a ‘manager’ agent.

[email protected]

I have a few workflows using AI nodes and am actively developing new workflows with AI.

I am not a developer, more of a hobbyist, but I would like to try it out and get involved. I’ve run n8n locally on my NVIDIA AGX Orin in the past.

I’m interested. Contact me via DM if needed.

hey @giulioandreini - count me in! :wink:

We have started to build an integration and would love to discuss: [email protected]

Hey Giulio, I’d love to help out with this. Count me in!

Yes, I am interested.