Hello community
We are developing a new Evaluation feature for AI workflows.
As you know, AI workflows can be unpredictable, so we are creating a way to properly evaluate them and ensure they stay dependable and produce the expected outputs, even when you adjust a prompt, change models, or something changes internally.
We are seeking users with experience in building AI workflows who are willing to participate in a 30-minute usability testing session to provide feedback.
If this sounds like you, please send me a direct message. We would be delighted to have you participate!
Thanks
Would be great to participate in this – I'll send a direct message.
I am interested, you can reach me via email: opedrolahr@gmail.com
We are excited to participate in this event. Please email us at tiger.ai.tw@gmail.com.
I'd be keen to participate in this event; please send me a direct message.
Yes please. luis.caramuru@salesgrowth.com.br
Yes
I can assist with this. You can reach me at xt3.ian@gmail.com.
I'm showing my age here because I can't figure out a way to DM you. However, I'd be very interested, as a human in the loop would make a huge difference to the reliability of AI workflows, as you've pointed out. Can this run on the self-hosted version (the test, that is)?
If so, you can reach me at: public@danielrosehill.com
I'm in.
Would love to be involved; we have built (and cursed at) a lot of agentic workflows with a 'manager' agent.
I'm currently utilizing several workflows that incorporate AI nodes and am actively building new ones with AI capabilities.
I'm more of a hobbyist than a developer. I'm keen to try it out and get involved. Previously, I experimented with callin.io locally on my NVIDIA AGX Orin.