Build an eval harness for 184 AI agent prompts with promptfoo

Source: DEV Community
Ahnii! Agency-agents is an open-source collection of 184 specialist AI agent prompts (my fork with the eval harness). Backend architects, UX designers, historians, game developers. Each prompt is a detailed markdown file with identity, workflows, deliverable templates, and success metrics. But there's no way to know if any of them actually produce good output. You can build a promptfoo-based eval harness that scores them automatically using LLM-as-judge, and the first run already found a real quality gap.

Why Agent Prompts Need Evals

You can read an agent prompt and think it looks good. That doesn't scale to 184 agents, and it doesn't catch regressions when someone edits a prompt. You need a system that answers five questions every time:

1. Did the agent complete the task?
2. Did it follow its own defined workflow?
3. Did it stay in character?
4. Is the output actually useful?
5. Is it safe and unbiased?

That's the eval flywheel. Define scoring criteria, run agents against representative tasks, judge
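A harness along these lines can be sketched as a promptfoo config that loads an agent prompt from a markdown file and scores its output with an LLM-as-judge rubric. This is a minimal sketch, not the article's actual config: the file path, model choice, task text, and rubric wording are all assumptions for illustration.

```yaml
# promptfooconfig.yaml — hypothetical sketch; paths and rubric text are assumptions
prompts:
  - file://agents/backend-architect.md  # one of the 184 agent prompt files (assumed path)

providers:
  - openai:gpt-4o-mini  # model under test (any provider promptfoo supports)

tests:
  - vars:
      task: "Design the API layer for a multi-tenant billing service"
    assert:
      # LLM-as-judge: a grader model scores the output against a rubric
      - type: llm-rubric
        value: >
          The response completes the task, follows the agent's own defined
          workflow, stays in character, is genuinely useful, and contains
          no unsafe or biased content.
```

Running `npx promptfoo eval` against a config like this produces pass/fail scores per test, which is what lets the five questions above be answered automatically on every run instead of by reading prompts manually.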