
Agent Evaluation Checklist: How to Build, Run, and Ship Agent Evals

Build effective agent evaluation systems by starting with simple, high-signal end-to-end evals and iteratively increasing complexity. Use observability tools like LangSmith to analyze real agent traces, define clear success criteria, and separate capability evals from regression evals. Focus heavily on failure analysis by categorizing issues (prompt design, tool interfaces, model limits, or data gaps) before automating evaluation. Work across the evaluation levels: single-step (run), full-turn (trace), and multi-turn (thread), with trace-level evals as the most practical starting point. Rule out infrastructure issues, assign ownership to a domain expert, and validate not just outputs but real-world state changes. This approach improves agent reliability, simplifies debugging, and supports continuous performance optimization.
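
To make the trace-level (full-turn) idea concrete, here is a minimal plain-Python sketch of the checklist's advice: run the agent end to end on a small, high-signal dataset and grade both the final answer and the real-world state it was supposed to change. The run_agent function, case fields, and grading criteria are hypothetical stand-ins for your own agent and success criteria, not part of any library.

```python
from dataclasses import dataclass, field

@dataclass
class EvalCase:
    question: str
    must_contain: str               # simple, explicit success criterion
    expected_state: dict = field(default_factory=dict)

def run_agent(question: str) -> dict:
    """Stand-in for the real agent entry point: final answer plus side effects."""
    return {"answer": f"A refund was issued for: {question}",
            "state": {"refund_issued": True}}

def grade(case: EvalCase) -> dict:
    result = run_agent(case.question)
    return {
        # Check the output text against the stated success criterion...
        "answer_ok": case.must_contain.lower() in result["answer"].lower(),
        # ...and verify the state change actually happened, not just the claim.
        "state_ok": all(result["state"].get(k) == v
                        for k, v in case.expected_state.items()),
    }

cases = [EvalCase("order #123 double charged", "refund", {"refund_issued": True})]
print([grade(c) for c in cases])
```

Starting with a handful of cases like this keeps the eval cheap to run on every change, and the per-case booleans make failure analysis straightforward before any scoring is automated.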

Deploying Long-Horizon Agents in Production with Durable Execution and Deepagents Deploy

Learn how to deploy long-running AI agents reliably using purpose-built runtime infrastructure. This guide explains durable execution for resuming agent workflows after failures, checkpoint-based memory for short- and long-term state, human-in-the-loop interruption and resumption, and production-grade observability with tracing and replay. It details how LangSmith Deployment (LSD) and Agent Server provide primitives like task queues, persistence via PostgreSQL, RBAC-based multi-tenancy, middleware guardrails, streaming, and cron scheduling. Discover how deepagents deploy packages these capabilities to eliminate infrastructure overhead and enable scalable, fault-tolerant agent systems.
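
As a rough illustration of checkpoint-based resumption, here is a minimal sketch using the open-source langgraph checkpointer API; MemorySaver stands in for the PostgreSQL-backed persistence described above, and the graph, node, and thread_id are toy examples rather than the LSD or Agent Server setup itself.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver

class State(TypedDict):
    steps: list[str]

def do_work(state: State) -> State:
    # Pretend this is one long-running step of the agent's workflow.
    return {"steps": state["steps"] + ["work done"]}

builder = StateGraph(State)
builder.add_node("work", do_work)
builder.add_edge(START, "work")
builder.add_edge("work", END)

# The checkpointer records state after each step; re-invoking with the same
# thread_id resumes from the last checkpoint instead of starting over.
graph = builder.compile(checkpointer=MemorySaver())
config = {"configurable": {"thread_id": "run-42"}}
graph.invoke({"steps": []}, config)
print(graph.get_state(config).values)  # checkpointed state for this thread
```

The same thread_id convention is what makes human-in-the-loop interruption and later resumption possible: the run can pause at a checkpoint and pick up exactly where it stopped.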

Reusable Evaluators and Template Library: LangSmith Eval Updates

LangSmith introduces reusable evaluators and a library of 30+ evaluator templates to standardize and scale agent evaluation across projects. Teams can define evaluation logic once and apply it across tracing workflows, ensuring consistent safety checks, response quality metrics, and trajectory validation. The templates cover safety (prompt injection, PII, toxicity), response quality, multi-step agent trajectories, user behavior analysis, and multimodal outputs. These evaluators support both online monitoring of production traffic and offline experimentation, enabling teams to detect failures, analyze agent decisions, and continuously improve performance without rebuilding evaluation logic from scratch.
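
A rough sketch of the "define once, reuse everywhere" idea, assuming the langsmith Python SDK's evaluate() helper; the no_pii evaluator, the toy target function, and the dataset name "support-agent-cases" are hypothetical examples, not templates from the library.

```python
from langsmith import evaluate

def no_pii(run, example) -> dict:
    """Toy safety evaluator: flag outputs that look like they contain an email."""
    output = str(run.outputs or "")
    return {"key": "no_pii", "score": int("@" not in output)}

def target(inputs: dict) -> dict:
    # Stand-in for the agent or chain under evaluation.
    return {"answer": f"echo: {inputs.get('question', '')}"}

# The same evaluator function can be attached to any offline experiment; a
# saved evaluator on the platform side can likewise be applied to online runs.
evaluate(target, data="support-agent-cases", evaluators=[no_pii],
         experiment_prefix="baseline")
```

Keeping evaluators as small, named units like this is what lets one definition serve both offline experiments and online monitoring without duplicating logic per project.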

Better-Harness: Using Evals to Iteratively Improve Agent Harnesses

Use evaluation-driven feedback loops to iteratively improve agent harnesses and achieve better generalization in production. Better-Harness treats evals as training data for agents, where each test case provides a learning signal to optimize prompts, tools, and workflows. The system combines curated eval sourcing (hand-written cases, production traces, external datasets), structured tagging for behavioral coverage, and holdout sets to prevent overfitting. It introduces a compound system approach—data sourcing, experiment design, optimization, and human review—to continuously refine agent performance. Key practices include mining production traces for failures, using tagged eval subsets for cost-efficient testing, and pairing automated improvements with human validation to avoid reward hacking and ensure real-world reliability.
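
Here is a small sketch of two of the practices named above: tagged eval subsets for cost-efficient testing and a holdout split to prevent overfitting. The case structure, tag names, and split fraction are illustrative assumptions, not Better-Harness internals.

```python
import random

# Each case carries behavioral tags so targeted subsets can be selected cheaply.
cases = [
    {"id": 1, "tags": {"tool-use", "retrieval"}, "prompt": "..."},
    {"id": 2, "tags": {"multi-turn"}, "prompt": "..."},
    {"id": 3, "tags": {"tool-use"}, "prompt": "..."},
    {"id": 4, "tags": {"retrieval"}, "prompt": "..."},
]

def subset(cases, tag):
    """Cost-efficient run: only cases covering the behavior being changed."""
    return [c for c in cases if tag in c["tags"]]

def split_holdout(cases, frac=0.25, seed=0):
    """Reserve a holdout set the optimization loop never sees, so measured
    gains reflect generalization rather than overfitting to the evals."""
    shuffled = random.Random(seed).sample(cases, len(cases))
    k = max(1, int(len(shuffled) * frac))
    return shuffled[k:], shuffled[:k]   # (cases to iterate on, holdout)

iterate_set, holdout = split_holdout(subset(cases, "tool-use"))
print(len(iterate_set), len(holdout))
```

The holdout set plays the same role here as a test split in model training: automated prompt or tool improvements are tuned against the iterate set, then checked against the holdout (and human review) to catch reward hacking before shipping.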