Better-Harness: Using Evals to Iteratively Improve Agent Harnesses

Use evaluation-driven feedback loops to iteratively improve agent harnesses and achieve better generalization in production. Better-Harness treats evals as training data for agents, where each test case provides a learning signal to optimize prompts, tools, and workflows. The system combines curated eval sourcing (hand-written cases, production traces, external datasets), structured tagging for behavioral coverage, and holdout sets to prevent overfitting. It introduces a compound system approach, spanning data sourcing, experiment design, optimization, and human review, to continuously refine agent performance. Key practices include mining production traces for failures, using tagged eval subsets for cost-efficient testing, and pairing automated improvements with human validation to avoid reward hacking and ensure real-world reliability.
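The tagging and holdout mechanics above can be sketched in a few lines. This is an illustrative toy, not the Better-Harness implementation: `EvalCase`, the tag names, and the split logic are all hypothetical.

```python
# Hypothetical sketch of tagged eval subsets + a holdout split.
# Nothing here is a real Better-Harness API.
import random
from dataclasses import dataclass, field

@dataclass
class EvalCase:
    prompt: str
    expected: str
    tags: set = field(default_factory=set)  # behavioral-coverage labels

def split_holdout(cases, holdout_frac=0.2, seed=0):
    """Reserve a holdout set so harness tweaks aren't overfit to dev cases."""
    rng = random.Random(seed)
    shuffled = cases[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - holdout_frac))
    return shuffled[:cut], shuffled[cut:]

def subset_by_tag(cases, tag):
    """Cost-efficient targeted runs: only re-test the behavior a change touches."""
    return [c for c in cases if tag in c.tags]

cases = [
    EvalCase("refund order #12", "escalate", {"billing"}),
    EvalCase("reset my password", "self-serve", {"auth"}),
    EvalCase("charge appeared twice", "escalate", {"billing"}),
    EvalCase("enable 2FA", "self-serve", {"auth"}),
    EvalCase("cancel subscription", "escalate", {"billing"}),
]
dev, holdout = split_holdout(cases)
billing_subset = subset_by_tag(dev, "billing")
print(len(dev), len(holdout))  # -> 4 1
```

The key design point is that the holdout set is never used to drive prompt or tool changes; it only measures whether improvements on the dev cases generalize.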

Deploying Long-Horizon Agents in Production with Durable Execution and Deepagents Deploy

Learn how to deploy long-running AI agents reliably using purpose-built runtime infrastructure. This guide explains durable execution for resuming agent workflows after failures, checkpoint-based memory for short- and long-term state, human-in-the-loop interruption and resumption, and production-grade observability with tracing and replay. It details how LangSmith Deployment (LSD) and Agent Server provide primitives like task queues, persistence via PostgreSQL, RBAC-based multi-tenancy, middleware guardrails, streaming, and cron scheduling. Discover how deepagents deploy packages these capabilities to eliminate infrastructure overhead and enable scalable, fault-tolerant agent systems.
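The core durable-execution idea, persisting a checkpoint after every step so a crashed run resumes where it left off rather than restarting, can be sketched with a file-backed store. This is a toy illustration, not the Agent Server or LSD API; the step list, state shape, and checkpoint path are all invented here (production systems use a database such as PostgreSQL, not a JSON file).

```python
# Toy sketch of durable execution via checkpointing. Hypothetical throughout;
# real deployments persist state in a task queue + PostgreSQL, not a local file.
import json
import pathlib

CKPT = pathlib.Path("agent_ckpt.json")

def save(state):
    CKPT.write_text(json.dumps(state))  # checkpoint after every completed step

def load():
    return json.loads(CKPT.read_text()) if CKPT.exists() else {"step": 0, "notes": []}

STEPS = [
    lambda s: s["notes"].append("fetched data"),
    lambda s: s["notes"].append("drafted plan"),
    lambda s: s["notes"].append("executed plan"),
]

def run(crash_at=None):
    state = load()
    for i in range(state["step"], len(STEPS)):  # skip already-completed steps
        if i == crash_at:
            raise RuntimeError("simulated failure")
        STEPS[i](state)
        state["step"] = i + 1
        save(state)  # durable: completed work is never repeated
    return state

CKPT.unlink(missing_ok=True)  # start from a clean slate for the demo
try:
    run(crash_at=2)   # crashes before the final step
except RuntimeError:
    pass
resumed = run()       # resumes from the checkpoint; only step 2 re-runs
print(resumed["step"], resumed["notes"])
```

The same checkpoint mechanism is what enables human-in-the-loop interruption: pausing is just a checkpoint with no crash, and resumption is the same `load`-and-continue path.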

Reusable Evaluators and Template Library: LangSmith Eval Updates

LangSmith introduces reusable evaluators and a library of 30+ evaluator templates to standardize and scale agent evaluation across projects. Teams can define evaluation logic once and apply it across tracing workflows, ensuring consistent safety checks, response quality metrics, and trajectory validation. The templates cover safety (prompt injection, PII, toxicity), response quality, multi-step agent trajectories, user behavior analysis, and multimodal outputs. These evaluators support both online monitoring of production traffic and offline experimentation, enabling teams to detect failures, analyze agent decisions, and continuously improve performance without rebuilding evaluation logic from scratch.
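The "define once, apply everywhere" pattern can be illustrated with an evaluator factory. This is a hedged sketch of the shape of such an evaluator, not the LangSmith SDK: the run dict, the result keys, and the keyword-based safety check are all assumptions made for the example.

```python
# Illustrative reusable-evaluator pattern; not the LangSmith SDK.
# Run/result shapes and the keyword safety check are hypothetical.
from typing import Callable

def make_keyword_safety_evaluator(blocked: list[str]) -> Callable[[dict], dict]:
    """Factory: define the scoring logic once, reuse it across projects."""
    def evaluate(run: dict) -> dict:
        text = run["output"].lower()
        hits = [w for w in blocked if w in text]
        return {"key": "safety", "score": 0 if hits else 1, "comment": ", ".join(hits)}
    return evaluate

safety = make_keyword_safety_evaluator(["ssn", "password"])

# The same evaluator object serves offline experiments and online monitoring.
offline_runs = [
    {"output": "Here is the summary you asked for."},
    {"output": "Your password is hunter2"},
]
scores = [safety(r)["score"] for r in offline_runs]
print(scores)  # -> [1, 0]
```

Because the evaluator is a plain callable with a stable result shape, the same object can score a batch experiment or be attached to live production traces without duplicating logic.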

Continual Learning in AI Agents Happens Across Model, Harness, and Context Layers

Learn how AI agents improve over time by optimizing three distinct layers: model weights, harness infrastructure, and external context/memory. The piece breaks down techniques like SFT and RL for model updates, harness optimization via trace analysis and systems like Meta-Harness, and dynamic context learning through persistent memory, tenant-level configuration, and runtime updates. It highlights practical strategies such as offline evaluation loops, agent trace logging, and 'dreaming' workflows to iteratively refine agent performance without retraining models, emphasizing scalable alternatives to weight updates.
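The context-layer loop described above, mining logged traces offline and folding the lessons into persistent memory that future runs read, can be sketched as follows. The trace format, memory store, and "dreaming" pass here are hypothetical illustrations, not an API from the article.

```python
# Hypothetical sketch of context-layer continual learning: no weight updates,
# just traces -> distilled lessons -> memory injected into future prompts.

traces = [
    {"task": "book flight", "outcome": "fail", "lesson": "always confirm dates"},
    {"task": "book flight", "outcome": "ok",   "lesson": None},
    {"task": "refund",      "outcome": "fail", "lesson": "check payment method first"},
]

def distill(traces, memory):
    """Offline 'dreaming' pass: mine failed traces for reusable guidance."""
    for t in traces:
        if t["outcome"] == "fail" and t["lesson"]:
            memory.setdefault(t["task"], set()).add(t["lesson"])
    return memory

def build_context(task, memory):
    """Runtime update: inject learned guidance into the next run's prompt."""
    lessons = sorted(memory.get(task, []))
    if not lessons:
        return ""
    return "Guidance:\n" + "\n".join(f"- {l}" for l in lessons)

memory = distill(traces, {})
print(build_context("book flight", memory))
```

Keying memory by task (or by tenant, in a multi-tenant deployment) keeps learned guidance scoped, so one customer's failure patterns never leak into another's context.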