TACO Framework Reduces Agentic Token Overhead ~10% on SWE-Bench

TACO is a self-evolving agent framework that compresses redundant observations in terminal agents, learning compression rules directly from agent trajectories. Agents draw on a global, continually updated rule pool to keep long-horizon context compact, cutting token overhead by roughly 10% on SWE-Bench. The paper is published on HuggingFace Papers.
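The mechanics can be pictured as a pool of learned rewrite rules applied to raw terminal output before it re-enters the agent's context. The sketch below is illustrative only: it assumes rules take the form of regex rewrites, and the rule names, patterns, and `compress_observation` helper are hypothetical, not TACO's actual implementation.

```python
import re

# Hypothetical rule pool: each entry pairs a regex over raw terminal output
# with a replacement that collapses redundant content. In a self-evolving
# setup, entries like these would be mined from past agent trajectories.
RULE_POOL = [
    # Collapse runs of package-collection log lines into one summary line.
    (re.compile(r"(?:^Collecting .+\n)+", re.MULTILINE),
     "[pip: dependencies collected]\n"),
    # Deduplicate consecutive identical lines (e.g. repeated warnings).
    (re.compile(r"^(.+)\n(?:\1\n)+", re.MULTILINE),
     r"\1  [repeated]\n"),
]

def compress_observation(text: str) -> str:
    """Apply every rule in the pool to a raw terminal observation."""
    for pattern, replacement in RULE_POOL:
        text = pattern.sub(replacement, text)
    return text

raw = ("Collecting numpy\nCollecting scipy\n"
       "WARNING: old pip\nWARNING: old pip\nDone.\n")
print(compress_observation(raw))
# → [pip: dependencies collected]
#   WARNING: old pip  [repeated]
#   Done.
```

The compressed observation preserves the information the agent needs (what happened, what repeated) while shedding tokens, which is where the reported ~10% overhead reduction would accrue over a long trajectory.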

Why It Matters

A 10% reduction in terminal-agent token overhead translates directly into cost and latency savings for any agentic coding or operations workflow. The self-evolving rule pool, which learns from production trajectories rather than requiring manual optimisation, also points toward a class of frameworks that improve automatically as they accumulate runtime data, compounding efficiency gains over time.