§ Topic
LLMs: How Large Language Models Power AI Agents
How large language models power AI agents — capabilities, limitations, and the technical foundations behind modern agent systems.
Large language models are the reasoning engine behind modern AI agents. Understanding how LLMs process context, generate outputs, and interact with tools is essential for building agents that work reliably.
These posts examine the technical foundations — how LLMs handle tool calling, what context window management means in practice, where model capabilities create real constraints, and how to work with (not against) the probabilistic nature of language model outputs.
Topics include tokenization and context window economics, temperature and sampling parameter effects on agent behavior, model selection criteria for different agent tasks, fine-tuning versus prompt engineering tradeoffs, and strategies for managing model version transitions without breaking production systems. If you’re making architectural decisions about which models to use and how to use them, these posts give you the technical grounding to choose well.
Past a 10-step trajectory with read-heavy tools, your agent is bottlenecked on tool latency, not LLM throughput. Speculative tool execution fires the predicted next call while the model is still emitting tokens, then promotes or discards on commit. PASTE (arxiv 2603.18897, Microsoft Research, March 2026) reports a 48.5% task-completion-time reduction. The companion UMD/LLNL paper (arxiv 2512.15834) layers client-side and engine-side speculation for an additional 6 to 21%. The technique reduces to two design decisions: a predictor and an eligibility policy. Get either wrong and you ship a billing incident or a data-safety incident.
Conventional prompt-injection defenses—input classifiers, spotlighting, fine-tuned refusal heads—plateau somewhere around 95% detection. In application security, 95% is a failing grade: the remaining 5% is a repeatable exploit. CaMeL (Capabilities for Machine Learning), introduced by Debenedetti et al. at DeepMind in arXiv:2503.18813, does not try to push that number higher. It changes the shape of the problem. Split the model in two—a Privileged LLM that never reads untrusted data, and a Quarantined LLM that reads data but cannot call tools—and enforce an information-flow policy on every value that crosses the boundary. What you lose is seven points of utility on AgentDojo (84% undefended to 77% defended). What you gain is a security property you can prove, not just measure.
The “long context” number in your model card is marketing. Past roughly 80K tokens, your agent’s tool-calling accuracy falls off a cliff—and padding the window with more tokens is the most expensive way to get worse results. The engineering answer is not a bigger window. It is a structured compaction operation, borrowed from research published in late 2025 and early 2026 under names like FoldGRPO, AgentFold, and ACON, and now shipping as a first-class API primitive in Anthropic’s context-management-2026-01-12 beta. The common label for the technique is context folding: replace a settled segment of the trajectory with a learned summary, evict the raw tokens, and keep executing against the compressed artifact.
The industry’s mental model of prompt injection is session-scoped: attacker crafts a malicious input, it executes in the current context, the session ends, and the attack ends with it. Defenses are designed around this model—input filtering, system prompt hardening, output validation. Every major framework has a story for it.
Memory-augmented agents break that model entirely.
When your agent writes to long-term memory—and most production agents running in 2026 do—a successful injection doesn’t need to execute immediately. It can plant a record, go dormant, and activate three sessions later when a semantically related query retrieves it. The attacker doesn’t need to be in the session at exploit time. The agent’s own reasoning, presented with a poisoned memory entry it trusts, does the rest. Forensics are brutal: the bad decision looks indistinguishable from the agent’s own learned behavior.
The most important skill in production AI in 2026 is not prompt engineering, model selection, or fine-tuning. It is context engineering — the discipline of designing everything the model sees at inference time. A weaker model with well-engineered context consistently outperforms a stronger model with bad context. Anthropic’s own evaluation showed that Claude Code with proper context engineering via MCP achieved an 80% quality improvement over the same model without it. LangChain’s 2026 State of Agent Engineering report confirms the pattern: context engineering is the top difficulty for 57% of organizations running agents in production.
Most explanations of AI agents stop at “the LLM decides which tool to use.” That’s the easy part. The hard part is everything around it: how you define tools so the model actually picks the right one, how you handle failures mid-chain, and how you keep a stateless model acting like it has memory.
This post breaks down the core loop that powers every agent we build at Replyant.
The Agent Loop
Every AI agent, regardless of framework, runs the same fundamental cycle: