Agentic Fever: Master Yourself Before Your Agents

Study Guide

Overview

Matteo Cassese, creator of the "Think with Matteo Cassese" channel, shares a candid reflection on what he calls "agentic fever" — a burnout-adjacent condition affecting AI practitioners who feel compelled to constantly build, test, and deploy AI agents. He traces the rapid evolution of AI from simple chatbots to autonomous agent orchestrators, then offers a two-part philosophy for navigating this moment: one side focused on human well-being, the other on effective machine instruction.

Key Concepts

What Is Agentic Fever?

Agentic fever is the feeling that if you are not actively coding, deploying, or orchestrating AI agents at every moment, you are wasting time and falling behind. Cassese describes it as reaching the "crispy" stage of burnout — a state driven by FOMO (fear of missing out) and the relentless pace of AI tool releases.

The AI Evolution Timeline

  • Chatbot Era: The initial phase where AI's artifact was conversation and the key innovation was context — unlike Siri, ChatGPT could maintain a coherent dialogue.
  • Coding Tool Era: With tools like Claude Code, Codex, Windsurf, and Lovable, the artifact became code. The innovation shifted from conversation to craft — users gained a "junior engineer" on demand.
  • Agent Era: The coding tool itself becomes the agent. Rather than just producing code, it runs as a persistent process, enabling self-improvement loops and compounding gains.
  • Agent Orchestration Era: The next step where one agent coordinates a team of sub-agents, each potentially using different models at different cost tiers, creating a full "workforce."

From Friend to CEO

Cassese observes that in roughly two and a half years, AI has evolved from a "friendly conversational system" into something that functions more like a CEO — capable of orchestrating work, delegating tasks, and driving compounding improvement. This rapid shift is precisely what triggers the human psychological response he calls agentic fever.

The Two-Part Agentic Philosophy

Part 1: The Human Side (More Important)

  • You are not missing out. If you are using Claude Code, agent mode in ChatGPT, or similar tools today, you are already on the bleeding edge.
  • AI needs you sharp. Anxious, sleep-deprived humans produce frantic, anxiety-driven agents that fail. Good inputs produce good outputs — systems thinking applies to humans too.
  • Maintain your routines. Sleep, health, calm — these are not luxuries but prerequisites for effective agent instruction.
  • You are the trainer. Humans remain the instructors, the quality-checkers, the vision-setters. That role requires clarity of mind.

Part 2: The Machine Side (More Detailed)

  • Language is now code. Simple Markdown SOPs (Standard Operating Procedures) are equivalent to code. Writing clear instructions is the new programming.
  • You are not here to build — you are here to instruct. The paradigm shift is from writing code to writing instructions for agents that write code.
  • Your data is the training. Connect your data (Google Sheets, file systems, APIs) to agents — this data becomes the agent's training ground.
  • Instruct agents to log, review, and improve. Build self-improvement routines where agents examine their own logs and suggest optimizations.
  • Tiered model architecture. Use expensive models for orchestration, mid-tier models for core work, and cheap models for brute-force tasks like web research.
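The tiered-model idea above can be sketched in a few lines: route each task to a model tier based on its role in the agent team. The model names, roles, and per-token costs below are illustrative assumptions, not values from the video.

```python
# Sketch of a tiered model router: expensive models orchestrate,
# mid-tier models do core work, cheap models handle brute-force tasks.
# All model names and prices are hypothetical placeholders.

TIERS = {
    "orchestration": {"model": "premium-model", "cost_per_1k_tokens": 0.015},
    "core_work":     {"model": "mid-tier-model", "cost_per_1k_tokens": 0.003},
    "research":      {"model": "cheap-model",    "cost_per_1k_tokens": 0.0005},
}

def route(task_role: str) -> str:
    """Pick a model for a task based on its role in the agent team."""
    # Unknown roles fall back to the cheapest tier by default.
    tier = TIERS.get(task_role, TIERS["research"])
    return tier["model"]

print(route("orchestration"))  # → premium-model (plans and delegates)
print(route("research"))       # → cheap-model (brute-force web research)
```

The design choice is simply a lookup table: the orchestrator agent pays the premium price once per plan, while high-volume sub-tasks run on the cheapest tier.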

Practical Example: The Outreach Machine

Cassese describes building an outreach system. His initial instinct was to think in terms of traditional software (CRM, web interface, sequential pipeline). Instead, he placed the agent at the center — the agent itself became the software surface, making API calls to email, LinkedIn tools, and Google Sheets as needed. The agent is not a tool that creates something; it is the something.
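A minimal sketch of this "agent at the center" pattern: instead of a fixed CRM pipeline, a single loop in which the agent inspects state and decides which tool call to make next. Every function and data record here is a stand-in (an assumption for illustration), not a real email, LinkedIn, or Google Sheets API.

```python
# Sketch: the agent as the software surface. It reads state, chooses
# the next tool call, and updates state — no pipeline, no web UI.
# read_sheet() and send_email() are hypothetical stubs.

def read_sheet():
    """Stand-in for pulling contacts from e.g. a Google Sheet."""
    return [{"name": "Ada", "contacted": False},
            {"name": "Grace", "contacted": True}]

def send_email(contact):
    """Stand-in for an outbound email API call."""
    return f"emailed {contact['name']}"

def agent_step(contacts):
    """One agent turn: scan the current state and emit tool calls
    for whatever work remains, rather than following a preset sequence."""
    actions = []
    for contact in contacts:
        if not contact["contacted"]:
            actions.append(send_email(contact))
            contact["contacted"] = True  # the agent mutates its own state
    return actions

print(agent_step(read_sheet()))  # → ['emailed Ada']
```

In a real system the decision logic inside agent_step would be the model itself, choosing among many tools; the point of the sketch is only the inversion of control — the agent is the center, and the tools hang off it.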

Key Takeaways

  1. Agentic fever is real and widespread — recognizing it is the first step to managing it.
  2. The AI evolution from chatbot to agent orchestrator happened in roughly two and a half years, and the pace is causing legitimate psychological strain.
  3. The human side of agentic philosophy matters more than the technical side: stay sharp, sleep well, maintain routines.
  4. On the machine side, shift your mindset from "build software" to "instruct agents" — language is the new code.
  5. Agents should self-improve through logging, review, and compounding iteration cycles.
  6. We may be in a "malaise" phase on the path to a renaissance — discomfort is part of the transition.

Discussion Questions

  1. Have you experienced something like "agentic fever"? What does it feel like in your own work or learning?
  2. Cassese says "AI needs us sharp." What practices or routines help you maintain clarity when working with rapidly evolving tools?
  3. How does the shift from "building software" to "instructing agents" change the skills that matter most for technology professionals?
  4. What are the risks of placing an AI agent "at the center" of a workflow (as in the outreach machine example)?
  5. Cassese frames the current moment as "malaise to renaissance." Do you agree with this framing? What would the renaissance look like?