In this video, Matt Maher shares a real-time journey that began with a simple question to Claude Cowork about email and ended with a fully automated, personally tailored email monitoring system. Along the way, he catches himself falling into the most common AI prompting trap (over-specifying implementation details) and demonstrates the mindset shift required to get the most out of AI tools. The video covers connecting Gmail to Cowork, building categorized email buckets, creating a dashboard with agent teams, and scheduling automated runs.
Matt catches himself dictating implementation details to Claude (file structures, deduplication logic, cron jobs) instead of describing the outcome he wants. The core lesson: tell AI what you need and why, not how to build it. This mirrors common advice but is difficult to follow because our existing mental models pull us toward familiar technical approaches.
The end result is not an app or a product. It is a piece of software that exists for exactly one person. It does not need to be perfect or complete. If it breaks, you go back to Cowork and fix it. If you want something different tomorrow, you change it. This concept of "personal software" built through AI collaboration represents a shift from consumer apps toward bespoke tools shaped by conversation.
Cowork's connector system lets Claude interact directly with external services; in this case, Gmail. Matt connects Gmail, then simply asks conversational questions: "What sponsorship emails did I get in the last three days?" This transforms email from a chore requiring manual filtering into a conversational interface.
Cowork has a native scheduling capability that requires just three things: a task name, a prompt, and a schedule. The prompt must be self-contained, since each run starts fresh. Matt sets his email monitor to run every four hours; on each run it reads the project rules, searches Gmail, deduplicates new messages against saved state, and updates the dashboard data automatically.
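The "deduplicate against state" step is a generic pattern worth making concrete. The sketch below is not Cowork's actual implementation (the video doesn't show one); it's a minimal illustration of the idea, assuming a hypothetical JSON state file (`seen_emails.json`) that persists message IDs between scheduled runs:

```python
import json
from pathlib import Path

# Hypothetical state file; the real name/location would be set in the project.
STATE_FILE = Path("seen_emails.json")

def load_seen_ids() -> set:
    """Load the set of email IDs already processed by previous runs."""
    if STATE_FILE.exists():
        return set(json.loads(STATE_FILE.read_text()))
    return set()

def dedupe_and_record(new_emails: list) -> list:
    """Keep only emails not seen before, then persist the updated state."""
    seen = load_seen_ids()
    fresh = [e for e in new_emails if e["id"] not in seen]
    seen.update(e["id"] for e in fresh)
    STATE_FILE.write_text(json.dumps(sorted(seen)))
    return fresh
```

Because the state lives in a file rather than in memory, each scheduled run can start from a blank slate and still avoid reprocessing the same messages.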
Matt uses agent teams within Cowork to build dashboard prototypes: a UX designer agent produces concepts, a developer agent builds them in parallel, and a reviewer agent evaluates and picks winners. This multi-round process (seven rounds in this case) demonstrates how AI can handle iterative design workflows that would normally require a full team.
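The shape of that designer/developer/reviewer loop can be sketched in a few lines. This is a toy illustration of the workflow pattern only; the three functions are stand-ins for Cowork agents, and their names, outputs, and the length-based "scoring" are invented for demonstration:

```python
# Toy stand-ins for the three agent roles. In Cowork these would be
# actual agents; these functions are purely illustrative.
def designer(round_num: int) -> list:
    """Propose several design concepts for this round."""
    return [f"concept-{round_num}-{i}" for i in range(3)]

def developer(concept: str) -> str:
    """Build a prototype from a concept."""
    return f"prototype({concept})"

def reviewer(candidates: list) -> str:
    """Pick a winner; placeholder scoring for illustration."""
    return max(candidates, key=len)

def run_rounds(rounds: int = 7) -> str:
    """Run the iterate-and-review loop, carrying the winner forward."""
    winner = ""
    for r in range(1, rounds + 1):
        prototypes = [developer(c) for c in designer(r)]
        # The previous winner competes against the new round's prototypes.
        winner = reviewer(prototypes + ([winner] if winner else []))
    return winner
```

The key structural point is that the previous round's winner is fed back into the next round's review, which is what makes the process converge rather than just generate seven independent batches.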