How Claude Cowork Does My Job While I Walk the Kids to School
Five layers — files, plugins, sub-agents, memory, and scheduled tasks — that turn Claude Cowork from a chatbot into the coworker running ops before you've had coffee.
If you're like most people, you've probably heard of Claude Cowork by now, but have only a vague idea of what it actually is, or how it can help you get more out of AI compared to “regular” Claude.
I run an operations consultancy supporting startups and lean teams. Automation, analytics, integrations — the kind of work that used to require a large team. Over the past few months, I've wired Claude Cowork into nearly every part of how I deliver client work. This post explains what that looks like and why the result is something totally different from chatting with an LLM.
Your files, its workspace
The first thing that sets Cowork apart is pretty simple: it runs locally on your machine and can work with your actual files.
Wait, what? Haven’t we been conditioned over the past ~20 years to want everything to be handled in the cloud? Yes, we have, but hear me out.
You select a folder, and Cowork can read, create, and edit files inside it. Documents, spreadsheets, slide decks, scripts, CSVs, markdown files, PDFs, whatever. When you ask it to build a financial model, it creates an actual working .xlsx file on your desktop that you can open in Excel. When you need a presentation, it produces a .pptx file with real slides, rather than a bulleted outline you have to rebuild yourself.
This changes how editing works, too. In a regular Claude session, if you paste in a long document and ask for a small change, the model has to regenerate the entire thing from what's in the conversation. Every edit is a full rewrite, which means drift, which in turn forces you to re-check sections that were already fine. In Cowork, the full document lives on your hard drive. The agent reads the specific section it needs to touch, makes a surgical replacement, and leaves the rest untouched. For anything longer than a page or two, the difference in speed and accuracy is immediately noticeable.
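To make the contrast concrete, here's a minimal sketch of what a surgical in-place edit looks like. This is my own illustration, not Cowork's actual implementation: the key property is that only the targeted span changes, so nothing else can drift.

```python
from pathlib import Path

def replace_section(path: str, old: str, new: str) -> None:
    """Swap one exact span inside a file, leaving everything else untouched.
    A rough stand-in for how an agent edits a document in place rather
    than regenerating it."""
    doc = Path(path).read_text()
    # Refuse ambiguous edits: the target must be uniquely identifiable.
    if doc.count(old) != 1:
        raise ValueError("target span must appear exactly once")
    Path(path).write_text(doc.replace(old, new, 1))
```

Compare that with a chat-based edit, where the whole 10-page document is re-emitted token by token and every untouched paragraph is an opportunity for drift.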
This sounds like a minor quality-of-life improvement until you experience the cumulative effect. I write and revise markdown documents, client deliverables, and configuration files in Cowork constantly. The feedback loop is much faster than with a standard chatbot: I describe the change, it happens in place, I review the file. And, if the folder you're working in happens to be configured as a Git repo, then you also have full version control and diff history. The whole experience is far better than asking the model to make a tiny edit and then waiting as it regenerates the entire 10-page document seemingly from scratch.
Plugins: skills and connectors, bundled
If you've used Claude before, you may have encountered connectors (which let Claude talk to external services like Notion, Gmail, or your calendar) and skills (sets of instructions that tell Claude how to do a specific job well). Both of those exist outside Cowork. What Cowork adds is the plugin: an installable package that bundles skills, connectors, and configuration into a coherent toolkit tailored to how you actually work.
Cowork has a plugin marketplace. My personal favourite is (naturally) the productivity plugin, which comes with a memory management system, task tracking that works across multiple tools, and skills for keeping everything in sync. Install it, and Cowork can manage your tasks in your issue tracker, update your project boards, and maintain a working understanding of your priorities — all without you having to explain how those systems fit together. A data analysis plugin bundles skills for SQL queries, statistical analysis, data visualisation, and dashboard building. A legal plugin handles contract review, NDA triage, and compliance checks.
The plugin marketplace is full of cool stuff
Importantly, all of these plugins are also designed to be customised. Every company has its own priorities for legal contract review, for example, and I've tailored the productivity plugin to my tool stack and preferences, like using Linear for tracking client delivery work. I've also written my own skills that sit alongside it for engagement scoping, CRM lookups, and data migrations. I've written more about skills separately, but the key point here is that the plugin is what ties skills and connectors together into something that's ready to go the moment you open a session.
A good example of how this works in practice: you might have your Notion workspace connected to Claude, but if you're like most companies, you have a lot of stuff in there, and, well, it's probably not super well-organised. A connector gives Claude access to a Notion workspace, but a skill tells Claude how your Notion is organised: where meeting notes live, where projects are tracked, which property to check to determine whether a task is done. Without that skill, Claude has to rediscover all of that context every time it touches the connector. It might enjoy repeating that thrill of discovery with each new prompt, but you probably don't. With the skill packaged with it, the connector becomes dramatically more useful, because Claude already knows where to look and what to look for.
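As a sketch, a skill like that can be little more than a short instructions file. The structure below is illustrative (the names and properties are invented, not any plugin's actual format), but it shows the kind of tribal knowledge a skill captures:

```markdown
# Skill: navigating our Notion workspace

- Meeting notes live in the "Meetings" database; filter by the `Client` relation.
- Projects are tracked in the "Projects" database, one page per engagement.
- A task counts as done when its `Status` property is "Complete" — not when
  it's archived, which just means someone tidied the board.
- Ignore the "Archive" teamspace unless explicitly asked to search it.
```

A few lines like these spare the agent a full workspace crawl on every prompt.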
Sub-agents: divide and conquer
Cowork can spawn sub-agents: lightweight, independent workers that handle a specific part of a task and report back. This is modelled after how Claude Code works for developers, and the practical benefit is the same: the main agent stays focused on your question while the sub-agent goes off and does the legwork.
Here's a real example. I'd spent a session making changes to how a data sync worked, which involved adjusting field mappings and updating how certain edge cases were handled. At the end, I asked Cowork whether any of the changes we'd just made would require updates to a related scheduled task that runs every morning. Rather than trying to hold both the conversation context and the scheduled task's configuration in its head at the same time, Cowork spawned a sub-agent to search across my workspace for the scheduled job, read its current configuration, and report back what it found. The main agent then compared that against the work we'd done and gave me a clear answer.
The result is that Cowork can handle questions that span multiple contexts — what we just did, what's already deployed, how they relate — without losing track of any of them. Each sub-agent gets its own focused context window, does its job, and returns the output. The main agent then synthesises the results. It's analogous to one person trying to hold everything in their head vs. a small team dividing the work up sensibly.
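A loose analogy in code (purely illustrative; Cowork's internals aren't public): the main routine hands a narrowly scoped question to a worker that sees only the workspace, not the conversation, then combines the worker's report with what it already knows.

```python
from concurrent.futures import ThreadPoolExecutor

def sub_agent(needle: str, workspace: dict[str, str]) -> str:
    """A 'sub-agent': gets its own focused context (just the workspace),
    does one job, and returns a compact report."""
    hits = [name for name, text in workspace.items() if needle in text]
    return f"found {needle!r} in: {', '.join(hits) or 'nothing'}"

def main_agent(question: str, workspace: dict[str, str]) -> str:
    # Delegate the legwork so the main context stays on the question itself.
    with ThreadPoolExecutor() as pool:
        report = pool.submit(sub_agent, "daily-sync", workspace).result()
    return f"{question} -> sub-agent report: {report}"
```

The main agent never loads the whole workspace into its own context; it only sees the sub-agent's summary, which is exactly the point.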
Spawning a sub-agent to figure out whether we need to update a recurring scheduled task
Memory: an index vs. a scratchpad
By now, persistent memory across chats is table stakes in AI. Both ChatGPT and Claude remember your preferences, names that keep coming up, and the broad strokes of what you're working on between sessions. That's useful for casual use, but it's also a single flat data store that the model curates in the background — good enough for basic preferences, but not purpose-built for running complex projects with moving parts, multiple stakeholders, and accumulating context that needs to stay internally consistent.
What Cowork does differently, and what the Productivity plugin supercharges, is a complex, file-based memory system that the agent can recursively update as a project progresses. It's not chat history, and it's not a single blob of preferences. It's structured files that the agent reads at the start of each session, organised by type: people, projects, references, and feedback. And crucially, it rewrites those files itself as new information arrives, so the memory doesn’t go stale.
This isn't a novel idea. Anthropic's own research on context engineering describes the underlying pattern as "just in time" retrieval: rather than stuffing everything into the prompt upfront, the agent maintains lightweight references and loads relevant context on demand. That mirrors how humans work; we don't memorise entire project histories, we build indexing systems and look things up when we need them. The memory system is exactly that kind of index.
The memory setup for a Cowork project where I’m making changes to our website
Setting up that index is the first thing you do for a new project. When you initialise the Productivity plugin, part of the setup is telling it where to look: the specific Slack channels where the project lives, the Notion projects and docs that hold the relevant background, the Linear teams or projects tracking the work, and any other sources that matter for that engagement. From there, whenever Cowork refreshes its memory files it already knows which slice of each tool is actually relevant and ignores the rest. You're giving it a map of where the signal lives for this project, so what it builds is scoped and precise rather than a generic dump of everything you've ever touched.
Once those sources are wired up, the memory files become incredibly useful. On one engagement, mine include a glossary of the client's internal acronyms (they use about fifteen abbreviations that mean nothing outside their organisation), individual files for each person I interact with (their role, communication preferences, what they're responsible for), a detailed project file tracking the current state of every active workstream, and technical reference files documenting API patterns, browser automation selectors, and data warehouse quirks I've discovered along the way.
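Mechanically, "memory organised by type" can be as simple as a folder of markdown files the agent reads at the start of each session. The layout below is how I think about it, not the plugin's literal schema:

```python
from pathlib import Path

# Assumed layout: memory/<type>/<name>.md — illustrative, not the
# plugin's actual on-disk schema.
MEMORY_TYPES = ("people", "projects", "references", "feedback")

def load_memory(root: str) -> dict[str, dict[str, str]]:
    """Read every memory file at session start, grouped by type."""
    memory: dict[str, dict[str, str]] = {}
    for kind in MEMORY_TYPES:
        folder = Path(root) / kind
        memory[kind] = (
            {f.stem: f.read_text() for f in sorted(folder.glob("*.md"))}
            if folder.is_dir() else {}
        )
    return memory
```

Because each entry is an ordinary file, the agent can rewrite just the one that went stale instead of re-curating a single monolithic blob.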
Scheduled tasks: stop doing the same stuff every day. Manually. Like an animal.
Most people don't know about this one yet: Cowork can run tasks on a schedule, without you being present (as long as the app is open and your machine is awake).
Right now, I'm in the middle of a system migration for a client, moving them off a legacy tool and into Notion. During the transition phase, both systems are live: the client is still creating and editing records in the old tool while the team starts working out of Notion. I have Cowork running a scheduled task every weekday morning that handles the sync end to end. It executes SQL queries against the legacy system to pull anything new or changed since the last run, then runs a set of Python scripts that upsert those records into the right Notion databases, resolving relations and normalising dates to the client's timezone as it goes. Once the load is done, a second round of Python scripts performs data validation and writes any discrepancies into a report for me to dig into once I’ve logged in.
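The shape of that sync pass, stripped to its bones, looks roughly like this. Everything here is a stand-in (the real job runs SQL against the legacy system and writes to Notion via its API, with relation resolution and timezone normalisation along the way), but the incremental-pull, upsert, validate structure is the actual pattern:

```python
from datetime import datetime, timezone

def run_daily_sync(legacy_rows: list[dict], notion_db: dict[str, dict],
                   last_run: datetime) -> list[str]:
    """One sync pass: pull rows changed since the last run, upsert them,
    then validate and return a list of discrepancy IDs for the report."""
    changed = [r for r in legacy_rows if r["updated_at"] > last_run]
    for row in changed:
        notion_db[row["id"]] = {**row}  # upsert keyed on the legacy ID
    # Validation: every legacy row should now exist on the Notion side.
    return [r["id"] for r in legacy_rows if r["id"] not in notion_db]
```

An empty return value is the "all clear" I read most mornings; anything else is the exact set of rows to investigate.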
The whole thing runs while I'm still walking the kids to school. Most mornings the report says "all clear" and I move on. When it doesn't, I've already got the exact rows to investigate, rather than spending the first hour of my day running the migration manually and then trying to figure out where something went wrong.
This is where Cowork really evolves from "assistant" to "operational infrastructure." It's doing work on a schedule, maintaining data integrity across systems, and surfacing exceptions for my attention.
What this adds up to
None of these layers is revolutionary on its own. File access is just a file system. Plugins are just packaged configuration. Memory is note-taking. Scheduled tasks are cron jobs. Individually, they're things any competent ops person could set up with enough time and the right tools.
The difference is that they're all orchestrated by an agent that can reason about them together. When the daily sync flags an anomaly, Cowork can check the memory for known edge cases, query the source system for context, and draft a message to the right person, all in one go. When I'm scoping a new engagement, the skill pulls my pricing framework, the memory recalls what I've learnt from previous projects, and the connector checks my calendar for availability.
I don't have a large team of analysts, engineers, or project managers. But what I do have is a persistent, context-aware operational layer that connects to my tools, encodes my methodology, and gets better every time I correct it.
If you're curious about setting up Cowork for your own workflows, or if you want help wiring it into your operations, get in touch!