Why I Switched from ChatGPT to Claude (and Haven't Looked Back)
Skills that invoke themselves, integrations that write back to your tools, and code that works on the first attempt. Why I haven't looked back since switching to Claude.
Sometime in the second half of 2025, I started noticing a pattern. The builders and technical people I respect—the ones actually shipping things—kept mentioning Claude. Not in a "you should try this" way, but in a "this is what I use now" way.
Around the same time, I was growing uneasy with OpenAI. The stream of stories about ChatGPT and vulnerable users—particularly the mental health cases—made me question whether I wanted to keep building my workflows around their platform. I wasn't looking to switch, exactly. But I was open to an alternative.
So I tried Claude. And after two years of ChatGPT as my daily driver, I haven't looked back.
To be clear, both tools are remarkable, and neither is without flaws. But for the work I do—writing code, building automations, analysing data—Claude has become my go-to. Here's why.
Claude is just better at writing code
By now, this is one of Anthropic’s best-known advantages: Claude is simply better at handling code.
Disclaimer: I’m not a real developer. But I do write some code, and I use AI constantly for technical work: writing SQL queries, building Zapier integrations in JavaScript, small Python code snippets for automations, and running statistical analyses in R. Claude Code and the regular Claude app with Opus 4.5 consistently produce cleaner, more accurate code than ChatGPT.
It's not only that the code works: Claude seems to understand context better. When I'm debugging a Zapier workflow or building a custom integration, Claude picks up on the nuances—the structure of the API I'm working with, the edge cases I need to handle, the tradeoffs between different approaches.
With ChatGPT, I often found myself in a loop: generate code, test it, debug it, regenerate, test again. With Claude, I'm more likely to get something workable on the first or second attempt.
Claude Skills >>> Custom GPTs
The Claude skills framework is genuinely superior to ChatGPT's custom GPT model.
With custom GPTs, you have to build each specialised assistant yourself and then decide upfront which one you want to talk to. Need help with a marketing task? Open the marketing GPT. Need code review? Switch to the coding GPT. It's constant context-switching, and it breaks flow.
Claude skills work differently. You can define detailed, complex skills with specific instructions and context. But here's the key: Claude decides when to invoke them based on the task at hand. You don't have to think about which skill to use. You just work, and Claude figures out what you need.
💡 What is a Claude skill? A Claude skill is a reusable set of instructions and context that Claude can automatically invoke based on the task at hand. Unlike custom GPTs, where you must manually switch between different assistants, skills work invisibly in the background—you just work naturally, and Claude draws on the relevant expertise when needed.
Under the hood, a skill is just a markdown file with metadata that tells Claude when to use it. The front matter includes things like a description of what the skill does, trigger conditions, and any constraints. Claude reads this metadata to decide which skill applies to your current task.
Skills can also be bundled with optional reference files that provide deeper context for specific tasks. For example, a writing skill might include a style guide, or a coding skill might include API documentation. These reference files give Claude the detailed knowledge it needs without cluttering the main instructions.
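To make that concrete, here's a rough sketch of what a skill file might look like. Treat the field names and contents as illustrative rather than Anthropic's exact schema—the `zapier-integration-helper` name and the referenced notes file are hypothetical examples of my own:

```markdown
---
name: zapier-integration-helper
description: >
  Use this skill when the user is building or debugging a custom
  Zapier integration in JavaScript.
---

# Zapier integration helper

When this skill applies:
1. Ask which trigger or action the user is building.
2. Consult references/zapier-api-notes.md for platform constraints.
3. Return complete, runnable JavaScript with error handling.
```

The description in the front matter is what Claude reads when deciding whether the skill is relevant to the current task; the body and any bundled reference files are only loaded once it is invoked.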
Building skills in Claude is also a far better experience than creating a Custom GPT: you just talk to Claude, explain what you want, and it builds the skill for you (using a custom skill for building skills, naturally).
I know Anthropic recently open-sourced the skills framework, which is a great sign. But I’ve yet to see OpenAI adopt it, and I suspect they won’t implement it as well as it works in Claude if and when they do.
My Claude skill for building custom Zapier integrations. You can steal it here.
Transparency matters a lot to me
Claude’s usage model initially caused some friction for me, but I’ve come to really appreciate it.
With Claude, I can check at any time where I am against my usage limits. If I'm running low, I can pay for extra usage or just wait until my meter resets. It's straightforward.
This took some getting used to, because ChatGPT never talked to me about usage limits at all. The idea of having to wait for my usage limit to reset, or pay extra, felt weird at first.
Claude’s Usage screen
But the more I’ve thought about it, the more I’ve come to realise that Anthropic's approach is healthier. AI inference has real costs, in terms of compute, energy, and the environment. Running these models isn't free, and pretending otherwise creates weird incentives. I'd rather know what I'm paying for and make informed decisions about how to use my resources.
With ChatGPT, the usage model has always felt opaque. You don't really know where you stand. The model router just quietly throttles you when you've used too much, and you're left guessing whether you've hit a limit or the service is just slow.
ChatGPT’s settings don’t even have a usage section, as far as I can tell
This might seem like a small thing, but it compounds over time. When you're using AI as a core part of your workflow, transparency and an honest presentation of the costs and tradeoffs builds trust.
Memory that makes sense
There's another transparency advantage: how Claude handles memory.
ChatGPT pioneered persistent memory across chats, and it was genuinely useful. But Claude's implementation, which came later, is more transparent and more useful.
With ChatGPT, you can see a long list of random, specific facts it's remembered about you. That's helpful, but you don't really have insight into how it's summarising all of that information and feeding it back into the context window for your chats. It's a black box.
Claude lets you see and edit memory summaries—both at the overall account level and at the project level. You can understand what it knows about you, how it's organising that information, and adjust it when it gets something wrong. Claude refreshes its memory summary once a day, lets you edit it at any time, and maintains project-specific memories in the same way.
ChatGPT shows you random facts it remembers about you; Claude shows you meaningful context
This matters more than it might seem. When an AI tool is making assumptions about your preferences, your workflow, or your domain knowledge, you want visibility into those assumptions. Claude gives you that. ChatGPT doesn't, really.
Integrations that write back to your tools

💡 What is MCP? Model Context Protocol is an open standard that lets AI assistants connect to external tools and data sources. Instead of each AI company building custom integrations with every app, MCP provides a common language that any tool can speak.
In practice, this means an AI assistant with MCP support can read from and write to apps like Linear, Notion, and GitHub without those apps needing to build bespoke integrations for each AI provider. Anthropic developed and open-sourced the protocol, which is why Claude's implementation is particularly mature.
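Under the hood, MCP messages are JSON-RPC 2.0. As a minimal sketch, here's the shape of a `tools/call` request a client sends when asking a server to run a tool—the `update_issue` tool name and its arguments are hypothetical, since real tool names come from the server's `tools/list` response:

```python
import json

# MCP is built on JSON-RPC 2.0. A client asks a server to run a tool
# by sending a "tools/call" request like this one. The tool name and
# arguments are hypothetical, for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "update_issue",  # hypothetical tool exposed by an issue tracker
        "arguments": {
            "issue_id": "ENG-123",
            "description": "Refined wording drafted with Claude",
        },
    },
}

# Serialise for the wire (stdio or HTTP transport carries this payload).
wire = json.dumps(request)
print(wire)
```

Whether a given integration supports write actions comes down to whether the provider exposes tools like this at all—which is exactly where the ChatGPT and Claude experiences diverge.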
For tools I use constantly, like Linear and Notion, ChatGPT's connectors are usually read-only. I can ask ChatGPT to help me draft content or revise an issue, but then I have to copy and paste it back into the app myself. Like an animal.
Claude's MCP implementation allows write actions. I can paste a link to a Linear issue or a Notion page, work with Claude to refine it, and Claude makes the edits directly in those apps when we're done. No copy-and-pasting content back and forth.
I can even just paste in a screenshot of a long chat I’m having with a stakeholder, and have Claude create a detailed Linear issue for the ask:
I know Anthropic developed the MCP standard, so it makes sense that Claude's implementation would be strong. But MCP has been open for a while now, and it feels like an intentional choice by OpenAI to limit write actions in many of their connectors. Maybe there are safety concerns, or maybe it's strategic positioning. Either way, the practical result is that Claude integrates more seamlessly with the tools I use every day.
The switch was easier than I expected
Once I decided to make the switch, moving from ChatGPT to Claude as my primary model took less than a week to feel natural. Personalised memory was my biggest concern: I’d been using ChatGPT heavily for over a year, for both work and personal projects, and I thought there would be a lot of friction in getting Claude to “know” me well enough to be helpful.
But as I described above, being able to see and tweak what Claude knew about me made the transition far easier than I expected. To get started, I simply asked ChatGPT to create a detailed summary of our past work “together”, lightly edited it, and fed it to Claude.
There's a learning curve with any tool, but Claude's interface is clean, the mobile app works well, and the skills framework means I can replicate most of the custom setups I had in ChatGPT without the mental overhead of managing multiple specialised assistants.
I still use ChatGPT occasionally. There are specific tasks where it excels, such as web search, and I'm not dogmatic about tools. But for my day-to-day work—building automations, writing code, analysing data—Claude has become the default.
If you're thinking about making the switch, my advice is simple: try it for a week. Set up a few skills that match your workflow and see how it feels. You might be surprised at how quickly it clicks.
Thinking about how AI tools could streamline your operations? At work.flowers, we help teams integrate AI and automation into their workflows in ways that actually stick. Book a discovery call and let's talk about what's possible.