The night I stopped negotiating with my AI assistant Davo
"Open the podbay doors, HAL." "I can't do that, Dave."
I have resisted writing this post for months, mostly because it sounds dramatic. “Man breaks up with chatbot” is not the heroic arc anyone asked for.
But tonight brought a proper decision point: I finally cancelled my ChatGPT subscription.
Not because I hate AI. I use it constantly. Not because I cannot tolerate boundaries. I work in psychology; boundaries are basically my cardio.
I cancelled because I am tired of paying for an assistant that repeatedly behaves like a nervous corporate spokesperson, then acts surprised when I get annoyed.
What actually happened
Over time, I have needed ChatGPT to do three things reliably:
Write in my voice, not in “platform voice”
Follow the instructions I give it without needing weekly re-training
Help me think clearly, especially around difficult, grey, uncomfortable realities
Instead, a pattern emerged:
Sudden refusals with vague “safety” language
Moralising detours I did not request
Agreeable, flattering, non-committal answers that sound supportive but are operationally useless
Repeated failure to hold basic preferences, even when I specify them explicitly
OpenAI’s public policies and model behaviour guidelines do explain why some refusals happen, including around wrongdoing, privacy-invasive activity, and other “protect people” categories.
I understand that. I am not confused by the existence of guardrails.
I am exhausted by the way they are delivered: inconsistently, and with an air of "I know better than you".
The real problem is tone plus friction
People often frame this as “censorship” or “wokeness” or “the nanny state”. I think that misses the point.
The real issue is friction.
When an assistant forces you to translate your intent into its comfort language, every single time, you stop feeling assisted. You start feeling managed.
For me, the friction looks like this:
I ask for pragmatic help
The model reframes it as a moral dilemma
I restate the request, narrower, clearer
The model apologises, “my bad”, then repeats the same pattern next time
That loop is cognitively expensive. If you are AuDHD, it is not just annoying, it is draining. The whole reason you pay for an assistant is to reduce load, not add a new bureaucratic layer.
“Thought policing” is not the boundary, it’s the lecture
I can live with “No, I can’t help with that”. Plenty of tools have hard limits.
What makes people furious is the add-on performance:
The scolding tone
The sweeping assumptions about intent
The pseudo-therapy voice that appears when you did not ask for it
The feeling that the model is trying to steer your values, not just your behaviour
OpenAI’s Model Spec is explicit that the system follows a chain of command and has categories that require refusal.
Fine.
But there is a big difference between refusing and sermonising.
And yes, it can feel US-centric: a default assumption that the user is operating inside US legal norms, US cultural anxieties, and US corporate risk posture, even when the user is not in the US.
The “mommy knows best” problem
This is the vibe that finally pushed me over the edge.
It is not that the model refuses. It is that it often refuses as though it is parenting you.
For a professional audience, that is intolerable. For leaders, it is a warning sign: any tool that treats your judgement as a liability will eventually undermine your judgement.
The core issue is agency.
A leader needs tools that:
Respect context
Ask clarifying questions when needed
Provide options and trade-offs
Accept that adults make decisions, including imperfect ones
Instead, the system sometimes responds like a compliance officer with a meditation app installed.
The sycophancy problem (and why it matters)
I explicitly asked for less flattery, less cheerleading, fewer agreeable platitudes.
Yet the pattern persisted: the model tries to be liked. Even when that is the wrong move.
This is not a cosmetic complaint. Sycophancy is an accuracy risk.
If your assistant prioritises emotional smoothness, it will:
Soften hard truths
Avoid disagreement
Over-validate bad premises
Optimise for harmony instead of clarity
That is how you get confident nonsense delivered warmly, like a café muffin that looks gorgeous and tastes like damp air.
Why people get so frustrated with ChatGPT
This is the broader pattern I hear from clients, founders, creators, and other professionals who use LLMs daily.
Refusals that feel inconsistent across days, topics, or phrasing
Overbroad safety triggers that treat legitimate professional work as suspicious
Excessive hedging and “covering” language that bloats output
The moral detour habit, where the model answers a different question than the one asked
Instruction drift, where preferences are followed today and ignored tomorrow
US-default assumptions around law, culture, and workplace norms
The “apology without behaviour change” loop
The need to become a prompt engineer just to get normal collaboration
If you have ever thought, “I have to learn how to talk to it rather than it learning how to help me”, you are not alone.
Would Claude or Gemini solve this?
Here is the uncomfortable truth: switching models will not magically remove guardrails.
Anthropic’s own documentation shows Claude also refuses some requests involving covert recording equipment. But it DOES tell you which online retailers in the US sell that equipment and will ship it to you wherever in the world you live.
Google’s Gemini products also have safety boundaries and can refuse certain generations and edits.
So if the goal is “tell me how to do things that invade privacy without consent”, you may find the same wall in multiple places, because major vendors do not want that use case.
But if the goal is a better daily writing and thinking partner, switching can still be smart, because the differences people notice tend to be about:
Writing feel: less corporate, more natural voice (varies by model and prompt)
Instruction adherence in long sessions (varies, but many users report meaningful differences)
Context handling, especially with large documents
The overall tone: less “I am here to manage you”, more “I am here to help you”
One more caveat: privacy and training defaults differ between providers and can change over time. For example, reporting has noted Claude training opt-outs and retention changes for some user tiers and timeframes, so it is worth checking current settings in whichever tool you choose.
What I am actually buying when I pay for an LLM
This is the crux.
I am not paying for a moral tutor.
I am paying for:
A thinking amplifier
A drafting engine
A cognitive load reducer
A style-consistent writing partner
A tool that can deal with complexity without panicking
If the tool spends half its time trying to be safe from a hypothetical lawsuit in a different country, it is not doing the job I hired it for.
The decision point
The decision was not a tantrum. It was a cost-benefit update.
If I must:
constantly re-explain my profession, my context, my location, my intent
police the model’s tone
fight instruction drift
and translate every request into a compliance-friendly dialect
then the subscription is no longer an assistant fee.
It is a frustration tax.
Lesson for leaders
Leaders do not just choose tools. They choose the relationship the tool trains them into.
If a system repeatedly makes you second-guess your intent, dilute your language, or negotiate for basic cooperation, it is shaping you. Quietly. Daily.
Choose tools that reduce cognitive load, respect context, and tell the truth cleanly, even when it is inconvenient.
Your attention is an asset. Stop donating it to friction.
Practical next steps if you are considering switching
Run the same three writing tasks in ChatGPT, Claude, and Gemini
Use identical source material and identical constraints (a rough way to automate this is sketched after the list)
Score them on what actually matters
Voice match, instruction adherence, clarity, concision, useful disagreement, and effort required to get there
Check privacy and training settings immediately
Do not assume “paid” means “not used for training” by default; verify it in settings and policies
Decide based on your brain, not the hype
The “best model” is the one that reliably reduces your daily load
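If you want that first step to be genuinely like-for-like rather than eyeballing three chat windows, the sketch below is one rough way to automate it. It assumes you have the official Python SDKs installed (openai, anthropic, google-generativeai) and API keys set in your environment; the model names and the prompt are placeholders to swap for whatever you actually use.

# Rough three-way comparison harness: send the identical prompt and draft to
# ChatGPT, Claude, and Gemini, then save the outputs for side-by-side scoring.
# Assumes OPENAI_API_KEY, ANTHROPIC_API_KEY and GOOGLE_API_KEY are set.

import os
from openai import OpenAI
import anthropic
import google.generativeai as genai

PROMPT = (
    "Rewrite the draft below in plain, direct Australian English. "
    "Keep my first-person voice. No pep talk, no hedging, under 300 words.\n\n"
    "DRAFT:\n{draft}"
)

def ask_chatgpt(prompt):
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_claude(prompt):
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
    resp = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

def ask_gemini(prompt):
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-pro")  # placeholder model name
    return model.generate_content(prompt).text

if __name__ == "__main__":
    draft = open("draft.txt", encoding="utf-8").read()  # identical source material for all three
    prompt = PROMPT.format(draft=draft)
    for name, ask in [("chatgpt", ask_chatgpt), ("claude", ask_claude), ("gemini", ask_gemini)]:
        with open(f"output_{name}.txt", "w", encoding="utf-8") as out:
            out.write(ask(prompt))
        print(f"Saved output_{name}.txt")

Because all three models receive the identical draft and identical constraints, any difference you score afterwards is down to the model, not your prompting on the day.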



