By Tess Hilson-Greener
Let’s be honest: there’s a strange performance playing out in professional writing spaces right now. One side loudly insists:
“This was written by a human — no AI involved.”
The other side uses AI tools daily, quietly and carefully, hoping no one notices.
I see it in publishing, journalism, content strategy, even internal comms. People use tools like ChatGPT, Gemini, or Claude to organise ideas, experiment with tone, or beat deadline pressure, but they feel the need to hide it.
As an accredited journalist and someone who actively works across strategy and HR transformation, I understand why.
📕Writing is different.
📗It’s personal.
📘It’s identity-driven.
📙It’s how we show what we know.
So when AI enters the frame, it gets emotional fast.
But we have to ask:
Why Is AI Use in Writing Still Taboo?
We don’t shame graphic designers for using Canva. We don’t discredit course designers for using Articulate Rise. We don’t accuse someone of cheating because they built a slide deck in Beautiful.ai or used Grammarly to check flow.
So why are writers still getting called out for using ChatGPT to structure a blog or clarify a complex paragraph?
Let’s explore where this comes from and what it’s doing to our workplace cultures.
Real Examples of AI Shaming
1. LinkedIn Purity Posts
“This was written entirely by me. No ChatGPT. Just original thought.”
These statements often appear unsolicited. There’s no accusation. No request for transparency.
So why include them?
It’s a quiet dig, suggesting that others must be “cheating” if they aren’t declaring their purity.
2. Screenshot Policing
A hiring manager in a writing-heavy role shares a post:
“We’re using AI detection tools now. You’d be surprised how many cover letters aren’t written by the applicant.”
The assumption? Using AI makes a candidate less worthy.
Even if the writing is good. Even if they edited it deeply. Even if they wrote most of it and only used AI to get started.
3. Colleague Whisper Networks
Internal Slack message:
“Did you see that proposal? Feels very… ChatGPT, doesn’t it?”
Rather than focusing on the clarity of the idea or the strength of the argument, we critique the process someone may or may not have used.
🔥That’s not collaboration. That’s surveillance.
Why the Shame?
Here’s what’s really going on underneath the callouts and disclaimers:
1. Writing Is Identity-Linked
People feel their voice is their value.
So if AI has a hand in shaping it, even a small one, it feels like giving away part of their identity.
2. AI’s Contribution Is Invisible
Design tools show you layouts and changes. AI operates behind the scenes. That makes it easy to misjudge or imagine it’s doing more than it is.
3. There’s No Norm for Attribution
You can say, “I co-wrote this with a colleague.”
But “I co-wrote this with ChatGPT”? That still feels taboo, even when it’s true.
4. Fear of Being Discredited
Many fear that if they admit to using AI, people will assume:
They’re not smart
They’re not original
They didn’t do the real work
They’re an impostor
So they hide their process. Ironically, so do many of the people calling them out.
Why This Matters
Shame thrives in silence. And the more we pretend we’re not using AI, the more we reinforce the myth that real writing happens in isolation without tools, structure, or support. That’s a problem.
Because AI can do, in real time, what no writing coach or manager can:
Offer alternative phrasing
Catch bias or lack of inclusion
Suggest flow improvements
Help you start when you’re stuck
It’s not just a productivity tool. It’s a thinking partner — if used well.
✅ What We Should Be Saying Instead
Rather than performative disclaimers like “No AI used,” we should normalise statements like:
“This article was researched and developed with the support of smart tools. I wrote it, I shaped it, and it reflects my voice. And yes, AI helped refine the message.”
Or simply:
“Every word has a human fingerprint.”
Because that’s the truth.
AI doesn’t know your values. It doesn’t understand your audience. It doesn’t carry your intent.
You do.
We can uphold high standards and support thoughtful AI use.
It doesn’t have to be a binary.
Final Thought: Let’s Stop Policing. Let’s Start Leading.
In HR, leadership, and learning, we talk about:
Psychological safety
Transparency
Continuous development
But those values need to apply to how we create, not just what we produce.
We don’t need purity tests.
We need purpose.
❌ AI is not the enemy.
🤐 Shame and secrecy about using it? That might be.
Let’s stop pretending we’re not using AI. Let’s start showing how we’re using it well.