The last thing I want you to do with AI is have it write everything for you (or give it any of your personal data). Ideally, we shouldn’t be using generative AI at all. Who wants the content they consume to come from a word generator?

Don’t get me wrong. This isn’t a post that’s saying “ALL AI IS BAD!”. I’m a former software engineer and current technical writer who covers AI and machine learning models for a real estate tech company. A lot of my job is learning about different models and testing them out so I can write about them.

I’m not 100% anti-AI nor 100% pro-AI. I don’t think anyone is. We all use AI in our everyday lives. Your phone, Instagram, filing taxes, using a bank account, going to the doctor, etc. It’s all powered by AI, but in many cases, a much different kind of AI than generative AI.

That doesn’t mean that other types of AI are ok and it’s JUST generative AI that’s sus. That’s not it at all. It’s all sus. The best we can do is pay attention to what we’re using and how it’s using our data, and proceed with caution while we learn more about the tools we’re using (and how we can potentially create more ethical ones).

Anyway….I digress.

Let’s get to what this post is all about: Why generative AI isn’t a better writer than you.

What you’ll learn:

  • What the tech does behind the scenes before, during, and after you prompt it.

  • How ChatGPT is a straight-up liar sometimes.

  • Why generative AI isn’t a better writer than you.

  • How to know if using any sort of generative AI is right for you and what you want to get out of it.

Let’s get to it then!

What ChatGPT models really do behind the scenes

If you’re a writer who has a limited understanding of tools like ChatGPT or AI in general, it can be hard to grasp what’s really happening behind the scenes when you use them.

Generative AI tools like ChatGPT seem like a great way to help you work faster, get more done, and maybe make more money. But things aren’t always what they seem.

You may think ChatGPT is actually “thinking” as it helps you with your writing (or writes for you, which I hope you ARE NOT DOING), but it’s not. It lacks the biological foundations of thought, behavior, and emotion. It doesn’t know what it’s like to LIVE, and it doesn’t have a perspective that comes from lived human experience, all of which are required for good writing that actually connects with people.

Instead, it’s a machine learning model called a large language model (LLM) that generates text by predicting the next “token” (a unit of text, usually a word or piece of a word) based on the data it’s been trained on. Different models are trained on different data from different sources, and AI companies aren’t the most transparent about any of it.
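If you’re curious what “predicting the next token” actually looks like, here’s a rough Python sketch (purely illustrative, and it assumes you have the open-source `transformers` and `torch` packages installed; the prompt is just a made-up example). It asks the small, older GPT-2 model which tokens it thinks are most likely to come next:

```python
# A minimal sketch of next-token prediction using the small open-source GPT-2 model.
# Assumes the `transformers` and `torch` packages are installed; downloads weights on first run.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The best way to become a better writer is to"   # made-up example prompt
input_ids = tokenizer.encode(prompt, return_tensors="pt")  # text -> token IDs

with torch.no_grad():
    logits = model(input_ids).logits               # a score for every vocabulary token, at every position

probs = torch.softmax(logits[0, -1], dim=-1)       # probabilities for the NEXT token only

# Show the five tokens the model considers most likely to come next.
top = torch.topk(probs, 5)
for token_id, prob in zip(top.indices, top.values):
    print(f"{tokenizer.decode([int(token_id)])!r}: {prob.item():.3f}")
```

That’s the whole trick: pick a likely next token, tack it on, and repeat. At no point does the model check whether any of it is true.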

This data is also mostly stolen and unlicensed, with the excuse that it falls under the fair use copyright exception, something that Sam Altman, the CEO of OpenAI (the company behind ChatGPT), uses to justify the creation and evolution of ChatGPT and other LLMs. BUT the fact that using content generated by these tools without any modification can violate copyright law undercuts his claim. Rightfully so, he’s dealing with numerous lawsuits at the moment, including 51 cases for copyright infringement.


ChatGPT and other LLMs don’t fact-check.

It’s essentially generating words based on its predictions without human understanding, awareness, or intention. There’s a reason I say “human understanding,” though. It’s somewhat programmed to “understand” words, but not in the same way our brains do.

When machine learning models are fed data, they pick up patterns and context from that data, and then they apply those patterns to learn relationships between words, like sentence structures.
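To make that less abstract, here’s a deliberately silly toy version of “learning patterns from data” in Python (my own made-up example, nowhere near how real LLMs work under the hood, but the core idea is the same): count which word tends to follow which, then “predict” the next word from those counts.

```python
from collections import Counter, defaultdict

# A tiny made-up "training set". Real models train on trillions of tokens.
training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog down the street ."
)

# "Learning": count which word follows which word.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the training text."""
    return follows[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on" -- a learned pattern, not a checked fact
print(predict_next("the"))  # whichever word followed "the" most often in training
```

Notice that nothing in there knows what a cat is or whether any sentence is true. It only knows what tended to come next in the text it was given. LLMs do this with billions of parameters and far more context, but truth still isn’t part of the equation.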

This is one of my favorite short videos that explains more about this. It’s a bit more technical, but a great introduction to what these models are ACTUALLY doing (start at the 2:25 timestamp if you’re impatient like me). 👇

But they’re not fact-checking when they write those sentences. They’re trying to predict what word you want to see next based on the data they’ve been trained on and the patterns they’ve learned, not on what’s true, and they can end up spitting out plagiarized word salads rife with lies.

Yes, it actually LIES all the time. The tech industry calls this “hallucination”, which is what happens when an LLM generates patterns, details, and “facts” that don’t actually exist.

OpenAI actually admitted that their model hallucinates more than a third of the time. Roughly speaking, that means if you ask it three questions, there’s a good chance one of the answers is a lie. If you use anything by OpenAI, or any LLM for that matter, make sure you do the extra work of fact-checking every single thing it tells you.

I also talked a little bit about this in a past article I wrote for Salary Transparent Street: Will AI Take My Job?

So I shouldn’t use generative AI for my work…at all?

Every time I see the words “should” or “shouldn’t”, I immediately think of something a former therapist of mine told me years ago: “stop shoulding yourself”.

Say it out loud, and you’ll get it. (It means stop doubting yourself and make a decision that works best for you and isn’t based on some preconceived notion of what you should do.)

If you want to use generative AI for your work in any way, that’s your decision to make. But if you want it to write for you, I don’t know why you would want to give up your skills and expertise to an LLM that will use that data to eventually take those skills and expertise from you anyway.

It’s not like it’s going to do a better job than you.

For example, read the 2 statements below and try to guess which one was generated by AI. Leave your answer in the comments!

  1. Freelancers desperately need more pay transparency to help them determine their market rates, negotiate projects with clients, and get paid more of what they deserve, not what some random business owner thinks they should get paid.

  2. More pay transparency in freelance roles promotes fairness and trust by helping freelancers understand their market value, negotiate confidently, and avoid inequitable or exploitative compensation.

I don’t use LLMs to write anything (but have for research). I never want to get into the habit of relying on any of these tools too much. My usage of them is purely experimental because that’s my job.

I still use search engines more (with Google’s AI mode turned off because it generalizes too much and gives misleading information), especially for research. I mostly like to jump around between search engines, forums, social media, books, case studies, first-hand experience, interviews, and academic papers when I’m in deep research mode. No AI tool can replicate everything I need for that.

Real research is the messy, tedious shit we have to go through to get those authentic ah-hah moments that really pull a story together.

Where to go from here

The only AI tools I regularly use in my business are the ones built into the tools I already use on the admin side of things, like the automated bookkeeping in Wave Accounting. These are mostly workflow-oriented tools, and I’ll write a post soon about which ones I use and how I use them.

Until then, I love this list of best practices from The Authors Guild to follow when using any sort of AI for your writing work.

I won’t judge you for whatever tool you use and how you use it. Whatever you do, do so responsibly.

But I’ll leave you with this one last thought: Writing is thinking. If AI does all your writing, whose thoughts are those?

Writefully Yours,

Daniella 💜
