The Paranoid Writer’s Guide to AI – Part 1

‘Artificial Intelligence’ is an Oxymoron

One of the distinct advantages of being a creative and a tech head is that when something new and bright and shiny comes along, tech brain can dig in and calmly try to understand things, even when creative brain is busy screaming the apocalypse.

There is no doubt that the latest generation of natural language AI tools is impressive.

The uncanny ability of tools like ChatGPT to supply intelligent-sounding answers to everyday questions has certainly stolen more headlines and chewed up more hysterical column inches than a certain President.

Something important is getting lost in the noise though—there is nothing artificial about the intelligence behind these tools.

In this first part of a three-part series, I’d like to step back from the bleeding edge and the hysterical noise and reframe AI in a non-technical, very human context.

What is AI Exactly?

Clever as it may sound, an AI is still a dumb machine. Sophisticated and impressive, without a doubt, but still a machine that can only do what it was programmed to do.

The concept of a ‘thinking machine’ is a very old idea.

Before we invented digital computers, machines could only do what we physically designed them to do. Once we had the computer, however, the idea of creating machines that could vary their function based on their programming took hold.

We hadn’t created a machine that could think, but we did create one that could learn.

The problem was, computers could only talk with ones and zeros. Except for quantum computers (they exist, but aren’t very useful at this stage), this is still the case today.

Humans came up with this cool idea that it would be great if we could communicate with computers using everyday language. Natural Language Processing (NLP) was born.

Unfortunately, it wasn’t long before we realised that computers at the time were incapable of interpreting natural language in any useful way.

Fast forward a couple of decades and computers finally got powerful enough to do useful things with NLP. We also learned along the way that the secret to understanding human language was not just understanding the meaning of words, but how those words are used in context.

This was our next hurdle to overcome because computers suck at context.

Despite the uncanny ability of modern AIs to sound like they understand what we are asking them, they don’t have a clue what they are saying. They rely on sophisticated mathematical and predictive techniques to spit out the most likely response to a question.

These responses don’t come from intuition or experience on the part of the AI. They’re assembled from vast databases of word fragments called tokens. More on tokens soon.
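Tech brain aside: if you’d like to see tokens for yourself, here’s a minimal sketch in Python using OpenAI’s open-source tiktoken library (assuming you have Python and have installed tiktoken; the sentence and encoding name are just illustrative).

```python
# A peek at tokens, using OpenAI's open-source tiktoken library.
# Install it first with: pip install tiktoken
import tiktoken

# cl100k_base is the token vocabulary used by several GPT models.
enc = tiktoken.get_encoding("cl100k_base")

text = "An AI doesn't read words; it reads tokens."
token_ids = enc.encode(text)

# Each ID maps back to a fragment of text, often only part of a word.
for tid in token_ids:
    print(tid, repr(enc.decode([tid])))
```

Run it and you’ll see that common words tend to get a token to themselves, while rarer words get chopped into pieces.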

Until 2017, we relied on a kind of neural network (the recurrent neural network, for the tech-curious) to glean the meaning from a phrase or sentence so the AI could give a sane response more often than not. The technique proved quite useful and still powers many day-to-day tools, like handwriting recognition and speech recognition.

The problem is that with longer statements, it has a habit of losing track of the context.

We tweaked this technique for years, but the real breakthrough happened in 2017 when some brilliant humans at Google invented the transformer, an architecture built around a trick called multi-head attention.

Despite sounding like a Hasbro trademark, it provided a way to interpret much longer natural language statements. GPT (and others) were born.

GPT stands for Generative Pre-trained Transformer. The secret sauce is the ‘pre-trained’ bit.

AIs are not intelligent as we understand it. They are word prediction machines that rely on immense token databases to calculate the most likely response to any question. And I do mean immense: GPT-3 can store 175 billion different token relationships (called trainable parameters in tech lingo) built from a database of roughly 500 billion tokens. GPT-4 reportedly has around 1.8 trillion trainable parameters.

The people behind GPT built these databases by basically scraping the entire internet. The algorithm in GPT then trained itself (with the help of humans) using this vast database to build mega-lists of appropriate responses to just about anything we might ask it.

But there was a problem—GPT is not a human; it is a computer program running on massive computer networks. It doesn’t have a moral compass; it has no sense of wrong or right.

It doesn’t have the slightest clue what it just said to you.

This is because what the program outputs is just a string of bits of words from the AI’s database, mashed together in a way the program’s algorithm says has a high probability of forming the words you want to hear.

Because they are trained using all the good and bad things we put on the internet, AIs can say extremely hateful and hurtful things. AIs also have a habit of ‘hallucinating’, which is just a nice way of saying ‘totally making shit up’. This should come as no surprise when you understand one of the major sources of training data for GPT was Reddit…
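For the tech-curious, here’s a toy sketch of that ‘mashing’ in Python. It’s my own deliberately tiny illustration (a bigram word counter, nowhere near GPT’s scale or architecture): it ‘writes’ by always picking the statistically most likely next word, with zero understanding of what it’s saying.

```python
# A toy 'word prediction machine': no understanding, just counting.
# A deliberately tiny sketch of the idea, not how GPT works internally.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat "
    "the cat ate the fish "
    "the dog sat on the rug"
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

# 'Generate' text by always choosing the most likely next word.
word = "the"
output = [word]
for _ in range(5):
    word = follows[word].most_common(1)[0][0]
    output.append(word)

print(" ".join(output))  # prints: the cat sat on the cat
```

Note the output is grammatical-ish but meaningless; scale that counting up by a few hundred billion parameters and you get something that sounds a lot more convincing, but the principle is the same.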

To address this issue, the AI developers started round two of training. Round two involved hiring a bunch of contractors to teach the AI the most appropriate responses to thousands of questions.

To keep the process open and random, the AI developers drew these questions from the hundreds of thousands of questions actual humans had already asked the AI. They also programmed in a bunch of new rules to stop the AI from producing hateful and hurtful comments.

Which is roughly where we are today.

For the sake of clarity and brevity, I have skipped a lot of detail and taken a bit of creative licence at the risk of offending the tech pedants out there. The point of this intro is to highlight the very human origins of the latest natural language AIs, and their limitations.

Going right back to the beginning—it’s still just a dumb machine.

We have without a doubt created machines that are immensely powerful at learning, but they still can’t think for themselves.

Despite having the whole internet of information and a trillion learning opportunities, an AI still can’t outstrip your average four-year-old in knowing it’s not OK to lie.

The robot assembling your car is not wondering what it might get up to on the weekend while it welds.

Ultimately, it is human ingenuity (and hopefully human morality and ethics!) that will decide how far we go with AI.

AIs can already produce pretty decent copy for a range of writing tasks. There is no doubt natural language AIs will get to the point where they can produce a novel that is a reasonable facsimile of one written by a human.

But it will never be a human. An AI can mimic Shakespeare, but it will never be Shakespeare.

This is why, despite the challenges they pose, I am broadly positive about living in a world with AIs that often write better than us.

Humans haven’t been able to beat chess AIs for years, but it hasn’t killed the game of chess. In fact, most serious players now use chess AIs as analysis tools to improve their game.

This is where I see a great opportunity for writers—using these AIs to improve and to grow as writers.

In Part 2 of this series, I will explore some challenges and opportunities presented to writers by the latest natural language AIs.


NOTE: This article is part of a series I originally published with Books+Publishing in 2023. I’ve republished them with minor updates as my position on AI has not changed and the general discussion and advice is as relevant to writers in 2025 as it was in 2023.
