AI-generated image of a robot working as a copywriter.

AI in the language industry

Two years after the AI boom kicked off by the launch of ChatGPT, it’s safe to say that AI is here to stay. And while the initial euphoria may have dwindled a bit, AI is still all the rage. Companies and individuals slap it onto everything without ever asking whether including GenAI in a given workflow even makes sense – and the language industry is no exception. But what can it really do? And where are its limits?

When ChatGPT was launched at the end of 2022, it seemed that literally everybody was blown away by its capabilities and jumped on the bandwagon immediately. It didn’t take long, however, before companies started limiting their use of AI after cases of chatbots going rogue and causing a lot of bad press.

But while many bigger companies became more cautious about the way they implemented AI, especially in the form of chatbots, many other companies (especially in the language industry) and individuals started using it much more frequently – and even began replacing skilled workers with ChatGPT and its friends.

While this is already a very questionable practice from an environmental as well as an ethical point of view, there is also another major problem with Generative AI: it doesn’t actually KNOW stuff – everything it does is based on statistics and probabilities.

AI – just an LLM with genius marketing

What the world collectively calls “AI” these days is actually Generative AI (GenAI). And that is in most cases just a Large Language Model (LLM) with shiny new branding. LLMs have been around for a while, but calling them Artificial Intelligence makes people think they’re something new, a smart, thinking entity – when in reality it’s just a model that’s been trained on a sh*tload of text to understand and generate written language.

The underlying principle of this so-called understanding, however, is not actual comprehension of what’s being said, nor is it any kind of logic. An LLM simply crunches the numbers to work out what’s most likely to come next. And since it’s been trained on such a huge amount of text, it has a ton of reference material that makes those numbers pretty accurate.

Nevertheless, it’s important to keep in mind that anything AI tells you is solely based on probabilities – these can change if you give the model very specific instructions or ask it to fine-tune an answer, but in the end, all it does is calculate probabilities.
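To make that number-crunching a bit more tangible, here’s a minimal sketch in Python – a toy bigram model built on a made-up mini-corpus. It’s nothing like a real LLM in scale or architecture (the corpus and the code are purely illustrative assumptions), but it shows the same basic principle: look at what usually comes next and pick accordingly.

    import random
    from collections import Counter, defaultdict

    # Toy "training data" – a hypothetical mini-corpus standing in for the
    # sh*tload of text a real LLM is trained on.
    corpus = ("the horse has a tail the rider sits on the horse "
              "the rider holds the reins the horse has a bridle").split()

    # Count which word follows which one. This is a simple bigram model –
    # nowhere near a real LLM, but built on the same core idea:
    # probabilities of what comes next.
    follows = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        follows[current][nxt] += 1

    def next_word(word):
        # Pick the next word weighted by how often it followed `word` in the corpus.
        counts = follows[word]
        if not counts:  # dead end – fall back to a random word from the corpus
            return random.choice(corpus)
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights)[0]

    # "Generate" text: start with a word and keep asking what's likely to come next.
    word = "the"
    output = [word]
    for _ in range(10):
        word = next_word(word)
        output.append(word)
    print(" ".join(output))

Run it a few times and you’ll get snippets that sound vaguely plausible without the script “knowing” anything about horses, riders or reins – which is exactly how an LLM’s fluency can fool you into thinking there’s understanding behind it.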

And that’s where the next issue comes in: depending on what exactly you ask and how you frame your question, you can get ChatGPT, Claude and Co. to support literally any point you’re trying to make. I just tested this: I asked ChatGPT (free version) why we’re in a recession at the moment, which it duly explained – only to back up my claim two minutes later that we’re living in a time of strong economic growth.

Granted, paid versions might be better than that. But let’s be honest: they aren’t THAT MUCH better. Much like the image below, which might look fine at a quick glance, the texts AI creates seem okay – until you look at them in detail.

Why probabilities struggle with details

Since AI seems to be so good, people tend to grossly overestimate what it can really do – and do well. This is very noticeable in AI-generated images like this one: the woods look okay and even the horses seem fine as long as you don’t look at the first horse’s back legs, but as usual, the devil is in the details.

Some questions come up, like: why does the first horse have two tails? Why is the first rider sitting on the horse backwards (not to mention that his feet still point forward)? Why is the second rider holding reins when the horse’s bridle doesn’t have any? And what’s with the second rider’s face? Not your best work, FlatAI. Or maybe it is, considering that the picture was created based on nothing but probabilities.

The imaging AI doesn’t understand that a horse only has one tail, or that you usually sit on a horse facing its head. All it knows (i.e. what the numbers tell it) is that in the images of ridden horses it’s been trained on, there’s usually a tail close to some legs. And a person somehow sits on the horse – and with so many images showing people riding away from the camera, how could it NOT get confused about which way the rider is facing?

And while you may think that imagery is one thing and text is another, this example perfectly illustrates the problems AI-generated copy has. Let’s have a closer look at that, shall we?

Problems with AI-created copy

When it comes to text, what AI can do feels really impressive at first. You can have conversations with it, you can ask it to rewrite stuff in a different style, you can ask it to write code, and of course you can ask it to write your new blog post.

And in a matter of seconds, it does.

And when you skim the copy ChatGPT, Claude, Gemini and Co. create, it looks amazing at first. Lots of words, all intelligible, usually few or no grammar mistakes. Even when you start reading through it, everything will seem fine. At first. But once you start analysing the copy, you’ll notice a number of problems such as the following:

  • There is no logic: Just like in the image above, AI simply assembles a bunch of stuff that’s been said about your topic before. In the process, contradictory statements get mixed together and things are often repeated throughout the text.
  • The facts are all messed up: AI might say X in the first paragraph of your text and then state that Y is true in the third one.
  • There is no depth: With most texts AI generates, you can play a wonderful round of bullshit bingo – all the empty phrases and buzzwords are there, but if you’re looking for true meaning, you’re in the wrong place. Kinda like in politics.
  • There is no variation: Be it for word choice, sentence structure, paragraph structure or even style – everything AI-generated sounds the same since it uses the most likely words. And yes, even when you ask for a certain tone, the variation within that tone is somewhat limited.
  • The output is heavily biased: In the past, the world and its possibilities for the individual were much narrower than they are today – only women were nurses, only men were doctors, and a family always consisted of a man, a woman and two children. And while that’s no longer the reality, these ideas persist in many heads and in much of the content we find online. So for the AI, that’s the most likely concept, and it will never show anything else unless specifically asked to.

As you can see, there are some problems with slapping AI onto everything – particularly in the language industry. Not all that’s AI is gold. While there are many things it can do, there are many more that it can’t do (well).

I’m currently working on two more blog articles to explore what AI can help us with in the language industry and where its limits are: one of them will focus on AI in translation, the other on AI in copywriting. Stay tuned. 😉
