Recently, the Guardian used GPT-3, OpenAI's powerful new language generator, to write an essay from scratch. The impressive claim was that the article was produced with essentially no human intervention.

When the article was posted online many people did not believe that it had been written by a machine, and thought it was a hoax! 

GPT-3 was the innovative technology behind this journalistic experiment. GPT-3 is an autoregressive language model that uses deep learning to produce human-like text: it generates text one token at a time, predicting each new token from the ones that came before it. The latest version has 175 billion parameters (the values that a neural network tries to optimise during training), compared with GPT-2's already vast 1.5 billion. And with language models, size really does matter.
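To make the autoregressive idea concrete, here is a toy sketch in Python. It is emphatically not GPT-3 (which is a transformer with billions of learned parameters); the corpus, the `next_counts` table and the `generate` function are all invented for illustration. The "parameters" here are just bigram counts, but the generation loop works the same way: sample the next word given the previous one, append it, repeat.

```python
import random
from collections import defaultdict

# A tiny made-up corpus standing in for "text from the internet".
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": record which words follow which. In a real language model
# these statistics are captured by billions of learned weights.
next_counts = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev].append(nxt)

def generate(start, length=6, seed=0):
    """Autoregressive generation: sample one token at a time,
    each conditioned on the token before it."""
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(length):
        choices = next_counts.get(tokens[-1])
        if not choices:  # no known continuation, stop early
            break
        tokens.append(rng.choice(choices))
    return " ".join(tokens)

print(generate("the"))
```

The output is grammatical-looking but meaningless recombination of the training snippets, which is a crude miniature of the "cut and paste" behaviour described below.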

Exactly how GPT-3 arrives at its output is not clear. It appears to work by synthesising text from the internet, combining millions and millions of snippets of text and gluing them together in weird and wonderful ways.

The model does not stop at just writing online essays. It has been used to generate human-like short stories, songs, press releases, technical manuals, and even computer code.

Yet despite these advancements, GPT-3 is still prone to spewing hateful, sexist and racist language. The model requires constant fine-tuning to eliminate this kind of output.

Therefore, it’s no surprise that this latest innovation has re-ignited the debate about true machine intelligence. 

However, this is not a step towards intelligence and fluid machine thought; it is the result of excellent (but rigid) engineering.

The model does not "think" for itself or apply common sense to its output. The results are simply very complex, and clever, cut-and-paste syntheses of human-written compositions.

We will have to wait a while longer for intelligent, thinking machines.