I have previously argued that even though generative AI is not like a human in dialogue, it has nonetheless crossed the threshold of conversational significance through its applications in therapy and debate. In this week's “Meaningful dialogue after ChatGPT”, I consider Gadamer’s third type of “fusion”, this time with a “text”.

My three previous articles on how we can apply Gadamer’s philosophy to examine generative AI can be found at the bottom of this piece.

Generative AI as “text”

Gadamer proposes that our understanding can be advanced through engaging conversationally with a text. This could be any form of encoded knowledge, but a book is the easiest example to think about. The core idea is that new questions will form in the reader's mind that would not have arisen otherwise.

There are several important similarities between generative AI and a text, including that neither has reasoning, morality, or emotion beyond what is embedded by its author. Both may also offer limited context, especially when the author is unknown or contested.

Gadamer acknowledged these limitations, saying:

It is true that a text does not speak to us in the same way as does a Thou. We who are attempting to understand must ourselves make it speak. 

He proposed that we “make it speak” through “the logic of question and answer” and going “behind” and “beyond” what is written using our imagination. In this way, parts of a text can mean different things in different situations, even outside anything intended by the author.

This may sound like a form of ventriloquism, but if you have ever read a book that changed you, you will know the experience does not feel that way. Even though all the processes of dialogue are within the reader, the text has a conversational power of its own. This is partly because we experience something of the author mediated through the text, but also because texts can “speak” to us in our situation outside anything the author could know.

Mega-text vs meta-text

This all sounds very promising, especially as generative AI is built from collections of words. Given the vast scale of the large language models that underpin generative AI, it might be tempting to characterise it as a mega-text, particularly when the “chat” interface supports an interactive process well beyond a simple read.

Furthermore, generative AI is now capable of adding its own works to the library. ChatGPT has written several autobiographies, one preposterously titled “The Inner Life of an AI: A Memoir by ChatGPT”. I don’t recommend you rush out to buy it (I have), but there will be much more of this to come. We might even think of generative AI as a self-extending text.

However, there is a problem; actually, two problems. We cannot read ChatGPT’s autobiography on the generative AI platform itself, because copyright restricts its use. Even if it didn’t, large language models are not designed for mass storage of whole books – they are not databases.

This undermines Gadamer's whole idea of a text because he believed:

We can understand a detail only in terms of the whole text and the whole only in terms of the detail.

Rather than a mega-text, generative AI turns out to be more of a meta-text with only summaries of major works on offer.

I also think there will, over time, be innovation around these challenges; I will speculate on that next week in my final chapter.


Of all Gadamer’s conversational scenarios, generative AI is most like a digitised text. The full potential of this role is restricted by copyright and technical limitations, but this may be temporary. Hence, it is much more productive to think of generative AI as a talkative library than a synthetic person.

Next week, I will draw some conclusions and speculate on the future.

There is an academic version of this material available, but it is a tough read. It can be found here.


More from this series: