In the final chapter of “Meaningful dialogue after ChatGPT”, I will bring my thoughts together to conclude what generative AI currently is and what it could become.

This is the last in a series of five articles. You are welcome to read it on its own, but you may want to go back to the beginning for context; all the articles are linked at the bottom of this piece.

Generative AI and Gadamer’s three types of conversation

Despite the extraordinary technical advance that generative AI represents, Gadamer’s ideas show that it cannot function as a synthetic human in conversation, whatever clever mimicry it can employ to pass the Turing Test.

This is not because of problems with bias or hallucinations, which arguably make it more human-like, but because it lacks awareness of its context, has no reason, morality, or emotions, and cannot "fuse" with anyone or anything because it cannot update itself.

When generative AI is freed from anthropomorphic projections and reimagined as Gadamer's "other," it demonstrates a superior ability to manipulate content, coupled with the advantages of being virtual. Perhaps even more importantly, people are building meaningful relationships with generative AI in settings such as therapy and debate. Many more examples like this are likely to follow.

However, the most natural conversational role for generative AI is as a "text," because both share similar limitations and Gadamer proposes a way around them. Current legal and technical constraints deny direct access to whole works, preventing this approach from reaching its full potential, but it is still possible to look something up outside the platform in the old-fashioned way!


In its current development stage, generative AI could actually disincentivise human users from more meaningful engagement with people, art, or texts by providing easy access to vast but truncated, modulated, and passive knowledge. This makes generative AI the fast food of the knowledge world. Quick and easy answers are sometimes a good thing, but they cannot be the only item in the conversational diet.

The extent of this risk varies according to the setting. Businesses will be least troubled because they make pragmatic decisions and use a lot of proprietary data that is not restricted by copyright. Generative AI will quickly replace routine human conversation with customers, commoditised report writing, language-based analysis, and knowledge management once the early adopters have worked out what everyone else should do, likely within the next few months.

Generative AI poses the biggest risk to education, where realistic AI-generated assignments that bypass real learning are just the tip of the iceberg. Urgent adaptations are required to make sure people still learn to think, to question, and to engage with a broad range of sources.

Future developments

Some enhancements are easy to anticipate. Links to full works, perhaps provisioned through commercial services like those for music streaming, would greatly enhance generative AI as a “text.” However, the inclusion of whole works directly within an LLM is technically difficult and, therefore, unlikely.

It is also conceivable that statistical learning models used in traditional AI could be integrated into generative AI platforms to enhance reasoning and allow some form of dynamic update. Maybe this is what Meta and OpenAI have in mind for their next versions.

There may even be solutions to add traceability to training data, perhaps beginning with a simple disclosure of sources.

However, the development of moral competence or emotional capacity cannot be envisaged in the foreseeable future, let alone anything approximating to consciousness.

Nonetheless, the discernible increase in sophistication between GPT-3.5 and GPT-4o suggests that advancement will continue to be rapid.


Generative AI is already in extensive dialogue with millions of human users, but mostly about matters of low importance. Examples like therapy will move the technology into the conversation that Gadamer said defines us. As such, generative AI is starting to make its synthetic voice heard in the human conversations that matter most.

Thanks for getting this far. I hope I have managed to shed some new light on generative AI that is fair and balanced. There is no doubt in my mind that this is a transformational technology, but it does not need to be cast as either our saviour or our enemy. So far, the truth has always lain somewhere in between.

Good luck with your own journeys in generative AI. Let me know how they go.

There is an academic version of this material available, but it is a tough read. It can be found here.


More from this series: