Since OpenAI's ChatGPT burst onto the scene in November 2022, the barrage of hyped speculation has been relentless: overblown claims about its capabilities on one side, dystopian predictions about the threat to our existence on the other.

Unfortunately, it is these extremes that are driving important debates on regulation, ethics, investment, and application (although the last of these is still in its infancy), rather than a more realistic examination of the possibilities.

Instead of just rolling my eyes, I have decided to step back and evaluate generative AI using a perspective that emerged decades before ChatGPT but seems strangely apt. 

German philosopher Hans-Georg Gadamer (1900-2002) published his major work Truth and Method in 1960. He believed that all human understanding comes through dialogue; that all understandable reality is communicated through language; and that humanity itself is one big unfinished conversation. He took over 600 pages to say this, but these were his main points.

The relevance of these ideas to generative AI is obvious in a world of “chat” and “large language models”. 

Gadamer proposed that dialogue between people could result in a “fusion of horizons” in which they discover something new by combining their viewpoints. Furthermore, he suggested a similar fusion could happen when a person views a work of art or reads a substantial text.

I believe most people would agree they have changed their mind after a conversation with a friend, been moved by an artwork, or been challenged by a good book.

As a result, I want to use these ideas to test the current capabilities of generative AI by asking:

  1. Does interacting with generative AI resemble talking with another person?
  2. Can it create an emotional reaction in us like art?
  3. Can it ask us new questions to challenge us, like a good book?

These are more philosophically charged questions than they might first seem. And, if you stick with me, I think you will find some of the answers surprising. I also hope the conclusions will help us make saner, better-informed decisions about the future of AI.

There is an academic version of this material available. It is not for the faint-hearted but can be found here.

Otherwise, instalment two will be available next week (unless AI has taken over the world by then, in which case, I was wrong anyway).