Tim Harford (aka the Undercover Economist at the Financial Times) introduces the distinction between bullsh*t and lies in his recent article about AI.

He points out that a liar knows the truth and wants to deceive, whereas a bullsh*tter simply wants to persuade and is focused on sounding plausible. ChatGPT is generally highly plausible, even when it is wrong, as the example below shows.

So ... what this highlights is that we need to think of ChatGPT as a tool to be directed and used judiciously by human minds, not as an artificial colleague to which work can be delegated. There is a lot of excited chat about how ChatGPT will replace people; far more relevant, however, is considering the roles that will be created to use and adapt AI tools in people's work ... this is at the heart of how innovation actually plays out in businesses of all types.

And, as my colleague Francesco Barosi outlined in his recent article, ChatGPT's real value may not be as a product, but as a way to advance the conversation about AI.