ChatGPT—inaccurate and derivative

There’s been plenty of excited social media chatter about ChatGPT over the past week or so. And at first glance it’s impressive, certainly better than your average chatbot, using artificial intelligence with a focus on interactive dialogue and usability.

But it doesn’t take long to realise it has two big problems:

It’s frequently wrong

ChatGPT makes bold assertions that are misleading or just plain wrong, and doesn’t source or cite its claims or reasoning. At best it will apologise for a mistake—but it doesn’t learn from it. The next person is likely to get the same wrong or misleading answer, and may well walk away believing it’s the truth rather than a falsehood.

ChatGPT gets it wrong—and can’t explain why

It’s derivative

The system uses other people’s work without credit or citation. Even when you ask it to tell you a story, it lifts the plot from elsewhere without acknowledging that it is plagiarising other people’s work. Here’s an example from the Playground:

Lifting the plot of Alice in Wonderland without any attribution

“Filled with hype and bravado”

The term “artificial intelligence” has been problematic and misleading ever since it was used to sex up a paper proposal for a 1956 conference. In reality, “few fields have been more filled with hype and bravado than artificial intelligence”.

ChatGPT draws false assumptions and gives wrong but very confident answers. It relies on other people’s research and intellectual property without crediting the source. It’s the same problem that has beset other so-called “AI” which consumes other people’s data, research, or creativity and represents it as if it were its own.

But my worry is not ChatGPT in itself. Many of these are wider problems associated with the current state of AI and machine learning. My main concern is people’s reactions, proclaiming it as something profound and useful, as if giving unreliable answers and scraping content from elsewhere is somehow innovative. It’s what overpaid management consultants have been doing for years.

It’s certainly impressive, as I say, and entertaining and fun to interact with—until you encounter its misleading assertions. ChatGPT is about as reliable as trusting something Donald Trump says, or believing the first “fact” that a random internet search brings back. Which is disappointing, as originally AI aspired to the Turing Test, not the Trump Test.

The plagiarism of other people’s work without any credit, and ChatGPT’s frequently unjustified and unevidenced assumptions, are simply baffling. The wrong interpretations of source material, and the confident misinformation it churns out as a result, indicate a significant flaw somewhere in its design. It would be useful to see the source texts on which ChatGPT has been trained and which inform its responses. But that would pull back the curtain to reveal the reality behind it, and puncture the current widespread hype and popular acclaim.


3 responses to “ChatGPT—inaccurate and derivative”

  1. Is Output of ChatGPT Text a Derived Work?

    […] fact, New Tech Observations from the UK (ntouk) seems to have caught ChatGPT lifting the plot of Alice in Wonderland without any attribution.  There are legal issues here that seem to have been ignored in most of the hype, where even […]

  2. […] experiments last year with ChatGPT found it to be wildly contradictory, inaccurate and derivative—and that includes GPT4. Which exactly mirrors my experiences with AI and music those many decades […]
