
Is ChatGPT as dangerous as some say?

ChatGPT provides answers to all sorts of questions by stringing together the most likely words based on text found on the Internet. While it may be useful for writing poems, job descriptions, and term papers, it has been terrible when it comes to researching facts. In my experience it has been wrong so often that it may be less of a threat than it seems. After all, if a source keeps getting things wrong, its credibility is questioned and that source is dismissed or ignored.
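To see why that happens, consider how this kind of text generation works at its simplest. The toy Python sketch below is purely illustrative (the word table is made up and bears no resemblance to ChatGPT's actual scale or architecture), but it shows the basic idea: each next word is picked by statistical likelihood, with no notion of whether the result is true.

```python
import random

# A toy word-prediction table: for each word, the possible next words and
# how often they were seen together. Real systems like ChatGPT learn
# billions of such statistics from Internet text; this hand-written table
# is a hypothetical stand-in.
NEXT_WORDS = {
    "the": {"cat": 4, "court": 2, "airline": 1},
    "cat": {"sat": 5, "ran": 2},
    "sat": {"down": 3, "quietly": 1},
}

def pick_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    candidates = NEXT_WORDS.get(word)
    if not candidates:
        return None  # no known continuation; stop generating
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights)[0]

# Generate a short sentence one likely word at a time.
word = "the"
sentence = [word]
while word is not None and len(sentence) < 6:
    word = pick_next(word)
    if word:
        sentence.append(word)
print(" ".join(sentence))  # e.g. "the cat sat down"
```

The point of the sketch is that plausibility, not truth, drives each word choice, which is why confident-sounding fabrications come so naturally to these systems.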

That was demonstrated in a recent court case described in the New York Times. A lawyer cited a series of cases to bolster his argument on behalf of his client, a passenger suing an airline over a knee injury. When the defense lawyer said he could not find those cases and the judge asked for more specifics, it turned out they had all been made up by ChatGPT.

That has been my experience with ChatGPT as well. While recently researching high-resolution streaming music services, I asked ChatGPT how they dealt with adverse network conditions. Its explanation was entirely made up, bearing no relation to the facts. It would seem the more people use this product, the more we'll learn how unreliable it is. That's certainly what the lawyers in this case learned.

The question is whether ChatGPT can improve or whether its design is inherently flawed because of its methodology. I'm not an expert here, but once the novelty wears off, will we find it to be a curiosity we all got excited about but came to dismiss when it comes to producing accurate research?